Representational Structure
Linearly decodable latent-state representations (ball position/velocity) emerge in hidden states and predict primate-like behavior.
"In sum, RNNs that carried explicit (linearly decodable) information about the latent position of the ball behind the occluder (i.e., performed dynamic inference) were able to capture primate behavioral patterns more accurately than those that did not."
Comparing primates and recurrent neural network models, p. 5
This shows that explicit latent-state codes in RNN hidden activity (a representational structure) align with primate behavior, linking AI embeddings and decoders to neural-style population codes relevant to conscious access and inference.
"We observe the emergence of a representation of ball velocity during the visible segment… there was strong and persistent velocity coding at nearly all ball positions throughout the occluded segment… the velocity coding was not simply maintained, but increased during the occluded epoch, suggesting that the representation of ball velocity is more linearly decodable in the recurrent dynamics than in the input-driven dynamics."
Dynamics underlying computations performed by RNNs, p. 6
Persistent, increasingly decodable velocity signals during occlusion indicate structured, accessible internal representations that support ongoing inference, a key bridge between AI embeddings and brain-like latent-state coding in conscious tracking tasks.
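The "linearly decodable" readouts described above can be sketched in a few lines. This is a minimal illustration, not the paper's method: the hidden states here are a synthetic nonlinear mixing of a latent ball trajectory (position and velocity), standing in for trained-RNN activity, and all sizes, scales, and noise levels are assumptions. The decoder itself is the standard recipe: ordinary least squares from hidden activity to the latent variables, evaluated on held-out timesteps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic latent ball state over T timesteps: a smoothly varying
# velocity and the position it integrates to (illustrative stand-in
# for an occluded-ball trajectory; not the paper's task parameters).
T = 600
vel = 0.1 * np.sin(np.arange(T) / 30.0)
pos = np.cumsum(vel)
pos -= pos.mean()
latent = np.column_stack([pos, vel])  # shape (T, 2)

# "Hidden states": a random, mildly nonlinear mixing of the latents
# plus noise, standing in for RNN activity (assumption, not a trained model).
n_units = 64
W = rng.standard_normal((2, n_units))
hidden = np.tanh(latent @ W * 0.1) + 0.01 * rng.standard_normal((T, n_units))

# Linear decoder: least squares from hidden states (plus an intercept
# column) to the latents, fit on the first 500 steps, tested on the rest.
X = np.column_stack([hidden, np.ones(T)])
coef, *_ = np.linalg.lstsq(X[:500], latent[:500], rcond=None)
pred = X[500:] @ coef

# Coefficient of determination per latent dimension on held-out steps.
resid = latent[500:] - pred
r2 = 1 - resid.var(axis=0) / latent[500:].var(axis=0)
print(f"position R^2 = {r2[0]:.3f}, velocity R^2 = {r2[1]:.3f}")
```

High held-out R^2 for both dimensions is what "linearly decodable" means operationally; in the paper's analyses the same style of readout is applied to actual trained-RNN hidden states across visible and occluded segments.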
Limitations: Findings come from RNNs trained on a specific occlusion-tracking task; representational readouts relied on linear decoders and were not directly validated against neural recordings.