Recurrent neural networks with explicit representation of dynamic latent variables can mimic behavioral patterns in a physical inference task

Rishi Rajalingham, Mehrdad Jazayeri · 2022

Evidence (3)
Representational Structure
Linearly decodable latent-state representations (ball position/velocity) emerge in hidden states and predict primate-like behavior.
"In sum, RNNs that carried explicit (linearly decodable) information about the latent position of the ball behind the occluder (i.e., performed dynamic inference) were able to capture primate behavioral patterns more accurately than those that did not."
Comparing primates and recurrent neural network models, p. 5
This shows that explicit latent-state codes in RNN hidden activity (a representational structure) align with primate behavior, linking AI embeddings and decoders to neural-style population codes relevant to conscious access and inference.
"We observe the emergence of a representation of ball velocity during the visible segment… there was strong and persistent velocity coding at nearly all ball positions throughout the occluded segment… the velocity coding was not simply maintained, but increased during the occluded epoch, suggesting that the representation of ball velocity is more linearly decodable in the recurrent dynamics than in the input-driven dynamics."
Dynamics underlying computations performed by RNNs, p. 6
Persistent, increasingly decodable velocity signals during occlusion indicate structured, accessible internal representations that support ongoing inference, a key bridge between AI embeddings and brain-like latent-state coding in conscious tracking tasks.
Limitations: Findings are from trained RNNs on a specific occlusion-tracking task; representational readouts used linear decoders and were not directly validated against neural recordings.
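The decoding analysis summarized above can be sketched as a cross-validated linear readout of a latent variable (e.g., occluded ball position) from hidden-state activity. This is a minimal illustration with simulated data, not the authors' code; all names and parameters are assumptions.

```python
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n_trials, n_units = 200, 50

# Simulated hidden states that linearly encode a latent variable plus noise.
latent = rng.uniform(-1, 1, n_trials)            # e.g., ball position behind occluder
readout = rng.normal(size=n_units)               # hypothetical encoding direction
hidden = np.outer(latent, readout) + 0.1 * rng.normal(size=(n_trials, n_units))

# Fit a linear decoder on half the trials, evaluate on the held-out half.
train, test = slice(0, 100), slice(100, 200)
X = np.hstack([hidden, np.ones((n_trials, 1))])  # append a bias column
w, *_ = lstsq(X[train], latent[train], rcond=None)
pred = X[test] @ w

# Held-out coefficient of determination (R^2): high values mean the latent
# state is "linearly decodable" in the sense used by the paper.
ss_res = np.sum((latent[test] - pred) ** 2)
ss_tot = np.sum((latent[test] - latent[test].mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(r2)
```

A network that "performs dynamic inference" in the paper's sense is one where this decoder succeeds even during the occluded epoch, when no visual input specifies the ball's position.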
Emergent Dynamics
Networks that better match primate behavior exhibit slow, smooth, low-dimensional activity dynamics.
"Networks that exhibited “simple dynamics”—i.e., whose activity representations were lower dimensional, lower speed, and lower curvature – better predicted behavioral patterns of humans (Fig. 4B)."
Dynamics underlying primate-like behavior, p. 6
The emergence of simple, low-dimensional dynamics in the networks that best match primate behavior suggests an emergent dynamical regime that could correspond to brain-wide stable manifolds supporting conscious inference.
Limitations: Metrics of ‘simple dynamics’ are correlational and derived from model activity; no direct measurement of biological criticality/complexity indices in this study.
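The three "simple dynamics" properties quoted above (dimensionality, speed, curvature) can be made concrete with standard trajectory metrics. The formulas below are common choices (participation ratio for dimensionality, mean step size for speed, mean turning angle for curvature), assumed here for illustration rather than taken from the paper.

```python
import numpy as np

def participation_ratio(states):
    """Effective dimensionality: (sum of eigenvalues)^2 / sum of squared eigenvalues."""
    lam = np.linalg.eigvalsh(np.cov(states.T))
    return lam.sum() ** 2 / np.sum(lam ** 2)

def mean_speed(states):
    """Average Euclidean step size along the state trajectory."""
    return np.linalg.norm(np.diff(states, axis=0), axis=1).mean()

def mean_curvature(states):
    """Average angle (radians) between consecutive velocity vectors."""
    v = np.diff(states, axis=0)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    cos = np.clip(np.sum(v[:-1] * v[1:], axis=1), -1.0, 1.0)
    return np.arccos(cos).mean()

# A smooth, low-dimensional trajectory (a circle embedded in 10-D with tiny
# noise) should score "simpler" than an unstructured random walk.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 100)
smooth = np.column_stack([np.cos(t), np.sin(t), 0.001 * rng.normal(size=(100, 8))])
noisy = rng.normal(size=(100, 10))
print(participation_ratio(smooth), participation_ratio(noisy))
print(mean_curvature(smooth), mean_curvature(noisy))
```

Under these metrics, the paper's finding reads as: networks whose hidden trajectories look more like `smooth` than `noisy` better predict primate behavior.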
Causal Control
Changing the training objective to include dynamic inference constraints causally changes internal dynamics and increases primate-consistency.
"RNNs optimized for simple dynamics better matched human behavior than RNNs optimized for task performance alone… However, they failed to capture primate behavior as well as models optimized for dynamic inference."
Dynamics underlying primate-like behavior, p. 6
Altering the objective (adding dynamic-inference constraints) changes both internal computations and behavioral similarity to primates, demonstrating causal control over model computations relevant to consciousness-like inference capacities.
"To summarize, we constructed RNN models that varied… and were differently optimized (loss_weight_type: no_sim, vis_sim, all_sim, or all_sim2)."
RNN optimization, p. 10
The controlled objective manipulations (no_sim vs. vis_sim vs. all_sim/all_sim2) enable causal tests of how training constraints shape internal mechanisms and behavior, a standard AI manipulation paralleling the logic of brain interventions.
Limitations: Objective interventions are in silico; while strongly suggestive, they are not direct neural causal manipulations (e.g., optogenetics), and generalization beyond this task/domain is untested here.
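The objective manipulation can be sketched as a task loss optionally augmented with latent-state ("simulation") error terms on the visible and/or occluded epochs. The condition names follow the paper's loss_weight_type labels, but the weight values are illustrative assumptions; all_sim2 is omitted because its weighting is not specified in the excerpt above.

```python
# Hypothetical weights (w_visible, w_occluded) per training condition.
SIM_WEIGHTS = {
    "no_sim":  (0.0, 0.0),   # task-performance loss only
    "vis_sim": (1.0, 0.0),   # latent-state loss on the visible epoch only
    "all_sim": (1.0, 1.0),   # latent-state loss on visible + occluded epochs
}

def total_loss(task_err, vis_err, occ_err, loss_weight_type):
    """Composite objective: task error plus weighted latent-state errors."""
    w_vis, w_occ = SIM_WEIGHTS[loss_weight_type]
    return task_err + w_vis * vis_err + w_occ * occ_err

# Identical errors, different objectives: the dynamic-inference constraint
# only contributes to the gradient under vis_sim / all_sim.
print(total_loss(0.2, 0.1, 0.3, "no_sim"))
print(total_loss(0.2, 0.1, 0.3, "all_sim"))
```

The causal claim rests on holding architecture and data fixed while varying only this weighting, so behavioral differences across conditions can be attributed to the objective.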