Information Integration
Derives limits on integrated information: both classical memory systems and quantum states can integrate only up to sharp bounds; brain-scale Hopfield networks reach only ~37 bits, and quantum states are capped at ~0.25 bits, motivating additional principles beyond integration alone.
"The integrated information of a Hopfield network is even lower. For a Hopfield network of n neurons with Hebbian learning, the total number of attractors is bounded by 0.14n [26], so the maximum information capacity is merely S ≈ log2 0.14n ≈ log2 n ≈ 37 bits for n = 1011 neurons. Even in the most favorable case where these bits are maximally integrated, our 1011 neurons thus provide a measly Φ ≈ 37 bits of integrated information... This leaves us with an integration paradox: why does the information content of our conscious experience appear to be vastly larger than 37 bits?"
G. The integration paradox, p. 11
Shows that biologically plausible associative-memory networks have severely limited integrated information, framing the integration paradox and why additional mechanisms may be required for unified conscious content in brains or AI systems.
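As a quick arithmetic check of the quoted figures, here is a short Python sketch using only the 0.14n attractor bound and n = 10^11 taken from the quote:

```python
# Back-of-the-envelope check of the Hopfield capacity quoted above,
# using only the 0.14n attractor bound and n = 10^11 neurons.
import math

n = 1e11                               # neurons, order of magnitude cited for the brain
attractors = 0.14 * n                  # Hebbian attractor bound quoted from the paper
capacity_bits = math.log2(attractors)  # S ~ log_2(0.14 n)

print(f"attractors     ~ {attractors:.1e}")          # ~1.4e10
print(f"capacity       ~ {capacity_bits:.1f} bits")  # ~33.7 bits
print(f"log_2(n) alone ~ {math.log2(n):.1f} bits")   # ~36.5, the ~37 bits in the quote
```

Whether one reads the bound as log_2(0.14n) or the rounder log_2(n), the capacity is a few tens of bits, which is precisely the mismatch the integration paradox points at.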
"In summary, no matter how large a quantum system we create, its state can never contain more than about a quarter of a bit of integrated information! This exacerbates the integration paradox from Section II G... Let us therefore begin exploring the third resolution: that our definition of integrated information must be modified or supplemented by at least one additional principle."
D. The quantum integration paradox, p. 14
Demonstrates a fundamental quantum bound (~0.25 bits) on state-based integration, implying that unified access in conscious systems likely depends on additional structure or dynamics beyond static integration alone.
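To make the role of factorization concrete, here is a rough Python sketch. It is not the paper's construction and does not reproduce the 0.25-bit bound; the noisy Bell state, the diagonalizing unitary, and the random search over refactorizations (modeled as global unitaries applied before a fixed cut) are illustrative assumptions. It only shows that the mutual information I = S(ρ_1) + S(ρ_2) − S(ρ) across a cut depends strongly on how the Hilbert space is factored into subsystems.

```python
# Illustrative sketch only: mutual information across a fixed two-qubit cut,
# before and after re-factorizing the Hilbert space (modeled as a global
# unitary acting on the state). The state and search strategy are assumptions.
import numpy as np

def entropy_bits(rho):
    """von Neumann entropy in bits, ignoring numerically zero eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep):
    """Reduced state of qubit `keep` (0 or 1) from a 4x4 density matrix."""
    r = rho.reshape(2, 2, 2, 2)  # indices (a, b, a', b')
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def mutual_information(rho):
    return (entropy_bits(partial_trace(rho, 0))
            + entropy_bits(partial_trace(rho, 1))
            - entropy_bits(rho))

def random_unitary(dim, rng):
    """Haar-distributed unitary via QR of a complex Gaussian matrix."""
    z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))

rng = np.random.default_rng(0)
bell = np.zeros(4)
bell[[0, 3]] = 1 / np.sqrt(2)
rho = 0.9 * np.outer(bell, bell) + 0.1 * np.eye(4) / 4   # assumed noisy Bell state

# Candidate refactorizations: identity, the unitary that diagonalizes rho,
# and a batch of Haar-random unitaries.
U_diag = np.linalg.eigh(rho)[1].conj().T
candidates = [np.eye(4), U_diag] + [random_unitary(4, rng) for _ in range(2000)]
best = min(mutual_information(U @ rho @ U.conj().T) for U in candidates)

print(f"I across the original cut:         {mutual_information(rho):.3f} bits")
print(f"min over sampled refactorizations: {best:.3f} bits")
```

The paper's analytic argument sharpens this observation into the universal ~0.25-bit cap quoted above.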
Limitations: The analysis is theoretical and depends on a particular Φ definition and partitioning scheme; it is not directly validated against biological data or modern AI models.
Emergent Dynamics
Defines autonomy A via two timescales (a dynamical timescale for internal evolution and an independence timescale for entropy exchange with the environment) and shows, in a constructed class of models, that linear-entropy growth is suppressed while internal dynamics scales up, yielding autonomy that grows exponentially with system size through a 'diagonal-sliding' mechanism tied to quantum Darwinism.
"Figure 14 shows the linear entropy after one orbit, Slin1(T), as a function of the number of qubits b in our subsystem... the figure shows that Slin1(T) decreases exponentially with system size, asymptotically falling as 2−4b as b → ∞. Let us define the dynamical timescale τdyn = ~/δH and the independence timescale τind = [S̈lin1(0)]−1/2."
D. The exponential growth of autonomy with system size, p. 24
By separating the timescales for internal dynamics and environment exchange, the model yields autonomy that scales strongly with size, offering a route for large-scale systems to self-organize persistent internal dynamics, a hallmark of emergent dynamics relevant to conscious processing.
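As a concrete toy model of the two timescales in the quote (an assumed setup, not the paper's Hamiltonian), the Python sketch below couples one system qubit weakly to a few environment qubits, evolves the joint pure state, extracts τ_ind from the initial curvature of the subsystem's linear entropy S_lin(t) = 1 − tr(ρ_1^2), and takes τ_dyn = ħ/δH with δH read as the system's energy spread; the 2^{-4b} scaling itself is not reproduced here.

```python
# Assumed toy model (not the paper's): one system qubit with H_sys = (omega/2) sigma_z,
# weakly coupled to n_env environment qubits via sigma_z (x) sigma_x terms.
# tau_ind comes from the initial curvature of the subsystem's linear entropy,
# tau_dyn = hbar/deltaH with deltaH the system's energy spread (hbar = 1 here).
import numpy as np
from scipy.linalg import expm
from functools import reduce

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def kron_all(ops):
    return reduce(np.kron, ops)

n_env, omega, g = 3, 1.0, 0.01   # weak coupling -> slow entanglement with the environment

H_sys = omega / 2 * kron_all([sz] + [I2] * n_env)
H_int = sum(g * kron_all([sz] + [sx if k == j else I2 for k in range(n_env)])
            for j in range(n_env))
H = H_sys + H_int

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # system starts in |+>
ket0 = np.array([1.0, 0.0])                # each environment qubit starts in |0>
psi0 = kron_all([plus] + [ket0] * n_env)

def linear_entropy(psi):
    """S_lin = 1 - tr(rho_1^2) for the first qubit of a joint pure state."""
    m = psi.reshape(2, 2 ** n_env)
    rho1 = m @ m.conj().T
    return float(1.0 - np.real(np.trace(rho1 @ rho1)))

# tau_ind from the early-time behavior S_lin(t) ~ 0.5 * S_lin''(0) * t^2
dt = 0.01
sdd0 = 2.0 * linear_entropy(expm(-1j * H * dt) @ psi0) / dt ** 2
tau_ind = sdd0 ** -0.5

# tau_dyn = hbar / deltaH, with deltaH the energy spread of |+> under H_sys
h1 = omega / 2 * sz
delta_h = np.sqrt(plus @ (h1 @ h1) @ plus - (plus @ h1 @ plus) ** 2)
tau_dyn = 1.0 / delta_h

print(f"tau_dyn ~ {tau_dyn:.2f}   tau_ind ~ {tau_ind:.2f}   ratio ~ {tau_ind / tau_dyn:.1f}")
```

With these toy parameters the ratio τ_ind/τ_dyn comes out well above one; the paper's claim is that in its constructed models the corresponding autonomy grows exponentially with subsystem size.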
Limitations: Results are based on specific toy Hamiltonians and initial states; the autonomy scaling and 'diagonal-sliding' mechanism are theoretical and not directly mapped onto neural or AI architectures.
Representational Structure
Operationalizes integrated information Φ via the 'cruelest cut' (the partition that minimizes mutual information) and frames object perception in terms of robustness arising from integration within parts and independence between them; formalizes mutual information as I = S(ρ_1) + S(ρ_2) − S(ρ) and illustrates it with a physical hierarchy example.
"If the interaction energy H3 were so small that we could neglect it altogether... any thermal state would be factorizable: ρ ∝ e−H/kT = e−H1/kT e−H2/kT = ρ1ρ2. In this case, the mutual information I ≡ S(ρ1) + S(ρ2) − S(ρ) vanishes..."
B. Integration and mutual information, p. 6
Provides a clear formal criterion for separability and zero integration in terms of mutual information, grounding the representational structure of 'parts' versus 'whole' in a precise information-theoretic framework relevant to both brain and AI representations.
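A small numerical check of this criterion, for an assumed pair of spin-1/2 systems: with the interaction H_3 switched off the thermal state factorizes and I vanishes, while switching H_3 on makes I strictly positive. The specific Hamiltonians, coupling strength, and temperature below are illustrative choices, not taken from the paper.

```python
# Check of the separability criterion in the quote for an assumed toy model:
# two spin-1/2 systems with H = H_1 + H_2 + g*H_3, where H_3 is the interaction.
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def thermal_state(H, beta=1.0):
    """rho proportional to exp(-H/kT), normalized, with beta = 1/kT."""
    rho = expm(-beta * H)
    return rho / np.trace(rho)

def entropy_bits(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-np.sum(evals * np.log2(evals)))

def partial_trace(rho, keep):
    """Reduced state of spin `keep` (0 or 1) from a 4x4 density matrix."""
    r = rho.reshape(2, 2, 2, 2)
    return np.trace(r, axis1=1, axis2=3) if keep == 0 else np.trace(r, axis1=0, axis2=2)

def mutual_information(rho):
    return (entropy_bits(partial_trace(rho, 0))
            + entropy_bits(partial_trace(rho, 1))
            - entropy_bits(rho))

H1 = np.kron(sz, I2)   # subsystem 1 alone
H2 = np.kron(I2, sz)   # subsystem 2 alone
H3 = np.kron(sx, sx)   # interaction term

for g in (0.0, 0.5):
    rho = thermal_state(H1 + H2 + g * H3)
    print(f"g = {g}: I = {mutual_information(rho):.4f} bits")
```

The g = 0 line reproduces the quoted factorization (I = 0 up to rounding); any nonzero g yields positive mutual information, i.e. nonzero integration across the cut.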
Figures
FIG. 1 (p. 6): Illustrates hierarchical object structure as stronger coupling within parts than between them, linking integration and independence to how representations are organized in complex systems.
Limitations: While the formalism is precise, it abstracts away from biological circuitry and AI training details; robustness and cuts are defined at the level of Hamiltonians and density matrices rather than measured neural/ANN activity.