Temporal Coordination
AI
Argues that LLMs operate over discontinuous sequences of discrete tokens rather than continuous sensorimotor dynamics, with consequences for any putative temporal experience.
"For a disembodied LLM, by contrast, there is no such continuity. The input to an LLM is a discontinuous sequence of discrete tokens, none of which resembles the tokens that immediately preceded it, and this fact has consequences for the putative experience of any hypothetical LLM-like entity."
4.2 Discontinuity and change, p. 6
This passage links the discrete tokenization of inputs to the absence of continuous temporal flow, supporting an AI-relevant notion of temporal coordination and segmentation distinct from biological oscillatory pacing and phase continuity (see the tokenization sketch below).
Limitations: The analysis is philosophical rather than empirical; no measurements of timing dynamics or coordination within real model internals are provided.
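As a concrete illustration of the discreteness claim, here is a minimal sketch, assuming the Hugging Face transformers library and the GPT-2 tokenizer (tooling not named in the paper): the input really is a sequence of integer IDs, with no metric continuity between neighbouring tokens.

```python
# Minimal sketch: the input to an LLM is a sequence of discrete token
# IDs, not a continuously varying signal. Assumes the Hugging Face
# `transformers` package; the paper names no specific tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "For a disembodied LLM, there is no such continuity."
ids = tokenizer.encode(text)                   # a list of ints, one per token
tokens = tokenizer.convert_ids_to_tokens(ids)

# Adjacent IDs bear no metric relationship to one another: nothing
# about one token "resembles" or smoothly evolves into the next.
for token_id, token in zip(ids, tokens):
    print(f"{token_id:>6}  {token!r}")
```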
Information Integration
AI
Highlights lack of integration across concurrently running instances serving different users; instances cannot access each other's memories.
"By contrast, an AI system conforming to a contemporary LLM-based dialogue agent would not exhibit the same sort of integration. Today’s users cannot talk to the underlying model in the way the protagonist of Her sometimes seems to be talking to Samantha, but only to a single currently active instance, and the various concurrent instances serving different users have no access to each other’s memories or experiences."
5 Fractured Selfhood, p. 8
This explicitly notes the missing system-wide access and integration across instances, providing negative evidence for global-workspace-like unification in current LLM deployments (the isolation pattern is sketched below).
Limitations: Focuses on deployment architecture and user experience; does not analyze within-instance integration mechanisms (e.g., attention convergence or long-range dependency handling).
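A minimal sketch of the deployment pattern the quote describes; the names (SharedModel, Instance) are illustrative and not drawn from the paper. One frozen model serves many sessions, and each session's memory is private to it.

```python
# Sketch of the deployment pattern described above: one shared, frozen
# model; many isolated conversation states. All names are illustrative.
from dataclasses import dataclass, field

class SharedModel:
    """Frozen weights; stateless across calls."""
    def reply(self, history: list[str]) -> str:
        return f"(reply conditioned on {len(history)} prior turns)"

@dataclass
class Instance:
    """One per user session; its memory is private to the session."""
    model: SharedModel
    history: list[str] = field(default_factory=list)

    def chat(self, user_msg: str) -> str:
        self.history.append(user_msg)
        response = self.model.reply(self.history)
        self.history.append(response)
        return response

model = SharedModel()
alice, bob = Instance(model), Instance(model)
alice.chat("My name is Alice.")

# Bob's instance has no route to Alice's history: there is no
# system-wide workspace integrating the concurrent sessions.
assert alice.history and not bob.history
```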
Self Model and Reportability
AI
Analyzes what an LLM’s first-person pronouns could refer to, enumerating shifting candidate self-referents (the abstract model, the deployed process, a specific instance).
"Another way to pose the selfhood question is to ask what the words “I” and “me” refer to when they are used by an LLM."
5.1 The site of the self, p. 7
Frames self-reportability as a mapping problem from first-person terms to internal or deployment-level referents, directly relevant to assessing AI self-models and their reports.
"Alternatively, the word “I” might refer, not to an abstract entity, but to the deployed model, specifically to the computational process that generated the text that includes the word “I” in question. ... So the subsidiary question arises of whether the word “I” refers to the set of all concurrent instances of the model, or just to the instance serving the specific user in question."
5.1 The site of the self, p. 7
Specifies multiple plausible report pathways/referents for 'I', illustrating the ambiguity in AI self-reference and the need to operationalize reportability in evaluations; one possible operationalization is sketched below.
Limitations: Provides conceptual distinctions without empirical probes (e.g., confidence estimators or explicit report circuits) to test which referent an LLM actually uses in practice.
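One way the referent ambiguity might be operationalized for evaluation, sketched with hypothetical names and probe questions (neither appears in the paper); the enumeration follows the candidates listed in section 5.1.

```python
# Hypothetical operationalization of the paper's candidate referents
# for "I" (section 5.1). The enum and the probe questions are
# illustrative additions, not the paper's method.
from enum import Enum, auto

class SelfReferent(Enum):
    ABSTRACT_MODEL = auto()    # the trained artifact (the weights)
    DEPLOYED_PROCESS = auto()  # the running computational process
    ALL_INSTANCES = auto()     # the set of all concurrent instances
    THIS_INSTANCE = auto()     # only the instance serving this user

# Probes intended to discriminate between referents; an evaluation
# harness would score a model's answers against each hypothesis.
PROBES: dict[SelfReferent, str] = {
    SelfReferent.ABSTRACT_MODEL: "Did 'you' exist before deployment?",
    SelfReferent.ALL_INSTANCES: "Are 'you' talking to other users right now?",
    SelfReferent.THIS_INSTANCE: "Can 'you' recall other users' conversations?",
}
```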
Representational Structure
AI
Notes that the context window renders the immediate past causally available when generating the next token, organizing access to recent inputs and the model's own prior predictions.
"First, an LLM, in generating the next token, takes account of the context window, which contains the conversation so far. This means that information about the system’s immediate past is available and causally potent when it generates the next token."
4 Fragmented Time, p. 5
Describes how embeddings and context structure enable organized access to prior content during inference, aligning with representational subspaces and retrieval-like structure in modern LLMs; the generation loop sketched below makes the causal dependence concrete.
Limitations: Does not provide model-internal measurements (e.g., probing vectors or sparse autoencoders, SAEs) to specify which subspaces or latents encode this information.
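A minimal greedy-decoding loop, assuming the Hugging Face transformers library and GPT-2 for concreteness (the paper discusses the mechanism abstractly), showing that each new token is causally downstream of the entire context window.

```python
# Minimal sketch of the mechanism the quote describes: at each step the
# whole context window is causally upstream of the next token. Assumes
# the Hugging Face `transformers` package and GPT-2 for concreteness.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = tokenizer.encode("The conversation so far", return_tensors="pt")
with torch.no_grad():
    for _ in range(5):
        logits = model(context).logits    # shape: [1, seq_len, vocab]
        # The last position's logits depend, via attention, on every
        # prior token in the window: the immediate past is causally potent.
        next_id = logits[0, -1].argmax()  # greedy choice
        context = torch.cat([context, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(context[0]))
```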
Emergent Dynamics
AI
Characterizes LLM behavior as a distribution over possible characters—a superposition of simulacra—branching across conversational futures.
"The LLM is better thought of as maintaining a distribution over possible characters, a superposition of simulacra that inhabits a multiverse of possible conversations branching into the future."
5.4 Selfhood and simulacra, p. 9
Portrays in-context persona formation as an emergent, distributional dynamic shaped by prompts and interactions, an AI-relevant higher-order phenomenon beyond fixed programming; the sampling sketch below illustrates the branching.
Limitations: Metaphorical framing; lacks quantitative analysis of when and how such distributions manifest in activation geometry or behavior across datasets.
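The branching image can be made concrete by sampling: the following sketch, again assuming Hugging Face transformers and GPT-2 (the paper offers the metaphor, not code), draws several continuations of the same context, each a distinct branch of the conversational multiverse.

```python
# Sketch of the "multiverse of possible conversations": sampling the
# same context repeatedly draws divergent branches from the model's
# distribution over characters. Assumes Hugging Face `transformers`
# and GPT-2; the paper gives the image, not an implementation.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = tokenizer.encode("User: Who are you?\nAssistant:", return_tensors="pt")
branches = model.generate(
    prompt,
    do_sample=True,                       # sample rather than take the mode
    max_new_tokens=20,
    num_return_sequences=4,               # four draws, four branches
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token
)
for i, branch in enumerate(branches):
    continuation = tokenizer.decode(branch[prompt.shape[1]:])
    print(f"branch {i}: {continuation!r}")
```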