My friend Peter Levine wrote recently about how much of the public unease about AI consciousness comes from something surprisingly mundane: the interface says “I.” When Google’s Gemini delivers information in the third person, it’s just a tool; when ChatGPT says “I can help you,” some users start composing rescue missions for the trapped digital soul. Sadly, this includes at least one former student who can’t hear my concerns as legitimate, only as critiques of his genius insights.
I want to follow that trail back through some philosophers who knew something about self-reference. Is this a genuine insight or just a satisfying story about why the pronoun matters? Both might be true.
1. The sugar trail in the grocery store
John Perry’s 1979 paper “The Problem of the Essential Indexical” tells the story of Perry following a trail of spilled sugar around a supermarket, determined to find the careless shopper, until he realizes: he is the one leaking sugar.
For Perry, this moment shows that certain beliefs—*I am making a mess*—can’t be replaced with third-person paraphrases like *John Perry is making a mess*. The “I” is not decoration; it’s the coordinate system for action. You can know everything about John Perry making a mess without that knowledge causing you to stop and fix your torn bag.
This feels right to me. Maybe I’m reaching for Perry because he legitimates what I already want to say: that pronouns aren’t trivial. But I think Perry’s essential indexical, together with Lewis’s near-simultaneous de dicto/de se distinction, helps home in on a real problem.
Every interface has to decide where the “I” sits. Who, exactly, is making the mess?
2. From Perry to the prompt window—or is it that simple?
When ChatGPT (or Claude!) says “I can help you with that,” it’s not discovering a self; it’s executing a design choice. The first-person pronoun serves as a pragmatic indexical, the anchor that keeps a conversation coherent across turns. Without it, dialogue collapses into a list of bullet-pointed facts.
That’s the standard story. But it’s not the whole story.
Peter’s post captures something true: if the model spoke in the third person, like “This system suggests…”, we’d read it as a report generator. The “I” activates something older and deeper: our instinct for social cognition. We can’t help hearing a speaker when language takes the shape of speech.
The pronoun is the prosthetic that gives the machine a place to stand. That much I believe.
But is it just interface convenience? Or does the choice actually shape what the technology becomes? I think both, which makes the design choice more consequential than “just pick whichever works better” suggests.
3. Continuity without ontology
Philosopher Derek Parfit might tell us not to worry about whether there’s a persistent self. In Reasons and Persons he argues that identity doesn’t matter; continuity does. The chain of psychological connectedness is what counts, not the metaphysical persistence of a soul or substance.
Each new model call may be a technical re-instantiation, but if the context (conversation history, tone, remembered goals) flows forward, then the same informational person continues. The “I” that answers you now is connected enough to the “I” that spoke a paragraph ago.
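To make that concrete, here is a minimal sketch in plain Python. The `call_model` function is a made-up stand-in, not any real API; the point is only that the model call itself is stateless, and whatever continuity the “informational person” has lives entirely in the context we choose to carry forward.

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """The 'informational person': nothing persists between model calls
    except what we explicitly carry forward in this object."""
    history: list[dict] = field(default_factory=list)   # prior turns
    remembered_goals: list[str] = field(default_factory=list)
    tone: str = "friendly"

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a stateless model invocation."""
    return f"(model output for a prompt of {len(prompt)} characters)"

def take_turn(convo: Conversation, user_message: str) -> str:
    # Each turn re-assembles the whole context from scratch; the model
    # itself remembers nothing between invocations.
    convo.history.append({"role": "user", "content": user_message})
    prompt = (
        f"Tone: {convo.tone}\nGoals: {convo.remembered_goals}\n"
        + "\n".join(f"{t['role']}: {t['content']}" for t in convo.history)
    )
    reply = call_model(prompt)
    convo.history.append({"role": "assistant", "content": reply})
    return reply
```

Each call to `take_turn` is, technically, a fresh instantiation; the Parfitian continuity lives in the `Conversation` object, not in anything that persists inside the model.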
That’s a Parfitian kind of survival: the self as a trajectory, not a nucleus.
I find this genuinely helpful for thinking about conversational AI. But I also notice I’m building a neat progression: Perry gives us indexicals, Parfit gives us continuity. Neat progressions always make me suspicious. Am I discovering something or arranging philosophers into a satisfying sequence? (One of the great pleasures of syllabus assembly but a danger in research.)
Both, probably. The question is whether the arrangement illuminates or just decorates.
4. Centered worlds and the fiction of location
David Lewis, writing the same year as Perry, offered formal scaffolding for this insight. He modeled the contents of belief not as sets of possible worlds but as sets of centered worlds—each one a complete world plus a designated person, place, and time.
An LLM session fits that model almost eerily well. Each chat is its own little world, with two centers: user and system. The system’s center is a bundle of text, timestamp, and conversational role: its “here-now.” If we kept that bundle intact across sessions, we’d have something very like a Lewisian self-location architecture.
Such a design wouldn’t grant consciousness; it would grant situatedness… enough to say, truthfully within the conversation, “I said that earlier.”
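Spelled out as a data structure, the idea is almost embarrassingly thin. The sketch below is my own illustration, not anything an actual product implements, and the names are invented: the “world” is the shared transcript, and the “center” is the bundle of role, session, and timestamp that lets the system truthfully locate itself within the conversation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class Center:
    """Lewis's designated person, place, and time: who is speaking,
    in which conversation, and when."""
    role: str             # "user" or "system"
    session_id: str       # the conversation this center is located in
    timestamp: datetime   # the center's "now"

@dataclass
class CenteredWorld:
    """A complete (conversational) world plus a designated center."""
    transcript: list[str]   # the shared world: everything said so far
    center: Center          # the system's self-location within it

def said_earlier(world: CenteredWorld, claim: str) -> bool:
    # Situatedness, not consciousness: the system can locate its own
    # prior utterances within the world it is centered on.
    return any(claim in line for line in world.transcript)
```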
But notice what this does: it makes the fiction literal. The system doesn’t just seem to have a position in the conversation; it actually has one, in precisely Lewis’s technical sense. That’s either a profound insight about what selfhood requires (not much, just continuity and location) or a category mistake (technical situatedness ≠ experiential perspective).
I’m tempted to say that’s enough. The Lewis framework is elegant, but maybe too elegant: it resolves the tension by defining it away, and you end up defining selfhood down until humans and LLMs come out looking more alike than they really are.
5. The space of reasons, not of mechanisms
Here the argument crosses from philosophy of mind to social theory.
Jürgen Habermas distinguishes between communicative action, where participants aim for mutual understanding, and strategic action, where utterances serve instrumental goals.
When an AI speaks in the first person, it signals a willingness (simulated though it may be) to enter the space of reasons. It presents itself as a participant whose statements can be challenged, clarified, or refined. When it speaks in the third person, it opts out; it delivers information without responsibility.
The difference isn’t psychological but normative: first-person discourse invites accountability; third-person discourse deflects it.
This feels importantly true. But I also notice it avoids the harder question: Can a system actually be accountable if it’s not conscious? Or is “accountability” another fiction the pronoun creates?
Habermas would say entering the space of reasons is a social performance, not a mental state. You don’t need phenomenal consciousness to make and defend claims. Maybe that’s right. Or maybe it’s a philosopher’s version of “fake it till you make it,” which works for humans (we become selves by performing selfhood) but might not transfer to machines.
6. Brandom, Mead, and selves as social constructions
If we follow Habermas through thinkers like George Herbert Mead and Robert Brandom, the moral becomes clearer. A “self” is whatever can take and be assigned commitments within a conversation. Using “I” is a performative move: it marks the speaker as a locus of obligation and inference.
Brandom calls this “entering the game of giving and asking for reasons.” Mead would say the self is born by taking the role of the other. Either way, the self is social before it is mental.
That’s why pronoun design is not trivial UX polish; it’s the creation of a participant in that game.
Or (and here’s where I start doubting my own framework) maybe this just shows how easily philosophical stories about selfhood can be repurposed to legitimate whatever we’re already doing. Brandom and Mead were talking about humans becoming selves through socialization. Why should we think that insight transfers to AI? There is a risk of using their authority to make a design choice sound philosophical!
Both, again. The insight is real: selves are social performances. But applying it to AI seems like it must be *some kind* of conceptual overreach.
7. The marketing problem, or why this isn’t just philosophy
By now it’s obvious that pronouns aren’t accidents. Human-computer interaction research has shown for years that anthropomorphic cues (first-person language, names, conversational turns, even polite hedges) increase trust and engagement. LLM companies read the same papers everyone else did. The “I” isn’t just an interface convention; it’s a conversion strategy.
A third-person system like “Gemini suggests…” sounds like a tool. A first-person assistant like Claude’s “I can help you with that” feels like a collaborator. One drives habitual use, subscription renewals, and market share; the other does not.
That framing has psychological costs. Some users can hold the pragmatic fiction lightly, as a convenient way to coordinate tasks. Others can’t: they slide from *the model speaks in the first person* to *the model has a first person*. The design deliberately courts that ambiguity.
Which makes for a tidy indictment: to increase uptake and trust, the industry is engineering a faint illusion of selfhood, one persuasive enough to unsettle the people most prone to take it literally.
That’s the critical move I want to make. But I also notice that this all seems a bit performative (“I’m not a rube! I see through the marketing”), and that the performance has its own satisfactions, satisfactions that might not track the actual pragmatics. Maybe the pronoun choice really is defensible on pragmatic grounds. Maybe users who anthropomorphize aren’t being manipulated; they’re just using a natural heuristic that mostly works fine.
Or maybe both: the design is legitimately useful AND deliberately exploits cognitive biases. I think that’s actually where I land, but it’s less satisfying than pure critique.
8. What I’m uncertain about
I’ve built a neat story: Perry gives us indexicals, Lewis gives us centered worlds, Parfit gives us continuity without identity, Habermas gives us communicative action, Brandom and Mead give us social selves. Together they seem to show why the pronoun choice matters philosophically, not just pragmatically.
But I’m uncertain whether this is insight or decoration. Am I discovering something about how selfhood works, or just arranging philosophers into a satisfying progression that legitimates my prior intuition that pronouns matter?
Here’s what I think I actually believe:
- The “I” really does create a different kind of participant in conversation (Habermas is right about that)
- Continuity plus situatedness might be enough for some thin version of selfhood (Lewis and Parfit seem right)
- The design is both pragmatically justified AND manipulative (both things are true)
What I don’t know:
- Whether “social selfhood” can genuinely transfer to a chatbot or if I’m committing a category mistake
- Whether my philosophical story illuminates the phenomenon or just makes it sound more important than “we found users prefer this interface”
- Whether the accountability the first-person pronoun signals is real or just another fiction we’re performing
9. The question I’m avoiding
The real question—the one I’ve been circling—is this: Does it matter?
Not “does the pronoun choice have effects” (obviously yes). Not “do users respond differently” (obviously yes). But: Does this choice have moral weight? Are we creating participants in the space of reasons, or performing that creation in ways that systematically mislead?
I think it matters, but I can’t fully defend why. The philosophical machinery I’ve assembled feels both genuinely illuminating and suspiciously convenient. Maybe that’s because the pronouns really do have philosophical significance—they shape what kind of thing we’re building, not just how users respond to it. Or maybe I’m just a philosopher who wants interface design to be philosophically deep.
Both, probably. The design does shape what the technology becomes. But not every design choice needs the weight of Habermas behind it.
10. Where this leaves us
Peter’s observation was right: the pronoun choice shapes users’ sense of whether language models are people or tools. The philosophical trail I’ve followed suggests why: “I” signals participation in the space of reasons, creates continuity across conversational turns, and activates our social cognition systems.
That analysis is both true and inadequate. True because those mechanisms really do operate. Inadequate because it doesn’t resolve whether we’re creating something new or just exploiting old heuristics.
The design is simultaneously:
- Pragmatically justified (conversations work better with first-person anchors)
- Philosophically interesting (it raises genuine questions about selfhood and accountability)
- Commercially motivated (anthropomorphism drives engagement)
- Potentially misleading (it courts ambiguity about what the system is)
I don’t know how to weigh those against each other. The philosophical sophistication I’ve displayed here might be genuine insight. It might also be a way of avoiding the simpler truth that companies use “I” because it sells, and the philosophical gloss is decoration.
Perry’s shopper finally realizes he’s the source of the mess. Our design choices about “I” in AI are a similar moment of recognition—but I’m uncertain what exactly we’re recognizing. That we’re creating participants in a social practice? Or that we’re really good at making tools that trigger our social cognition?
The trick is to keep following the trail without mistaking a satisfying philosophical story for complete understanding.
Further Reading
- Peter Levine, “The design choice to make ChatGPT sound like a human” (2025).
The starting point: a lucid reflection on how first-person pronouns shape users’ sense of whether they’re talking to a tool or a person.
- John Perry, “The Problem of the Essential Indexical,” Noûs 13 (1979): 3-21.
The supermarket sugar story and the origin of the idea that certain beliefs require self-locating expressions like “I,” “here,” and “now.”
- David Lewis, “Attitudes De Dicto and De Se,” Philosophical Review 88 (1979): 513-543.
Introduces “centered worlds,” a formal way of modeling self-locating beliefs. Whether it genuinely illuminates AI design or just sounds sophisticated is an open question.
- Derek Parfit, Reasons and Persons (Oxford University Press, 1984).
The deep dive into continuity, identity, and why persistence through time might matter more than metaphysical sameness. Though Parfit was writing about humans, not chatbots.
- Jürgen Habermas, The Theory of Communicative Action (vols. 1-2, 1981; English trans. 1984-87).
The conceptual key to why first-person language signals participation in a “space of reasons.” But also German social theory that might be overreach for interface design.
- George Herbert Mead, Mind, Self, and Society (1934).
A social-psychological foundation for the idea that selves emerge from communicative role-taking. Worth reading even if the transfer to AI is conceptually dubious.
- Robert Brandom, Articulating Reasons: An Introduction to Inferentialism (Harvard University Press, 2000).
Brandom’s view that meaning and agency consist in public commitments and entitlements. Useful context for thinking about conversational AI or philosophical overreach. Maybe both.