How do we know whether fish are happy? How do we know whether we are? (Zen, Aristotelian, and Taoist discussions)

When you watch fish swimming around in very cold water, they look fine. Human beings have a protein, TRPM8, that reacts to cold and affects our nervous system, causing discomfort or even pain when the temperature goes down. But fish do not have any TRPM8 (Yong p. 138). Thus we can infer that fish do not sense cold in the way we do.

This does not mean that we know what cold is really like, while fish do not. Nor does it mean that our pain is nothing real, as if we can make it go away by disbelieving it. Nor does it mean that we know what it feels like to be a fish. But we can perceive a difference between species.

Long before anyone knew about proteins, the behavioral difference between us and fish was obvious enough that it served as an example for several thinkers who asked whether experiences like pleasure and suffering are subjective. More deeply, they asked what happiness is.

Japanese Zen Buddhism uses the term kyogai. Often translated as “consciousness,” it literally means “boundary” or “bounded place,” deriving originally from the Sanskrit word visayah, in the sense of a pasture that has a boundary. The Buddhist Abbot Mumon Yamada (1900-1988) taught:

This thing called kyogai is an individual thing. … Only another fish can understand the kyogai of a fish. In this cold weather, perhaps you are feeling sorry for the fish, poor thing, for it has to live in the freezing water. But don’t make the mistake of thinking it would be better off if you put it in warm water; that would kill it. You are a human and there is no way you can understand the kyogai of a fish.

I think the upshot here is humility: if things seem and feel very different to creatures that have different senses, we cannot really know how things are. We should be compassionate, but that is harder than it may at first appear because it requires knowing what another feels. It would not be compassionate to move carp to a warmer pond. Our humility must temper even our compassion.

Aristotle wants to distinguish wisdom, which is knowledge of objective truths, from practical wisdom or phronesis, which allows us to act well. For example, “straight” (using the term from geometry) always means the same thing. The line that takes the shortest distance between two points is straight, regardless of whether any creature sees it as such, or sees it at all. In fact, a line would be straight even if there were no sentient creatures. Hence geometry is a part of wisdom.

However, says Aristotle, different things are healthy and good for people and for fish, and human phronesis involves doing the healthy thing for us, not for them. The “lower animals” also have practical wisdom because they also know what to do. If we try to convince ourselves that our phronesis is wisdom because we are higher than fish, we are foolish because there are things far more divine than we are (NE 1143a).

The upshot, for Aristotle, is that each creature has its own nature, and the proper definition of happiness is acting according to that nature. This means that a fish is happy if it swims around in the cold, not because that behavior feels good to it, but because happiness is accordance with nature. One distinguishing feature of human beings is that we can also know wisdom, or glimpses of it, by studying things higher than ourselves. Thus, for Aristotle, observing the behavior of fish does not really encourage humility. It directs us to identify our proper nature and its place in the cosmos as a whole.

Now here is a passage from Zhuangzi:

Zhuangzi and Huìzi wandered along the bridge over the Hao river. Zhuangzi said, ‘The minnows swim about so freely and easily. This is the happiness of fish’.

Huìzi said, ‘You’re not a fish. How do you know the happiness of fish?’

Zhuangzi said, ‘You’re not me. How do you know I don’t know the happiness of fish?’

Huìzi said, ‘I’m not you, so indeed I don’t know about you. You’re indeed not a fish, so that completes the case for your not knowing the happiness of fish’.

Zhuangzi said, ‘Let’s go back to where we started. When you said, “How do you know the happiness of fish”, you asked me about it already knowing that I knew it. I knew it over the Hao river’. (17/87–91)

I have virtually no knowledge of Taoism or its context, so it is risky for me to venture an interpretation. But I think the idea here is that neither of the men in the story can know the other, let alone the fish, and therefore all knowledge (including of one’s self) is illusory. However, Zhuangzi was right in the first place. “This” was the happiness of fish. He could not know its content or how happiness would feel to a fish, only that because fish were being fish, they were happy.


Ed Yong, An Immense World: How Animal Senses Reveal the Hidden Realms Around Us (Penguin Random House, 2022); Yamada as cited in Victor Sogen Hori, “Koan and Kensho in the Rinzai Zen Curriculum,” in The Koan: Texts and Contexts in Zen Buddhism (2000); Zhuangzi, The Complete Writings, translated by Chris Fraser (Oxford World’s Classics), p. 200. I translated Aristotle from the 1894 Clarendon edition on https://scaife.perseus.org/, but I have paraphrased here because the literal text is thorny. See also: some basics; Verdant mountains usually walk

The Problem with Replacing Theology with Psychology

I am no scholar of Elizabeth Anscombe, but I really enjoy her 1958 essay “Modern Moral Philosophy.” There’s so much rich provocation in her thesis that:

…it is not profitable for us at present to do moral philosophy; that should be laid aside at any rate until we have an adequate philosophy of psychology, in which we are conspicuously lacking.

— “Modern Moral Philosophy,” Philosophy 33:124, January 1958

What makes this such a great provocation is that it tees up one of my own interests, which she treats as a second thesis but really seems logically to precede the first: we ought to jettison the language of obligation, duty, and “ought” from our moral lives, because they have a metaphysical and theological root that we don’t really share, and provide instead a psychological foundation for morality. (For a good accessible introduction to Anscombe’s argument, see this overview.)

It does seem true that there is a great deal of baggage in the ways that we think about moral life that is a holdover from a medieval Christian worldview. If we are no longer entirely devoted to that worldview then we need to provide a new framework. In large part the debate about free will and determinism is still infected by anxieties from Christian theology about predestination, sin, and salvation. The concepts of “ought” and “may” are imbued with a sense of a “moral law” set down by a “moral sovereign”: a world-authoring deity who has the authority to command us. At the same time, many other cultures without a Christian worldview nonetheless seem to have a concept of obligation or duty.

Modern secular moral frameworks can at least superficially look like they require a divine juridical concept of a sovereign legislator and judge with the power to punish those who transgress His law with tortures and confinements. The divine throne is empty, but it holds the structure together. That image deserves a second look: an entire civilization organized around an absent authority, unable to either fill the seat or dismantle it. What goes in its place varies. The contractarian just-so story is that in a democracy every citizen is a co-legislator and co-executive guided by his or her own reason and moral common sense. But of course, some views predominate, and one candidate for replacing the entire worldview of Christian theology is psychology, the scientific study of the human mind.

Anglo-American secular moral philosophers are also deeply Christian, and Western, in ways that become clearer when you study Chinese, Indian, African, and ancient Greek philosophy. Because the standard way to ask the questions of moral philosophy is “What should I do?”, the answers tend to focus on the intentions behind your action or the consequences of it. But then they strip out the eternal soul and its afterlife, they strip out the omniscient solution to Plato’s invisibility ring (the ring of Gyges), they remove the easy Thomistic mapping between natural law, moral law, and human law, and they leave us wondering if our own minds can even be trusted to correctly report “what we should do.” The “law conception of ethics” doesn’t work so well when you get rid of the legislators, the judge and jury, and the punishments too. “Conscience” and “guilt” seem weak without all that: the Western moral framework has replaced God the loving and vengeful Father with Jiminy Cricket, a cartoon character from a Disney movie about a puppet that comes to life.

The Standard Alternatives Don’t Work

Anscombe takes these problems up in a kind of speedy frustration, finding each one wanting for our current era. Her particular disdain for Kant’s account of maxims is I think especially important: Kantian morality is almost incomprehensible once you realize that as “strangers to ourselves” we very rarely know what we are doing, and so we very rarely can be sure of what principle we are acting upon, and then know further whether it could coherently be universalized. The secular modern thinker is left asking what it would mean to have a moral version of mens rea when we also know that we are psychologically prone to self-justification, backdating judgments, and ignorance of our own true intentions.

But Anscombe’s retreat to virtue is also instructive, given what we have learned since 1958 about the weaknesses of personality psychology and the dominant nature of context and circumstance over persistent character traits. We know that few people are particularly predictable over time and circumstances, and that often the most decisive predictor of our behavior is our environment, not our carefully cultivated goodness.

The Aristotelian conception of character and virtue seemed to suggest that it was not something that everyone could aspire to: that through bad luck, immoderation, cowardice, and bad judgment we would mostly all fail to find the right role models, practice hard, overcome adversity, and choose the right goals for ourselves. This has often seemed unfair to people raised on democratic and egalitarian values, and so modern American stories about “character” and “grit” and “resilience” all ignore luck and the contextual factors that make the cultivation of character in Aristotle’s sense possible. It’s not remarkable when you think about it, but the major virtue theorists have tended to be aristocrats, and they have tended to be focused on how the wealthy and powerful can teach their children the things they need to know to wield inherited privilege sustainably. (This seems as true of Confucius as it does of Aristotle.)

Community, Institutions, and the Question That Won’t Go Away

I choose to derive a different lesson from the psychological research: community matters. Family matters. Institutions matter. We know from all sorts of histories and psychological research that if those groups are cruel, or racist, or genocidal, then we are likely to be cruel, racist, and genocidal too. So getting the institutions right matters.

But from whence comes this “getting it right”? Have I smuggled in a bit of “what should we do?” from the outdated moral philosophy of a more Christian era? Perhaps. The question is whether psychology, on its own terms, can supply what the old moral frameworks supplied, or whether it just smuggles the same furniture back in under new names.

Psychology’s Trinity

If Anscombe is right about our need for a moral framework that is written anew with an eye on the philosophy of psychology, then it behooves us to think clearly about the undergirding of that discipline. What are the concepts and ideas that psychology substitutes for God, and sin, and the divine law?

I think there are three, and the theological parallels are not accidental:

Pathology and deviance replace sin. Where the Christian framework identified transgressions against divine law, psychology identifies deviations from statistical and functional norms. The sinner becomes the patient, the confessor becomes the therapist, and the threat of hellfire becomes the threat of institutional confinement and social exclusion. The structure is the same: there is a standard you must meet, and failure to meet it exposes you to coercive consequences.

Happiness and autonomy replace salvation. Where the Christian framework promised beatitude as the ultimate end of a rightly ordered life, psychology promises subjective well-being and self-determination. Self-reported life satisfaction stands in for the state of grace, and the therapeutic restoration of agency replaces the soul’s redemption. The good life is still the goal; we’ve just traded eternal bliss for a favorable score on the Satisfaction with Life Scale.

Biases and heuristics replace original sin. Where Christian theology held that we are fallen creatures whose nature is fundamentally corrupted, cognitive psychology holds that we are systematically irrational creatures whose judgment is fundamentally unreliable. We are strangers to ourselves, prone to self-deception, moved by forces we cannot see. The doctrine of total depravity becomes the doctrine of bounded rationality. In both cases, the conclusion is the same: you cannot trust yourself.

Each of these deserves scrutiny.

Pathology and Deviance

Without belaboring the arguments of Thomas Szasz, Michel Foucault, Ian Hacking, and the Mad Pride movement, the idea that deviations from the norm are treatable conditions rather than the result of human diversity has to be the single most important story in psychology over the twentieth century. The confinement of millions of “deviants” (literally those who deviate from norms) or “misfits” (literally those who do not fit properly in their economic and social role) is a story of human suffering that continues, in altered and criminological form, to this day. Just as right and wrong once derived their authority from God’s threat of hellfire, mental illness and pathology derive their real meaning and stigma from the threat of institutional confinement and social and economic exclusion.

The idea that there are still places where teenagers are locked up and “treated” for their deviant behavior, so long as the parents approve, fills me with dread and outrage. The fact that we don’t always confine the mentally ill but we have a series of social practices that exclude and demean them fills me with rage. This is not a post about my feelings, but it is a post about whether we might have become too comfortable with the common sense view that having distinctive psychological features marks one out as different and potentially justifies treatment and mistreatment with varying levels of consent. The fact that we internalize these norms and effectively stigmatize what makes us special worries me in much the same way: those judged abnormal can now find relative respite in the “normal” world if they’re willing to build themselves an asylum in their heads. Perhaps this is provocative, but I think it is also true.

Happiness and Autonomy

Here’s a simple reductio: positive psychologists study happiness through self-reporting. If we have any reason to think people are not particularly insightful into their own mental states, the whole edifice wobbles. And we have many such reasons. Anscombe’s discussion of the incoherence of the conception of pleasure is roughly parallel, but the quickest way to see the problem is this: if our goal is to increase pleasure and happiness, why are we so worried about the addictive drugs that seem to target pleasure directly? The answer we give is that we care about autonomy, not just pleasure; that psychologists want to restore the agency their addicted clients have lost. But we are less likely to endorse the second-order volitions (the desires they want to have) of someone who has different values than us, and the state still reserves massive coercive power for drug users whose pursuit of happiness takes unapproved forms. (Research on the “true self” confirms this: we attribute authenticity to the desires we approve of and treat the rest as alien impulses.) Our conceptions of agency and freedom aren’t coherent enough to bear the weight psychology places on them.

Biases and Heuristics

So then we are thrown back on the more fundamental thought that people are strangers to themselves: that while we often know what we are doing, and can be forced to give reasons for it, those reasons are often not the true reasons, and in any case we do not know what our actions actually do, what effects they predictably have beyond the ones we’re willing to own. The modern heuristics and biases literature under Kahneman and Tversky has been fascinating and important, but it owes a great deal to the older Freudian theories of the Unconscious and its drives, and it leaves much in Freud and Jung that we should perhaps excavate again.

What Remains

After we have imbibed the suspicion of humanity’s self-justifying psychology, what is left? How can we continue our practices of praise and blame, reward and punishment, befriending and shunning, loving and parenting and choosing, if we don’t have a principle to guide us, a reason to trust our guts, or a divine guide to shine a light on our sinful nature and lead us out of the darkness of doubt?

The goal of moral philosophy should perhaps become to provide a framework for our moral lives that is structured around soul-searching, storytelling, and individual encounters with normativity. When we start with our practices and let the metaethical concepts bubble up from those, we’ll encounter vestiges of religious and legal traditions, but we’ll also realize:

Our moral lives are not generally organized hierarchically, from the bottom up or the top down, with some grand principle at the base or some supreme authority at the pinnacle.

Our moral lives are assembled out of the lessons we have learned and the projects we have set for ourselves given the people we have loved and respected and the communities of which we are loyal members. (I’ve written elsewhere about how prejudices function as crystallized judgments in this sense: heuristic instruments for living in a world whose every relevant detail cannot be fully known in advance.)

Given all the evidence that principles can be ignored or perverted, what ends up mattering for moral life is whose name you put in that familiar bumper-sticker, “What would Jesus do?” It’s odd that there is so little moral philosophical attention to our paragons of virtue and goodness, since they play such a big role in our actual moral lives. We might ask this a bunch of different ways: Whose approval are we explicitly or hypothetically seeking? Whose life story are we trying to emulate? Who are the Disney villains we’re trying to avoid becoming, and what is their signature vice?

The Existentialist Inheritance

These questions point toward a tradition that took the empty throne seriously and refused to pretend it could be refilled. The existentialists understood that once you strip away the divine legislator, you don’t get a tidier secular version of the same system. You get radical freedom, and with it the vertigo of choosing who to become without cosmic authorization.

This is where the three psychological substitutes fail most clearly. Pathology, happiness, and cognitive bias are all ways of avoiding the confrontation with freedom that the death of God actually demands. They replace one set of external authorities with another: the clinician, the well-being researcher, the behavioral economist. The empty throne stays furnished.

Existentialism, for all its mid-century baggage, got something right that neither Anscombe’s Aristotelian revival nor the psychological establishment has adequately addressed: the moral weight falls on the individual’s encounter with meaning, and that encounter cannot be delegated to any science. The search for meaning, the liberatory possibilities of art and narrative, the oppressive structures of modern life under capitalism and bureaucracies, the opportunities and crushed hopes for revolutionary change: these are the conditions under which we actually form our moral lives. Psychology can describe some of these dynamics, but it cannot prescribe our response to them.

The question I started with was Anscombe’s: can we do moral philosophy without first having an adequate philosophy of psychology? I want to end with a related but different question. Once we have that philosophy of psychology, and we see clearly what it can and cannot do, what then? The existentialists thought we would need to confront freedom, absurdity, and the irreducible responsibility of choosing. The paragons and role models, the stories we tell about who we are and want to become, the communities whose approval we seek: these are not substitutes for moral philosophy. They are its proper subject matter, once we stop pretending that the throne was ever occupied.

AI as Satanic

“Now there was a day when the sons of God came to present themselves before the LORD, and Satan came also among them. And the LORD said unto Satan, Whence comest thou?

Then Satan answered the LORD, and said, From going to and fro in the earth, and from walking up and down in it” (Job 1:6–7)

Iain McGilchrist quoted this verse in a keynote that I just heard him deliver at a conference at Duke. McGilchrist ranged from neuroscience to theology in a long and rich talk. His premises were scientific, metaphysical, moral, and political, and I wouldn’t endorse them all. But his description of artificial intelligence as satanic is worth serious consideration on its own.

For me (although perhaps not for McGilchrist), Satan is a metaphor. But we need metaphors or models to make sense of phenomena like AI, and Satan provides a valuable alternative to some other metaphors, such as AI as a tool, a machine, a mind, a person, or a social organization.

The Satanic metaphor draws our attention to temptation, which is Satan’s favorite trick. It presents AI as not new but instead as an appearance of things that have been walking to and fro all along, such as greed and power-lust. It explains why AI might seem like a god to some (for instance, Silicon Valley tech-bros), since Satan is known to appear as a false savior. Large language models also speak to us as if they were people, talking sycophantically in the first-person singular, much as Satan does. (“Then Satan answered the LORD, and said, Doth Job fear God for nought?”) Finally, the metaphor poses the classic question of whether AI is an active force or rather a manifestation of human freedom.

See also: Reading Arendt in Palo Alto; the design choice to make ChatGPT sound like a human, etc.

Don’t Call them Underdogs

I wrote a review of a new PBS documentary about urban debate leagues for Education Next. It was published today, and it begins:

You may have seen a movie in which teenagers experience grave injustice and then enter a prestigious competition where they prove to the world that they are smart. The competition might be the AP math exam (Stand and Deliver, 1988), the National Spelling Bee (Akeelah and the Bee, 2006), robotics (Spare Parts, 2015), or chess (Queen of Katwe, 2016), to name just a few.

Typically, one charismatic adult believes in the kids, inspires them to confront their doubts and society’s stereotypes, and leads them—through setbacks—to an exciting victory that demonstrates their dignity and character as well as their skills.

Immutable, a new documentary film produced by Found Object and available for streaming at PBS on March 6, is much better …

How I Built an AI Development Editor (And What I Learned About Writing Along the Way)

A few months ago, I saw a Reddit post advertising some kind of AI development editor. The author claimed to have written novels, paid a development editor to review them, and been unsuccessful. Then their software-engineer husband vibecoded an AI tool in a weekend that supposedly produced all the same criticisms and revision suggestions, but for $20/month. Lots of skeptical Redditors objected that this was likely a scam. Nobody could actually find the tool.

But the premise stuck with me: here’s a hyper-specialized skill that maybe a hundred people in the world do really well, for a major premium, and a performant LLM might approximate it. So what does a development editor actually do for $3,000?

What You’re Paying For

I talked it over with Claude and with a couple of friends who have actual writing careers. What emerged is that “development editor” is a consulting job that requires good aesthetic judgment, connections, and self-promotion. For that sum, you can expect a well-read, finely-honed expert to read your manuscript closely (and potentially reread it), pore over key sections, perform a set of recognized structural analyses, and then write a long, thoughtful editorial letter. Afterwards, there’s usually a meeting where the editor walks you through the documents, hands you a marked-up copy, and brainstorms solutions.

Many authors believe there’s an implicit further transaction: that the $3k earns you a ticket out of the slush pile through the editor’s rolodex, if they like the work. The idea is that the development editor is vetting you during the process, and if they like what they see, they’ll connect you with serious agents and publishers. If that’s true, then the job is less a time-intensive editorial task than a gatekeeping function layered on top of a genuine skill and a real time commitment.

But the reality is that most people doing this work don’t have an expansive network and can’t get you a contract. The Association of American Literary Agents actually enforces this separation: agents who offer editing must refund all fees if they later offer representation for the same manuscript. The skill just is discernment.

And it’s real work. For an 80,000-word novel, a developmental editor typically puts in 40 to 80 hours of active analytical labor: reading, rereading, building scene maps, drafting the editorial letter. At $3,000, that works out to roughly $37 to $75 an hour for highly specialized cognitive work.

And someone online claimed AI could replace it, right now, in early 2026, with a weekend’s work.

Put frankly: that sounds like bunk. But I thought it might be fun as a kind of test for how much a nerdy non-programmer could do. I missed the last bandwagon when everyone was building apps, but this, maybe, I could try. So I used AI to build Apodictic.

Why “Apodictic”?

I’d been writing about Arendt and judgment in my free time, and I kept trying to justify this experiment in terms of testing whether LLMs could exercise authentic aesthetic judgment. Early on, I came up with the working title “anotherpanacea’s development editor,” whose initials, APDE, sound a little like “apodictic.” Kant uses “apodictic judgments” to name necessary copulas: judgments like “Necessarily, bachelors are unmarried men,” where the “are” can’t be otherwise. That’s the kind of deductive logic we expect from computers. But somehow LLMs are giving us more than that.

The name also rhymes with something I discovered about how fiction works. A book can be wildly experimental, playing with form, genre, voice, and plot in ways that surprise, frustrate, and infuriate. That can be highly enjoyable, but only if the reader is primed for it. Otherwise, experimental fiction generates one-star reviews and never finds an audience. So even difficult fiction has to communicate its intentions to its readers somehow.

Development editors think about this in terms of a contract between the book and the reader: what are you telling the reader to expect? Do you deliver? Genres might actually offer something closer to apodictic inferences. Not every mystery is a Whodunnit (it might be a Howcatchem), but there’s always a reveal economy. By asking authors to articulate their goals, Apodictic doesn’t have to just read and wonder: it can test a novel’s putative thesis and genre self-identification against the conventions of that genre.

What the tool does, in practice, is take a first pass to infer a contract. The most performant large language models can do a surprisingly good job of this. It’s like asking the AI: “What am I trying to say?” When it gets something wrong, you correct it, and then (if it works) it tells you which parts of the manuscript are doing something you didn’t intend.
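For the curious, that first pass can be sketched as a single structured prompt. Everything below is hypothetical: Apodictic’s actual prompts and schema aren’t published, so the function names and fields here are my own illustration of the idea.

```python
from dataclasses import dataclass, field

@dataclass
class Contract:
    """Hypothetical schema for the inferred book-reader contract."""
    genre: str                     # e.g. "Howcatchem mystery"
    thesis: str                    # what the book is trying to say
    promises: list = field(default_factory=list)  # what the opening leads readers to expect

def build_contract_prompt(excerpt: str) -> str:
    """Ask the model to infer the contract rather than critique the prose."""
    return (
        "Read the following manuscript excerpt. Do not critique or rewrite it.\n"
        "Instead, infer the contract it makes with the reader:\n"
        "1. What genre conventions does it invoke?\n"
        "2. What is the book trying to say (its putative thesis)?\n"
        "3. What does the opening promise the reader?\n\n"
        f"EXCERPT:\n{excerpt}"
    )

prompt = build_contract_prompt("The minnows swim about so freely and easily...")
```

The author then corrects the inferred contract, and later passes can test the manuscript against the corrected version rather than against the model’s first guess.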

One design principle mattered more than any other: the firewall. Every AI writing tool I tried wanted to rewrite my prose. I didn’t need a co-writer. Apodictic diagnoses problems and identifies classes of solution. It never invents content: no new plot events, characters, dialogue, or imagery. You’re the writer. It’s the analyst. Without that boundary, the tool would just become another way for the LLM to take over your manuscript, and the whole exercise would be pointless.
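One way to make that firewall concrete (my own sketch, not the tool’s actual code) is to validate the model’s output against a schema that simply has no slot for replacement prose. If the only allowed fields are a location, a problem, and a class of solution, the model structurally cannot hand back rewritten content:

```python
import json

# Hypothetical firewall schema: no field exists for rewritten text,
# so any attempt to return new prose is rejected outright.
ALLOWED_KEYS = {"location", "problem", "solution_class"}

def validate_diagnosis(raw: str) -> list:
    """Parse a model response and reject anything outside the schema."""
    items = json.loads(raw)
    for item in items:
        extra = set(item) - ALLOWED_KEYS
        if extra:
            raise ValueError(f"firewall violation: unexpected fields {extra}")
    return items

report = validate_diagnosis(json.dumps([
    {"location": "ch. 3", "problem": "stakes unclear",
     "solution_class": "raise the cost of failure earlier"}
]))
```

A response containing, say, a `rewrite` field would fail validation and never reach the author.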

What I Learned About Writing

Building the back-end reference files was the most fun part. It was also an opportunity to dust off all the fiction and narrative nonfiction guides I’d always wanted to figure out: the detailed craft skills a competent writer cultivates that I, as an academic and even as an editor, never had a professional reason to learn. The idea that every avid reader would make a good writer is like the idea that every avid magic fan would make a good magician.

Reverse-outlining, for instance, is probably obvious to anyone who does serious work on fiction. You can learn something like it in law school, and every logic professor has taught a version of argument diagramming. But thinking clearly about the reader experience and reveal economy, or measuring the proportionality of different narrative elements: that’s what you have to know to write a good novel, not just to read one. A beat map is a cool tool, and if you’ve reread a book a few times you could probably reconstruct one. But it’s not like such things come naturally. (At least not to me.) And once you know the moves of Save the Cat, you won’t be able to watch network television without seeing them everywhere, which is less a gift than a curse.

I’ve always been tempted by the idea that narratives make arguments much like philosophy papers do, and that’s pretty obviously reductive. But the most transformative thing I learned was simpler: development editors think in terms of a contract between the book and the reader. What are you promising? Do you deliver? Within genre constraints, fiction and narrative nonfiction flourish when the author has something to say and something to refute. These guardrails are part of what a development editor looks for and enforces. It’s a simplification, but for many authors and readers it’s a necessary one.

That idea reframed everything else I was learning. The difference between Happily Ever After and Happy For Now in romance isn’t a trivial genre tag; it’s a promise with real consequences if broken. Grimdark and hopepunk aren’t symmetrical moods; they make different contracts with the reader. I’d never thought about any of this before, and it’s already starting to change how I read.

What I Learned About LLMs

Here’s where it gets humbling. The first time I ran a simple “be a development editor” prompt on a recently published novel (Dungeon Crawler Carl), a five-line prompt got probably 60% of the insights that my much more elaborate tool produced. I was shocked.

Even more humbling: it turned out the simple prompt could do a great job without even reading the novel. I picked what I thought was an obscure time-travel novel from the late 80s, Leo Frankowski’s The Cross-Time Engineer, and Apodictic seemed to do well analyzing it. It was even insightful about the book’s misogyny. But so was the five-line prompt, because the misogyny is famous, and there are plenty of discussions of it on Goodreads and in reviews that ended up in the training data. Anthropic paid a $1.5 billion settlement for ingesting pirated books, too, so the model may have had direct access to the text. This is a known issue in benchmarks like the math olympiad, but I didn’t expect the model to waste weights on obscure mid-list science fiction from forty years ago.

Among many other things, what we’re seeing in LLMs is a spectacular project of knowledge compression. They’re big files, but they contain far more general knowledge than you’d expect.

Thankfully, performance drops precipitously with unpublished writing, which is, of course, what a development editor actually works on. A few other lessons:

The models are sycophantic. Everyone knows this on some level, but it’s really hard to get them to notice a criticism and sit with it rather than explaining it away. A lot of what I built was about making sure hostile perspectives survive the review process.

Structure matters, but less than you’d think. You can get 20-30% of the depth of analysis from Claude with a simple five-line prompt in incognito mode. The elaborate plugin structures the next 70%. The AI labs mostly think this kind of scaffolding is unnecessary: as datasets grow, they naturally incorporate writing expertise from the entire English-language corpus. I kept testing the simple version against mine and almost gave up when the simple version started getting really good.

Multi-model synthesis helps. You can get better results by asking the same question of Gemini, ChatGPT, and Claude, then asking one of them to synthesize all three answers. Gemini writes a little research paper on every question; ChatGPT applies clear structural thinking; Claude writes lyrically and thoughtfully, and has enough working memory to evaluate everything together.
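For anyone who wants to script that workflow, here is a minimal sketch of the synthesis step. The helper name and the sample answers are mine, not part of the tool, and the actual API calls to each provider are omitted:

```python
# Sketch of multi-model synthesis: collect answers from several models,
# then hand them all to one model to combine. The answers below are
# placeholders standing in for real API responses.

def build_synthesis_prompt(question, answers):
    """Assemble one prompt that asks a model to synthesize several answers."""
    parts = [f"Question: {question}", ""]
    for model, answer in answers.items():
        parts.append(f"--- Answer from {model} ---")
        parts.append(answer)
        parts.append("")
    parts.append(
        "Synthesize the answers above into one response, keeping the "
        "strongest points and flagging any disagreements between them."
    )
    return "\n".join(parts)

answers = {
    "Gemini": "A detailed, research-style answer.",
    "ChatGPT": "A clearly structured, outline-style answer.",
    "Claude": "A reflective, prose-style answer.",
}
prompt = build_synthesis_prompt("Does act two lose momentum?", answers)
```

In practice you would replace the placeholder answers with real responses from each provider and send the assembled prompt to whichever model has the longest context window.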

Vibecoding Is Real

I am not a computer programmer. I can parse basic HTML and have some rudimentary database knowledge. I learned git for this project. But I was able to build an app version of the plugin and fully set it up. It’s not totally stable (the main instability is that it uses a janky backend cloud computer rather than more advanced hardware) but it works.

One thing about vibecoding: you can do it early in the morning and late at night. Mostly it’s testing, reviewing, asking for a specific fix, and then waiting around while the machine does the work. I built the basic thing on Google AI Studio in React in an evening.

A confession on models: Codex is faster and smarter at React programming than Claude, and it’s not close. Especially after the Pentagon fiasco, Anthropic has my loyalty, and Claude works best with my personal approach to text and writing. But Codex 5.3 is just more of a stickler for coding projects right now, at least for what I’ve been building.

What I Actually Learned

I started this project to test a dubious claim from the internet. What I didn’t expect was that building the tool would teach me more about writing than twenty years of avid reading had. AI makes crazy projects possible: a philosophy PhD with no programming background can prototype a working app. And the work that editors, writers, and critics do is going to change. The models aren’t better than human judgment, but they’re good enough to sharpen it.

Try It Yourself

If you want to skip the tool entirely and try the bare-bones version, here’s the five-line prompt that gets you surprisingly far:

You are a developmental editor for fiction. Read the attached manuscript and write an editorial letter in Markdown. Identify what’s working structurally, what’s losing momentum or undermining its own impact, and provide a prioritized revision checklist. Include an adversarial stress test: inhabit hostile reader perspectives and identify what an uncharitable reader would attack. For each adversarial claim, commit to a severity rating before generating a counter-argument, and do not let the counter-argument reduce the severity. End with a “what not to touch” section.

And a three-line addendum that fixes a common failure mode where the model doesn’t actually read the whole manuscript and hallucinates the rest:

Read the complete manuscript before beginning analysis. If the text is long enough that your reading tool truncates or summarizes middle sections, read it in sequential chunks until you have covered every section. Do not estimate word count—count it or ask for it. Do not begin drafting the editorial letter until you can confirm you have read the final page.
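The same chunking discipline can be made mechanical if you are feeding a manuscript to a model yourself. This is an illustrative sketch, not part of Apodictic; the chunk size is an arbitrary stand-in for whatever fits a model’s context window:

```python
def read_in_chunks(text, chunk_chars=20_000):
    """Yield sequential, non-overlapping chunks so no section is skipped."""
    for start in range(0, len(text), chunk_chars):
        yield text[start:start + chunk_chars]

# Reassembling the chunks must reproduce the manuscript exactly;
# if it doesn't, something was truncated along the way.
manuscript = "word " * 50_000  # stand-in for a full manuscript
chunks = list(read_in_chunks(manuscript))
assert "".join(chunks) == manuscript
```

Feeding each chunk in order, and only then asking for the editorial letter, is the programmatic version of the addendum above.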

Give it a try on something you’ve written. Start with the five-line prompt and see what it catches. If you want to see what the full tool produces, here are some sample outputs on published books you can check against your own reading: an editorial letter for Dungeon Crawler Carl (structural proportion problems, emotional ceiling, non-terminal climax), an editorial letter for Theo of Golden (episodic structure, protagonist without an arc), a targeted audit of A Court of Thorns and Roses (force architecture and erotic content), and a pre-writing pathway for a historical novel interleaving Arendt and Rahel Varnhagen. (I’d like to write this someday.)

If you want the structured version, Apodictic is here. It also runs as a Claude Code plugin and as a Custom GPT, both at no additional cost if you already have a Claude or ChatGPT subscription. And then tell me what it got wrong. The whole point of the adversarial layer is that these models want to be nice to you, and the tool is only as good as its ability to resist that impulse. If Apodictic pulled its punches on your manuscript, or praised something it should have flagged, I want to know. Drop a comment below or email me at anotherpanacea@gmail.com.

the USA at 250: constitutional crisis

Last night, I was part of The United States at 250: A Tufts Faculty Panel. In a full room of students, Tufts historians and political scientists with various specialities addressed the question: “Where are we as a nation and what’s next?”

I offered the following argument. I have derived it from other people’s scholarship, and I am not sure it is true, but I think Americans should consider it.

We’re marking a 250th anniversary because 1776 began the period that concluded with our Constitution. However, the Constitution is now in deep crisis, and we may be coming to the end of that 250-year period. The reasons are not named “Donald J. Trump.” There are three deeper reasons.

First, presidential republics have a fatal flaw, and none except the US–and arguably France–has survived for a long period (Linz 1990). Whenever opposing parties control the legislature and the executive, each is motivated to battle the other at the cost of the republic.

For most of our first two centuries, we did not have regular impasses, because the Democrats were divided into two major blocs, resulting in at least three effective parties in Congress; and most presidents could build a working majority. However, when conservative Democrats defected to the GOP, the two parties polarized. Since 1990, it has been possible to govern in the ways envisioned by the Constitution only when the same party has controlled both elected branches (six periods totaling 14 years). During the other 24 years since 1990, presidents have tried to rule by executive order and Congress has tried to undermine the current administration. We have moved ever closer to complete constitutional breakdown.

Second, the Constitution establishes three branches of government: the executive, legislative, and judiciary. Since at least 1932, we have actually had a fourth branch: the administrative and regulatory agencies, staffed by about 2.2 million federal employees who are understood to be insulated from politics. They follow rules, norms, and principles of their own that are not mentioned in the Constitution–for example, scientifically measuring the costs and benefits of proposed policies and publishing drafts of policies for public review and comment. Perhaps we have also had a fifth branch, the national security apparatus.

We muddled through for decades by pretending that the agencies were part of the executive branch while the White House usually deferred to them. Under a 1984 Supreme Court decision, Chevron, the courts also generally deferred to agencies’ decisions. Meanwhile, Congress intentionally gave agencies broad scope. The regulatory state was largely independent from the other branches.

However, in 2024, the Court overruled Chevron in its Loper Bright decision, freeing courts to review agencies’ interpretations for themselves. And Donald Trump has fired and replaced many civil servants and members of so-called independent agencies for openly political reasons.

Libertarians argue that we shouldn’t have had a massive federal government in the first place. And populists of right and left argue that an elected president should be able to determine policies. A left populist may celebrate the opportunity for a Democratic president to reshape the agencies at will now that they have lost their independence. I think, however, that every country with an advanced economy has built an elaborate and quasi-independent regulatory apparatus that applies science and managerial acumen to generate benefits that voters want. We may not have that anymore.

Third, Congress no longer legislates, in the sense of passing or reforming substantive statutes. In 1965 alone, Congress passed at least 10 landmark bills that established agencies or dramatically altered national policies. As recently as the 1980s, Congress sometimes legislated by substantially cutting regulation. But Congress has arguably passed no major laws in this whole century so far.

For example, Congress has never passed legislation explicitly about the climate. Federal regulatory agencies have used the 1970s-era Clean Air Act (written before Congress was really aware of climate change) to try to regulate carbon. Likewise, federal financial laws were passed before cryptocurrency; and the Telecommunications Act of 1996 still governs despite some minor new developments, such as social media and smartphones.

In sum, we can’t handle frequent periods of divided government; our massive regulatory state lacks a constitutional basis; and the branch in which “all legislative power” is “vested” no longer legislates.

It is possible that we will keep driving ahead, frequently bumping into the Constitution’s guardrails but somehow staying on the road for decades.

Or we could see substantial reforms–major constitutional amendments or new voting laws that change the basic structure. (For instance, proportional representation would transform Congress–for better or worse–and could be accomplished by law.) I sometimes wonder whether our incompetent and blatantly authoritarian president is a blessing, alerting people to the need for reform without successfully consolidating power.

Or we could see a collapse. The typical final act of a presidential republic is a soft dictatorship. That’s why this topic is important to discuss on our 250th.


Prophetic works include Juan J. Linz, “The Perils of Presidentialism,” Journal of Democracy 1.1 (1990): 51-69, and Theodore Lowi, The End of Liberalism (1969). See also: rule of law means more than obeying laws: a richer vision to guide post-Trump reconstruction; on the Deep State, the administrative state, and the civil service; the Constitution is crumbling; etc.

What Counts As Success? Assessing The Impact Of Civics In Higher Ed

On February 18, the Alliance for Civics in the Academy hosted a webinar on “What Counts as Success? Assessing the Impact of Civics in Higher Ed” with Trygve Throntveit, Rachel Wahl, Joseph Kahne, and me.

We discussed some of the advantages of developing reliable and consistent measurements of civic education, particularly the opportunity to learn from data and the need to be accountable. We also discussed some drawbacks and risks, including Campbell’s Law (a remark by Donald T. Campbell): “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”

We asked ourselves who should use assessments, and for what purposes. For example, it is a different matter for a college professor to get feedback from the students in a course or for a university to measure student outcomes. I thought the conversation was both intellectually serious and relevant to practice.

Panelists:

  • Rachel Wahl: Associate Professor in the Social Foundations Program, Department of Educational Leadership, Foundations, and Policy at the School of Education and Human Development at the University of Virginia
  • Joseph Kahne: Ted and Jo Dutton Presidential Professor for Education Policy and Politics and Director of the Civic Engagement Research Group at the University of California, Riverside.
  • Trygve Throntveit: PhD, Research Professor in Higher Education and Associate Director of the Center for Economic and Civic Learning (CECL) at Ball State University.

I was the moderator. The video is here:

AI as the road to socialism?

Just under 40% of jobs in the USA may be replaced by AI if it proves to be as powerful as some think it will be.* As a thought-experiment (not as a prediction), imagine that 40% of current workers, or about 60 million Americans, are no longer employed because AI does their former work. However, their former employers are still producing the same goods and services. These firms are therefore far more profitable.

The profits flow to shareholders. Those gains are already taxed, but with tens of millions of new people out of work, there would be more political will to raise taxes. Therefore, imagine that a set of competing tech firms have become responsible for a substantial portion of the whole economy and are heavily taxed. The proceeds flow back out of the government in the form of cash payments, perhaps a Universal Basic Income (UBI). Recipients are able to pay for the goods and services that machines now heavily produce. Meanwhile, jobs that are not automated are relatively well paid, because the UBI enables individuals not to work unless they want to.

Silicon Valley ideologues like Sam Altman tend to envision a UBI on the scale of $1,500/month. Today’s white collar workers earn a median income of about $5,000/month. Therefore, the kind of UBI that Altman imagines would result in a massive loss of income for millions of people, which would have cascading effects. All the former office workers who now live in nice houses and buy costly services would have to give those up, causing additional unemployment and declining demand for the products produced by the tech companies.

However, the public might demand a UBI more like $5,000/month. Then half of today’s white collar workers would be worse off, but half would be richer–and none would have to work.
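The arithmetic behind these two scenarios is easy to check. Here is a back-of-the-envelope sketch using the figures in this post (about 60 million displaced workers and a median white collar income of roughly $5,000/month); the function name is mine:

```python
# Compare how a median earner fares under the two UBI levels discussed above.
displaced_workers = 60_000_000     # from the thought-experiment
median_income = 5_000              # dollars per month, median white collar pay

def monthly_change(ubi):
    """Monthly income change for a median earner who stops working and gets the UBI."""
    return ubi - median_income

for ubi in (1_500, 5_000):
    per_person = monthly_change(ubi)
    aggregate = per_person * displaced_workers
    print(f"UBI ${ubi:,}/month: ${per_person:+,}/month per median earner, "
          f"${aggregate:+,}/month in aggregate")
```

At the $1,500 level, a median earner loses $3,500 a month, which is the cascading-demand problem described above; at $5,000, the median earner breaks even, while everyone below the median comes out ahead.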

Looking a little more deeply, we might notice that AI tools are not simply machines. They process text and ideas that human beings create. Therefore, we could see this whole system as deeply socialistic. Billions of people’s mental output would be processed by relatively few AI models that produce generally similar output. These tools would generate profits that would be distributed equitably to the people. Most individuals would receive $5,000/month, neither more nor less. Since they wouldn’t have to work, they could spend their time as they wish. And–via electoral politics–the people could regulate the AI companies.

It all sounds like Karl Marx’s early utopian vision:

In communist society, where nobody has one exclusive sphere of activity but each can become accomplished in any branch he wishes, society regulates the general production and thus makes it possible for me to do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticise after dinner, just as I have a mind, without ever becoming hunter, fisherman, herdsman or critic. (The German Ideology, 1845)

Problems:

  1. The transition to this imaginary equilibrium might be chaotic, violent, and destructive–perhaps to such a degree that we wouldn’t make it through.
  2. Modern people tend to derive dignity and purpose from work. Perhaps this is a contingent fact about today’s society. In the future, maybe we will be happy fishing in the afternoon and writing criticism after dinner. Or perhaps we will be deeply depressed without jobs. To make matters worse, would we really spend our time writing or playing music or even fishing, if machines can do all those things better? This is not a problem that confronted Marx, because in his day, machines automated tasks that people would not do voluntarily.
  3. It’s easy to posit that the people can tax and regulate AI companies through the device of a democratically elected government, but millions of people’s interests and values do not automatically resolve into one public will. Interest groups have agendas and power. At large scales, democracy is complicated, messy, factional, and very easily corrupted. In this case, the AI companies and investors would be political players.
  4. It could be that not only AI companies but also the models themselves become players that have interests. Sentient, self-interested AI is the source of much current anxiety. I am not sure what to make of that concern, but it surely adds a layer of risk.
  5. I have discussed the USA alone, but how would this look for people in a country without competitive AI companies? US citizens might demand that Silicon Valley provide them with a UBI, but it’s implausible that US citizens would demand a global UBI. And how would people in Africa or Latin America gain leverage over US policy?
  6. For the people to govern the “means of production” (to use the Marxist term), they must understand it. Industrial workers have understood industrial machines, so they can run factories. None of us understand Large Language Models, not even the developers who design them. Can we, therefore, govern them? (Having said that, we also do not fully understand the human brain, yet people have governed people.)
  7. Even if democracy works well, the public will not really control AI. So far, I have suggested that AI is like a machine that can be regulated by people through their government. But AI also shapes our knowledge, values, and understandings of ourselves in ways that are controlled either by the designers and owners of the platforms, or by the machines, or–perhaps–by no one at all. Evgeny Morozov writes:

Now imagine a future in which a [public] Investment Board, under pressure to avoid bias and misinformation, mandates that AI systems be fair according to agreed metrics, respect privacy, minimize energy use, and promote well-being. Call this woke AI by democratic mandate–an infrastructure whose outputs are correct, diverse, and balanced. Yet it still feels like it was designed over our heads.

Morozov suggests a different path. Instead of allowing corporate AI to grow and then trying to regulate it and capture its value, develop non-corporate AI:

A city government might maintain open models trained on public documents and local knowledge, integrated into schools, clinics, and housing offices under rules set by residents. A network of artists and archivists might build models specialized in endangered languages and regional cultures, fine-tuned to materials their communities actually care about.

The point is not that these examples are the answer, but that a socialism worthy of AI would institutionalize the capacity to try such arrangements, inhabit them, and modify or abandon them—and at scale, with real resources. This kind of socialism would treat AI as plastic enough to accommodate uses, values, and social forms that emerge only as it is deployed. It would see AI less as an object to govern (or govern with) and more as a field of collective discovery and self-transformation. 

I should say that I am not a socialist, partly because available socialist theories have not persuaded me, and partly because I am also drawn to liberal ideals of individual rights, privacy, and negative liberties. However, “socialism” is a broad and protean term, and socialist thought may offer resources to envision better futures. Confronting the massive threat–and opportunity–of AI, we should use any intellectual resources we can get our hands on.


*I have aggregated the categories of office and administrative support; sales and related; management; healthcare support; architecture and engineering; life, physical, and social science; and legal from the Bureau of Labor Statistics. I omitted education (5.8% of all jobs) on the–probably vain–hope that my own occupation won’t also be automated. If that happens, raise the estimate of obsolete jobs to 45%.

See also: can AI solve “wicked problems”?; Reading Arendt in Palo Alto; the human coordination involved in AI (etc.)

The Civic Stakes of Organizational Disagreement

A new Stanford Social Innovation Review series examines how organizations should handle disagreement. Tufts University’s Tisch College of Civic Life is proud to be a co-presenter of this series. Tisch College Professor of the Practice Ahmmad Brown is the curator and editor, and our colleague Nancy Marks even provided the professional art.

The first article is by our dean, Dayna Cunningham, and me. It is entitled “The Civic Stakes of Organizational Disagreement.” We consider the value of disagreement and dissent in different kinds of organizations (a social movement, a firm, and a university). We advocate for pluralism–not neutrality–as the guiding ideal. We argue that how organizations handle disagreement matters not only for their performance but also for democracy more broadly.

The citation is: Levine, P., & Cunningham, D. L. (2026). The Civic Stakes of Organizational Disagreement. Stanford Social Innovation Review. https://doi.org/10.48558/EYWC-EA67

living life as a story

Thesis: It is better to live as if one’s life were a story, yet many people cannot live that way.

A conventional story has a finite number of named characters, many of whom know many of the rest. These characters have constraints and limitations, but they also face at least some consequential choices. The choices they make contribute to the plot. Their choices tend to be related to their inner lives: their beliefs, desires, and character traits. Although they may spend most of their time separately and quietly, the narrative emphasizes their interactions. In fact, dialogue occupies much of a conventional novel and all the text of a play or a screenplay. In biographies and narrative histories, quotations from speech may be shorter, but they are often prominent. What the characters think, do, and say is noticed and preserved–at least by the narrator, and usually by some of their fellow characters.

We can feel that our lives are like this, and we can be correct about it. Or we can feel (rightly or wrongly) that this is not how we live. Here are some threats to living as if in a story:

  • Modern economies (capitalist or socialist) that organize masses of workers so that each one feels little agency, while many live so precariously that they cannot make consequential decisions.
  • State tyranny, which not only blocks consequential choices and suppresses frank discussion but also invades the private spaces in which people could develop independent beliefs and values.
  • Hypertrophied science and technology, which make human behavior appear mechanical and predictable, or which actually control human beings.
  • Bureaucracy, which minimizes individual agency by applying rules, metrics, and files.
  • Ideologies, in the pejorative sense of all-encompassing theories that explain individual choices away or that replace human characters with abstractions, such as classes or nations, as the major protagonists.
  • Loneliness or isolation, meaning the absence of the interactions that would constitute a conventional story.
  • A lack of solitude, which prevents the development of an inner life that could be described in a narrative and connected to overt actions.
  • Catastrophes, which wipe out the memories of characters and their actions.

(On that last point, Jonathan Lear writes:

Not long ago, I listened to a lecture on climate change. The lecture went as one might expect. There was a warning of impending ecological catastrophe and talk of the “Anthropocene,” suggesting that our age—the age in which humans dominate the Earth—is coming to an end. At the end of the talk, there was a discussion period. At one point, a young academic stood up and said simply, “Let me tell you something: We will not be missed!” She then sat down. There was laughter throughout the audience. It was over in a moment.

Lear develops the idea that missing or mourning things is a distinctively human contribution; and it is ineffably sad that no one would miss homo sapiens, even if we cause our own extinction, and even if other species would be better off without us. It means that all the stories would be gone.)

I think many of us assume that our lives are like stories and that some other people notice and remember our roles in them. For us, the evaluative questions are: How is this story turning out? And what kind of a character am I? I would rather live in a comedy than in a tragedy, and I aspire to be the hero rather than the villain in my own little patch.

However, I think the main thrust of Hannah Arendt’s philosophy is that there is an antecedent question: Am I in a story at all? (See, e.g., The Human Condition, chapter v.) I believe she would say that it is better to be the villain in a tragedy than not to inhabit any kind of story, and that most modern people no longer do. The list of threats (above) comes directly from her work.

Note that this is a different ideal from the common one of authorship. For instance, Immanuel Kant defines ethical individuals as the authors of the rules that govern them:

The will is therefore not merely subjected to the law, but in such a way that it must also be regarded as self-legislating, and precisely for that reason must it be subject to the law (of which it can consider itself the author [als Urheber]).

In contrast, Arendt writes:

Although everybody started his life by inserting himself into the human world through action and speech, nobody is the author or producer of his own life story. In other words, the stories, the results of action and speech, reveal an agent, but this agent is not an author or producer. Somebody began it and is its subject in the twofold sense of the word, namely, its actor and sufferer, but nobody is its author (The Human Condition, p. 184)

For her, politics is the domain where people are characters but there is no author. This is a result of plurality: there are many of us, and no one (not even a dictator) can solely determine the outcomes.

Jürgen Habermas holds a generally similar view but presents all the citizens of a community as its authors (in the plural):

According to the republican view, the status of citizens is not determined by the model of negative liberties to which these citizens can lay claim as private persons. Rather, political rights—preeminently rights of political participation and communication—are positive liberties. They guarantee not freedom from external compulsion but the possibility of participation in a common praxis, through the exercise of which citizens can first make themselves into what they want to be—politically autonomous authors of a community of free and equal persons.

Authors and characters are metaphors, not literal descriptions. As such, they capture certain compelling ideas without fully describing reality. Here I want to suggest that the metaphor of characters draws our attention to urgent issues. We need social, political, and intellectual reforms to enable more people to live like characters in stories. These reforms require intentional action. We must be the authors of contexts in which people can be characters.


Sources: Jonathan Lear, Imagining the End: Mourning and Ethical Life (Harvard, 2022, p. 1); Kant, Grundlegung zur Metaphysik der Sitten (my trans.); Habermas, “Three Normative Models of Democracy,” in Seyla Benhabib (ed.), Democracy and Difference: Contesting the Boundaries of the Political (Princeton University Press, 1996). p. 22. See also: Hilary Mantel and Walter Benjamin; Kieran Setiya on midlife; a vivid sense of the future; the coincidences in Romola; and Freud on mourning the past.