Semantic Insatiability and Logophilic Etymologies

Most people know the experience of saying a word over and over again until it loses its meaning and becomes a sound: this experience is called “semantic satiation.”

But what about the opposite experience? Can you puzzle over a word until it gains meanings? Can we re-saturate meanings? Can we make words resilient against this kind of loss? Call this semantic insatiability: a technique for bolstering an idea against the exhaustion of its meaning.

Here’s a literal example: Gertrude Stein’s “Rose is a rose is a rose is a rose.” She liked to say that the goal of that line was to steal roses back for poets to use again, to strip all the semantic goop of romance off of the flower. “I think in that line the rose is red for the first time in English poetry for a hundred years.” To make it usable again merely as a flower, free to accrue new meanings, connotations, and implications.

If you read her poem “Sacred Emily,” where this line appears, her claim becomes a lot less plausible–like most of her poetry, it reads like logorrhea, and the phrase is lost in a sea of repetitive noise. (Her poetic explanations tend to confirm this view.) I don’t like the poetry, but I like the claim. It suits my purposes even if it is false in context and certainly wrong as a matter of causality. (“Sacred Emily” was published in 1913, and in 2024 the Society of American Florists estimated that we’d send 250 million roses domestically. If the poet really tried to save roses from their cultural baggage, she failed.)

Some people who love stories also love poetry. Some don’t. I love both narratives and the words, sentences, and paragraphs that help to build them. And so I want to talk about how I, personally, try to reverse semantic satiation–how I re-saturate semantics at this level.

Contronyms Resist Semantic Satiation

Consider the contronym (a word that is also its antonym). “Fast” can mean both “stable” (hold fast!) and “quick”; “sanction” can mean both “punish” and “authorize.” And as you’re messing around with these words, trying them out in different sentences, semantic satiation might set in–it all starts feeling like gibberish. If you spend some time with the Oxford English Dictionary (or the Online Etymology Dictionary for folks without academic library access), though, you’ll learn a lot about how the meanings diverged, and at least in my experience, the ordinary usage meanings and distinctions will re-emerge. “Fast” began in Old English as “constant, secure, or watertight.” We can still hear that sense in “steadfast” and similar terms. But we don’t know precisely how we got to the speedy sense of the word. Perhaps from Old Norse: “fast” could mean “firmly” (unmoving), but that sense of strength got applied adverbially to standing and holding, and then from there to fighting, drinking, or even running. To run firmly, in that sense, is to run vigorously, which implies quick movement. Other things are quick by analogy.

Is that how it happened? Or did it come from “fast” meaning “nearby, to hand, immediate”? “Stay fast by my side!” Then somehow from “nearby” to “immediately” to “quickly”?

What I want to note is how we don’t tend to trip over semantic satiation as we explore contronyms. Puzzling this out doesn’t exhaust the word’s meaning the way ordinary repetition might. I suspect this is because the varied meanings are cued by context: tracing the linguistic origins and carefully noting the different senses of the word in different contexts can actually restore what mere repetition has depleted. I think this is one of the secret gifts of studying the languages English draws on, as well: by learning Latin, French, or Spanish; Greek, German, or Sanskrit, you get a deeper sense of the ways your own language is put together.

Semantic Satiation and Linguistic Determinism

Consider another contronym: if you don’t want to make an oversight, you may need oversight. The term indicates both the mistake and a technique for mistake prevention. And it doesn’t stop there: it helps for someone to look over your work so you don’t overlook something important. If you dig into these incorrect/correction pairings, it’s not clear which came first, etymologically: the error or the editor. And as you’re digging, you come to see that English has quite a lot of words for watching someone from above: survey, surveillance, supervisor, superintendent, overseer, overwatch, watch over, etc. English has lots of words, period, but it seems like it especially has a lot of hierarchical words for watching someone else work!

Does this tell us something about Anglophone culture? Is this, like having lots of words for snow (and the attendant Sapir-Whorf theories of linguistic determinism and linguistic relativity), an indicator that hierarchy, bureaucracy, or even colonization and enslavement have such deep roots in our history that they’ve found lots of different expressions over time?

I used to think that the Sapir-Whorf theory had been falsified, but it’s more that it has always been ambiguous between two different versions: (a) that language provides the medium for thought and so determines what can be thought, or (b) that language merely provides affordances and resistances. It’s clear that we often think in words, and thus if there’s no word for an idea there will be some difficulty in thinking it. But what happens then? Perhaps the thought is defied and goes unthought: we stutter and give it up. Perhaps we misuse another word. Perhaps we create a neologism. And perhaps we use poetic language to eff the ineffable with metaphor and simile.

My sense is that Benjamin Whorf, at least, thought we couldn’t understand the sheer variety of the world if our language didn’t give us the tools. A language that lacked some term would be spoken by a group that couldn’t grasp the attendant concept. You hear examples occasionally: did the ancient Greeks lack a word for “blue”? Why did Homer call the sea “wine-dark”? Did he lack the ability to see its true color? Is there really an Amazonian tribe that lacks number terms beyond “one, two, and many”? How does that work? This is empirically testable, though, and while there’s some evidence for the mathematical incapacity of the Pirahã, there’s fairly clear counter-evidence on color: at best we perceive different colors a bit more quickly when we have a color term to match them. (It’s too bad, really, that we have managed to empirically disenchant the fanciful view that inhabitants of the ancient world looked out on their vistas with utterly different eyes.)

Is there anything left of the Sapir-Whorf hypothesis worth crediting? On one version of Whorf’s view, a language will gradually develop to capture distinctions and ideas that are especially relevant to its speakers. In short: if you don’t live in a highly hierarchical world, you won’t understand all the ways that bosses can come to be relevant to your life and your language won’t give you those tools. Meanwhile, if you’re surrounded by snow from a young age, and your culture and history have been built up around snow for generations, your language will give you the concepts to see how many different ways snow can be.

We have the same word for falling snow, snow on the ground, snow packed hard like ice, slushy snow, wind-driven flying snow — whatever the situation may be. To an Eskimo, this all-inclusive word would be almost unthinkable; he would say that falling snow, slushy snow, and so on, are sensuously and operationally different, different things to contend with; he uses different words for them and for other kinds of snow. The Aztecs go even farther than we in the opposite direction, with ‘cold,’ ‘ice,’ and ‘snow’ all represented by the same basic word with different terminations; ‘ice’ is the noun form; ‘cold,’ the adjectival form; and for ‘snow,’ “ice mist.”

Whorf, Benjamin Lee. “Science and linguistics,” Technology Review (MIT) 42, no. 6 (April 1940).

Another fun recent example is the observation that the English language has an absolutely gob-smacking number of words for being intoxicated by alcohol. As Christina Sanchez-Stockhammer and Peter Uhrig recently detailed, there are at least 546 synonyms for the word “drunk”!

A meaningful argument structure construction without any lexical content might be available to speakers, but we conjecture that additional contextual cues as to the topic of drunkenness will be needed to successfully use this construction, the most prototypical form of which is

be/get + intensifying premodifier + -ed-form.

Coming back to the theoretical questions asked at the end of Section 2, we can say that the wide range of words observed in the already existing lists of drunkonyms seems to support the view that there is a large amount of words that one could potentially use to creatively express drunkenness in English. The wording “any word” put forward by McIntyre appears slightly too general, though, as it is difficult to imagine words such as is, the or of to mean ‘drunk’. Also, for nouns such as carpark or gazebo, it is strictly speaking not the word itself but an -ed-form of it, i.e. carparked or gazeboed, that expresses the meaning of drunkenness in the relevant contexts.

Sanchez-Stockhammer, Christina, and Peter Uhrig. “‘I’m gonna get totally and utterly X-ed’: Constructing drunkenness.” Yearbook of the German Cognitive Linguistics Association 11, no. 1 (2023): 121-150. https://doi.org/10.1515/gcla-2023-0007
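Just to make that template concrete, here’s a minimal sketch (mine, not the authors’) of a matcher for the prototypical be/get + intensifying premodifier + -ed-form construction. The intensifier list and the example sentences are invented for illustration, not drawn from their data:

```python
import re

# Toy matcher for the prototypical drunkonym construction:
#   be/get + intensifying premodifier + -ed-form
# The intensifiers and example sentences below are invented placeholders.
INTENSIFIER = r"(?:totally|utterly|completely|absolutely)"
PATTERN = re.compile(
    rf"\b(?:am|is|are|was|were|be|been|being|get|gets|got|gotten)\s+"
    rf"(?:{INTENSIFIER}(?:\s+and)?\s+)+"  # one or more intensifying premodifiers
    r"(\w+ed)\b",                         # the -ed-form that carries the 'drunk' sense
    re.IGNORECASE,
)

examples = [
    "I'm gonna get totally and utterly gazeboed tonight.",
    "He was completely carparked by midnight.",
    "She is reading a book quietly.",  # should not match
]

for sentence in examples:
    match = PATTERN.search(sentence)
    print(sentence, "->", match.group(1) if match else "no match")
```

The toy makes the authors’ point legible: it’s the frame, not the particular noun, that does the semantic work, so almost any -ed-form dropped into that slot reads as ‘drunk.’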

What is it about the Anglophone world that makes it so necessary to express drunkenness? Anglophone countries aren’t the top alcohol consumers, for instance. Nor do we have the most binge drinking. Yet somehow we talk about it with more variety and creativity, which is more evidence against the stronger versions of Whorf’s claims.

Most people are at least a little bit tempted by Sapir-Whorf just because it seems like it can’t be totally irrelevant to your experience that your first language has or lacks some concept, or prioritizes or deprioritizes some structure. But since language comes alongside membership in a community, and communities are often pretty internally diverse, this gets caught up in the same old problems of strong cultural relativism. It’s at most an influence, not a determination–unless the determinants aren’t simple things like which words (and thus concepts) you have, but rather what kinds of grammatical structures your language allows.

Philology and Logophilia

Here is where I wish there were more appreciation for Friedrich Nietzsche’s philology than his philosophy. One of his major insights is that the concepts crystallized in our language often radically diverge from their origins, which I think makes the strong version of Whorf’s claims seem obviously wrong: there can’t be linguistic determinism if we’re constantly re-appropriating language for new purposes and forgetting its origins. (Even the “drunkenness” formations are evidence of this: there is very little of the “gazebo” left in our understanding of what is happening to the drunken undergraduate who describes himself as “being utterly gazeboed.”)

Nietzsche delighted in the ways that phonemes diverge from morphemes. He even suggested that our forgetfulness about the origins of our language was part of its power, and part of how we resist sedimentation–even as some survivals dangerously persist. So I get to drop my favorite line again here. Describing truth, which might be a quality that some words and concepts aspire to, he explains that it is:

A mobile army of metaphors, metonyms, and anthropomorphisms—in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically, and which after long use seem firm, canonical, and obligatory to a people: truths are illusions about which one has forgotten that this is what they are; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins.

Friedrich Nietzsche, “On Truth and Lie in an Extra-Moral Sense”

I love the image of the philologist as numismatist, collecting old coins, discerning their worn faces, and perhaps restoring them to their original form. This is to say that Nietzsche’s genealogical practice was itself a kind of semantic re-saturation. His major works often pursue one or another concept through a historical and genealogical line not to restore an origin but to celebrate possibilities. You can’t think about “guilt” without thinking about “debts” after you read the Genealogy of Morals. Arguably some of his other efforts in this regard are less effective, but he was certainly trying to make us think differently and more fully when we hear clichéd invocations of good and evil; faith and God; power and authority; creativity and individuality; pity and compassion; truth and honesty. Some of what he does is induce semantic satiation and then offer to re-saturate those meanings anew (and with his own preferred valences attached). That particular effort has been one of the reasons that academic philosophers continue to read him, though it’s prone to embarrassing misuse!

So what?

Say all this is true:

  • repetition depletes meaning
  • linguistic puzzles resist this depletion
  • studying the origins of ideas can grant them a new lease on life
  • we’re not bound to those origins
  • we’re constantly engaged in creative rearrangements of our linguistic affordances

What then?

It certainly seems like we’re entering a particularly dangerous period where a lot of text is being generated mechanically (ironically, unlike Stein’s poetry). Emily Bender calls large language models “stochastic parrots.” We are approaching a zeitgeist of semantic satiation, and we’re going to have to work hard just to hold on to meaningfulness. (Probably we always have to do that kind of work, but it’s more obvious now.) In professional philosophy, we’ve seen the growth of normatively inflected “conceptual engineering” and “ameliorative analysis,” where conceptual analysis is being used to assert nakedly political projects, laying bare the ways that the very meanings of our words are subject to wrangling and contestation. In some future posts, I’d like to try to think those two trends together, perhaps via Heidegger’s frustratingly fascist etymological approach.

(Thanks to my friend Michael Willems for discussions on this topic.)

Rascal News

Rascal News is an exciting new venture in tabletop games journalism. Building on the 00s’ New Games Journalism for videogames, it is edited and written by Lin Codega, Rowan Zeoli, and Chase Carter. A recent interview with Kimi Hughes discusses “How Has Actual Play Changed Game Design?”


Maybe the Horse Will Sing: On the Value of Putting Things Off

Nasreddin got himself into some serious legal trouble–the reasons are lost to time. Before the king sentenced him to death, Nasreddin asked for a delay, claiming he was the only person in the world who could teach a horse to sing. The king was skeptical, but gave Nasreddin a horse and a year to teach it. “If that horse isn’t singing a year from today, you’re going to be put to death, and we’re going to get creative about it!”

Nasreddin’s cellmate asked him why he’d done such a foolish thing! “Even you know that a horse can’t sing!”

“True. But a lot of things can happen in a year. The king may die. I may die. And, who knows? Maybe the horse will sing.”

The preceding allegory is often attributed to Herodotus or Aesop in American science fiction stories, but I haven’t been able to track down those attributions. To me the most plausible account is that this was originally a Sufi tale about Nasreddin Hodja, in part because sourcing is more difficult for Nasreddin stories and our folk-tale philology is weaker for Muslim sources.

Regardless of the source, it’s surprising how much of life, work, and politics can respond well to this sort of lesson: keep trying and maybe things will be different later. Another science fiction author, Ray Cummings, captured this well: “Time is what keeps everything from happening at once.”

I’ve written about the problems with clichés often enough. They can be thought-defying and rule out further inquiry. (There’s surely more to time than Cummings’ joke.) Nonetheless, they often carry a little insight that’s needed frequently enough to justify the repetition, too.

Why Philosophy of Crime and Punishment, Now?

I am teaching this course again. Every year it changes, and this year I hope it changes a lot. Here’s what I said about this today, our first day of classes:

Any story about crime and punishment is bound to start with a few stylized facts. Until this year, I’ve started with the same number: 2.3 million people are incarcerated in the United States, which is roughly seven times as many people as our peer nations would incarcerate if they had the same population we do. But last year something extraordinary happened: somewhere around 8% of those people were released and not replaced. We can’t be very exact because prisons across the country are not very careful about counting and reporting the number of people they imprison, but any way you slice it the pandemic has started an unprecedented process of decarceration that we’re going to be talking about throughout the semester.

I’m starting with that stat, but I could equally well start with another one: 

The FBI says that—during the first six months of this year—the number of murders in 22 cities increased by 16% compared to the same period in 2020 and by 42% compared to the first six months of 2019.

(cite)

Incarceration is down. Crime is up. Could there be a connection?

First, an interlude: when I taught this course in 2019, I had a student stand up about halfway through class and loudly leave, commenting “I thought this was going to be a course on Dostoevsky’s novel.” That’s a different class. And while I can tell you that there is almost certainly not a connection between decreasing incarceration and increasing violence, that’s not really what this class is about, either.

What we will do together if you continue in this class is somewhat different. I want to remind you about something literally academic: departments and disciplines. This is a philosophy class. And there’s a difference between how we approach a problem like mass incarceration in this department, compared to how it might be approached in a political science class, a psychology class, or a history class. I have colleagues and collaborators in all of those departments—as well as Theology, Linguistics, English, Biology, Sociology, and others—who apply their disciplinary approach to this issue. Meanwhile, we all are truly colleagues and collaborators: we work together, learn from each other, and read each other’s work.

So what I want you to think about—a little bit—is what the specifically philosophical approach to crime and punishment might be. 

One specifically philosophical approach is the analysis of concepts: 

  • What is a crime? What is the relationship between crime and moral obligation? Is crime any violation of the criminal code? Or are some things outlined in the criminal code that shouldn’t be, and some things allowed by the code that ought to be criminal? 
  • Is mens rea always required for something to be criminal? What about all the edge cases in mens rea—mental illness, youth, disability, and addictions—do they excuse otherwise criminal conduct? 
  • What counts as legitimate punishment? Are capital punishment, torture, exile, incarceration, branding, public shaming, all legitimate? 
  • What obligations do we have to those we punish? 
  • What is punishment for, anyway?

I have lots of good things to read on all those questions, and lots of little lectures to give on the history of answers to these questions, and honestly I think we can’t avoid some of them. But I don’t think this is the only right way to do philosophy, and in this class I think it’s more important to start with the real questions we have about these issues and try to weave the analytic questions into them. Maybe in the abstract caning is a potentially legitimate punishment—but in the US, given the strong association between torture and slavery, we can’t endorse it.

Right now, today, we face multiple crises related to crime and punishment. And I believe we are obligated to think through what we are doing in response to those crises. I believe that sometimes the best philosophy on a specific question is done in some other department or discipline: the historians, psychologists, linguists, and sociologists are usually pretty good at some crucial philosophical steps that philosophers ourselves sometimes miss.

So I ask you: what questions are you bringing to the classroom?

My students’ questions

As you can see, primed with those analytic questions, my students ended up sounding pretty analytic! (Also, my handwriting is atrocious.) My humble observation is that often five groups of questions and themes emerge when I teach this class:

  1. What have we done? What is the current situation with prisons and policing? What are prisons like? What is it like to experience a violent crime? Who experiences that, and what do we do about it? What is it like to be stopped and frisked regularly? Does stopping and frisking millions of people every year reduce violent crime? What is it like to spend a month in prison? A year? The rest of your life?
  2. How did we get here? What is the cause or causes of mass incarceration—in the United States, but also in some other countries? Is it entirely white supremacy? Is it the War on Drugs?
  3. What should we hope for? Should we aim to abolish prisons and policing, or merely reform them? What are the alternatives? How do we enforce norms without an “or else”? What else has to change to change the legal punishment system?
  4. What can we do about it right now? What are the most promising policies and practices for ending mass incarceration in the United States? Do we need legal reforms, mass movements and protest, cultural and spiritual renewal, an end to capitalism, or something else entirely?
  5. What should we do with our anger, rage, and resentment?

Now look: we can’t duck the econometrics of crime and punishment. But we are going to ask some critical and philosophical questions about its methodologies.

I then led the class through some charts and graphs that debunk some of the major myths about mass incarceration in the US.

Next week? Danielle Allen’s Cuz. She’s a philosopher, after all!

Beyond Sociology 101

Strangers to Ourselves

There are a bunch of different ways to teach undergraduates about moral psychology in a philosophy department:

  1. Emphasize free will issues: weakness of will, nature of addiction (& mental illness), causal role of environment and genetics, reactive attitudes and the nature of blame and praise, etc. Is akrasia even possible? Is Kant’s moral psychology plausible?
  2. Emphasize the profession of psychology and the bifurcation of research and practice. How does therapy work? Is the DSM a useful guide to our psychic lives? Is mental illness a myth? Is the replication crisis a particular problem for psychology? Do forensic psychology and social work perpetuate white supremacy and mass incarceration?
  3. Hortatory and didactic: how can you be a good person? Are personality traits persistent over time? If we are character skeptics should we be virtue skeptics? What should we do with all these feelings and negative emotions? How can we overcome implicit and explicit bias? Also can literary classics actually make us finely aware and richly responsible?
  4. The specific psychology of evil and oppression: study bias, conformity, and the psychic forms of dehumanization, racism, sexism, and obedience to authority. Look at group membership as a primary source of the corrosion of morality. In a sense this is the liberal version of “how to be a good person,” i.e. “how to be an anti-racist feminist,” “how to overcome our internalized misogyny and ableism,” etc. But a course like this is likely to focus on particular examples and to take up issues closer to political psychology and moral foundations theory.
  5. Emphasize epistemic issues: the same organ psychology studies is the organ doing the studying. We are deeply inconsistent and self-contradictory. We are tribal. We make excuses for ourselves while judging others mercilessly. We want things we can’t admit. We have implicit biases we explicitly deny. We engage in motivated reasoning. We don’t live up to our principles. We are vulnerable to situations and peer pressure. Altruism is itself egotistical. F.A.B.R.E.A.M.–fundamental attribution bias rules everything around me.
  6. Focus on metaethics: what is personal identity? Is moral talk merely expressive boo/hurrah, or is there a moral reality that our judgments can track or not? Do the psychological and motivational shortcomings of consequentialism render it a less plausible account of morality?
  7. Focus on the developmental story: take the debate between Lawrence Kohlberg and Carol Gilligan as the central debate, and try to figure out what it means that people are so different from each other. How much of morality is mere culture? Is there anything we can say about individualism versus collectivism as focal points of Western versus Eastern cultures? Is there something wrong with all the psychological research done on college students from Western, educated, industrialized, rich democracies? What do we truly have in common?

I love all these different versions of the class and I’ve taught them all, and probably there are more I haven’t thought of. But I think moral psychology courses shine when they emphasize the good old fashioned “hermeneutics of suspicion” I name-checked in my title. What if we are strangers to ourselves? What if we know not what we do? What if we live under conditions of radical self-deception? What if evolutionary psychology or an understanding of the human archetypes captured in Sophoclean tragedy can show us our true motivations and debunk our fake ones?

Basically, we have easy phenomenological access to some parts of our own psyches, but we also have plenty of evidence that there’s “more to it” than what we can easily see about ourselves. We are uneasy inhabitants of our own heads, and there’s something interesting and provocative in that experience of self-estrangement.

In contrast with most of the straight-ahead moral philosophy out there, moral psychology tends to call our motivations into question. I quite like this Regina Rini paper on psychological debunking of moral judgments, but she casts this as an objectionable experience of disunity. I’d argue that our continued fascination with psychological debunking comes both from the satisfying opportunity to debunk others (to see them better than they see themselves) and from the chance to explain the parts of our own experience that don’t quite make sense, holding out hope of greater unity.

For my money, the notion of self-estrangement is the fundamental insight of psychology as a discipline separated out from philosophy. (I follow Anthony Appiah on this: the growth of philosophy as it’s understood today largely tracks the creation of independent departments of psychology.) It’s precisely because we can’t see ourselves clearly, and because we engage in elaborate justifications and deceptions, that we can’t study the mind from the armchair with introspection and meditation. We need at least two armchairs for the talking cure, or rather we need the analyst’s couch, the psychiatrist’s prescription pad, the neuroscientist’s fMRI, and the social psychologist’s tricksy experiments. Ironically, we even need some bad developmental psychology so we can do better social psychological experiments: Milgram’s experiment famously works because we trick people into thinking they’re researching B.F. Skinner’s behaviorism when really they’re proving that most of us are Good Germans underneath it all.

Anyway, much is made of the standard moral error theory—that we simply don’t understand what our moral talk is really about, because moral states are not about anything at all. But I prefer the kind of error analysis you get from the “heuristics and biases” literature, which is also embedded in the Rawlsian notion of reflective equilibrium: if there’s nothing at all that our moral judgments are about, then it doesn’t make sense to say that our judgments are biased or wrong. But if they are clearly inconsistent with each other, then at least greater consistency is possible. The evidence from psychological studies of moral development and the experimental philosophy results about our moral behavior can all help us identify candidate moral intuitions or judgments upon which to train that search for consistency.

That’s the through-line between Freud’s psychoanalysis and Haidt’s moral foundations theory: that we can learn things beyond “bubba psychology,” i.e. the psychology of wise old women who have observed many generations of people and have finely attuned and richly aware assessments of themselves and others. Yet at the same time, because psychologists (inhabiting the same buildings in universities and the same sections of the bookstore) both research the mind and provide clinical support to suffering individuals, there are some key bubba-psychology insights that could only be made clear by this rejection of common sense.

Both therapeutically and philosophically, then, there’s something like a set of deep and potentially clichéd lessons to learn from moral psychology here:

  1. People are different. (There’s a lot more human variation than most people admit, and it’s not arranged in a clear hierarchy from bad to good or sick to healthy.)
  2. Don’t believe your gut, it’s full of shit. (Your emotions are not so smart.)
  3. Don’t believe everything your brain tells you, either. (Your rationality is really good at giving you the answers you want to hear.)
  4. Pick your friends and associates carefully. (History and the psychological research clearly show that if the groups we belong to are cruel, or racist, or genocidal, then we are likely to be cruel, racist, and genocidal too.)

Probably it is more complicated than this. Probably there are more. Families matter. Community matters. Institutions and incentives matter. The prevalence of conformity and social normativity, for instance, makes me think that “pick your friends and associates carefully” is maybe a bit more difficult to implement than I’ve made it sound here. There are contributing factors that seem to make the very notion of “picking” a misnomer, since it is rarely under our conscious and fully autonomous control to decide with whom we are going to spend our time. And thus if you’re doing psychology, pretty soon you need to start doing sociology, and economics, and anthropology, and political science too. It really requires a kind of interdisciplinary social scientific approach. Or maybe we just call it philosophy?

Varieties of Stoicism

I have a bunch of colleagues who do serious work on stoicism, and, well, I don’t. (1) But I’m teaching moral psychology again this semester, and there are a lot of relevant insights from various debates that I think are stoicism-adjacent. (2) So here’s a kind of typology:

There’s stiff-upper-lip stoicism (which I associate with Rudyard Kipling and pasty British imperialism) that engages in emotional compression in the name of power-wielding, decries the passions as unmanly, and has this dangerous tendency to explode when challenged. It’s actually deeply sentimental in its denial of emotion—it puts emotions to work ordering the world towards a hierarchical goal, one that requires the passionate to pretend to dispassion. Kipling’s poem “If” captures it perfectly (“If you can fill the unforgiving minute/With sixty seconds’ worth of distance run,/Yours is the Earth and everything that’s in it,/And—which is more—you’ll be a Man, my son!”), but so does Henley’s “Invictus.” (“I am the master of my fate/I am the captain of my soul.”) And to be clear, I like both poems!

There’s the stoicism of the weak: don’t get angry or resentful even at outrageous injustice, because these reactions are seen as challenges to the powerful and are punished. Nietzsche hated this kind of stoicism: it makes all kinds of external circumstances over into an unchangeable order of Nature, flattening mortality, morality, and mores into a single situation we must merely find a way to stomach. Of course this can be pragmatic, but it denies so many possibilities in the name of survival. Ironically, a lot of Admiral Stockdale’s “Courage Under Fire” probably should be classified here, in large part because he embraces Epictetus over Marcus Aurelius, and because his main practice is in his incarceration rather than in his warfighting.

Finally, there’s the mindfulness stoicism of modern western buddhists (lowercase because they tend to be anti-metaphysical) and Silicon Valley self-help gurus and “techbros.” This has developed a few different strands, but the key, as far as I can tell, is that there is a mix of practices: meditate to be 10% happier, update your priors dispassionately, take bets when offered, and entertain provocative ideas without blinking.

That said, I think a crucial precursor to all these ideas is David Allen’s Getting Things Done. Allen was a heroin addict, and he transformed his drug recovery and New Age mysticism into a project-management method for translating projects into discrete tasks, and for allowing the careful management of those tasks to free oneself of anxiety. It’s all about preventing perseveration and allowing easy flow states—but it also suits automation and software replacement really well. That’s why Tim Ferriss claims stoicism is “an ideal ‘operating system’ for thriving in high-stress environments.”

I sometimes talk about the way that doing logic sets—or programming, for that matter—can feel like turning off your brain. That’s obviously not true: it’s “cognitively loaded” work, and confusing if you don’t know where to start. But it’s also mechanical and somewhat pleasant for its ability to give you direct feedback. The word “flow” seems to apply here: GTD is all about how to reduce big projects into discrete tasks small enough to sustain that meditative flow while completing them. And meditation holds out the same promise: to relate to your life with a little less attachment, to do your work with less friction, and yet still be very productive. The “Bayesian” rationalists sometimes promise a similar experience of mechanically updating one’s priors as new information comes in—but I’ll note that one of the first metacognitive strategies of the Less Wrong rationalists is the Stoic move of separating the world into facts and beliefs, external realities and internal states. Once that is done, there’s no reason to stress about it any longer: the goal is to make one’s internal states (of belief) match up with the external realities, and to bracket the various anxieties and hopes that often lead us into false beliefs.
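For what it’s worth, the “updating” in question is just Bayes’ rule, which really can be done mechanically. Here’s a minimal sketch; the hypothesis, prior, and likelihoods are toy numbers of my own, not anyone’s actual credences:

```python
# A single "mechanical" Bayesian update: Bayes' rule applied once.
# The prior and likelihoods below are invented placeholders.

def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E) from P(H), P(E | H), and P(E | not H)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return (p_e_given_h * prior) / p_e

# Toy example: start at 30% credence in some hypothesis H, then observe
# evidence that is four times likelier if H is true than if it is false.
posterior = bayes_update(prior=0.30, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(f"posterior credence in H: {posterior:.2f}")  # ~0.63
```

Stated that baldly, the kinship with the Stoic fact/belief split is clear: once the external facts are taken as given, belief maintenance becomes a kind of bookkeeping.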

What worries me about Silicon Valley’s mindfulness stoicism is the sense that it combines all the worst elements of world mastery and manliness with the stoicism of the weak: acceptance of injustice, the embrace of a hostile natural (and social!) world to which we must conform, and a quietism that locates our agency in that compliance while praising it as mastery. Ironically, some of the Bayesian rationalists are not at all quietistic. They create new things and institutions, organize communities, and protect their interests. There are social norms and trendsetters. But I think it’s pretty obvious that these activities fall outside of their principles, and that to a certain extent they are setting aside their stoicism when they do it. (That’s particularly obvious when they take non-rationalists to task for putative betrayals.)

There’s a lot more to be said about the intersections of buddhism and stoicism (and indeed about the intersection of Buddhism and Stoicism), but I have been pondering the role of the subject. The detachment of the stoic is hyper-individualistic in some ways: it counsels acceptance of the precarity and fragility of others and of the external world, but in the name of self-discipline, personal honor, and a duty to integrity. The detachment of the buddhists (and Buddhists) can’t be so easily incorporated here: embracing no-self and emptiness can be a lot more challenging and disruptive. So while Silicon Valley has adopted meditative practice without a care for the metaphysics that might be smuggled in, it may also be that these practices carry their own phenomenological lessons and insights. Perhaps the techbros are going to learn some things in spite of themselves.

  • (2) I’ve run roughshod over a bunch of important differences here, and I’m not sorry for that.

Joshua Miller’s Top Ten Things that Arendt Got Right About Political Theory

I wrote this little primer at the bottom of a long discussion of the Schocken Books editions of Arendt’s work, and then reposted it a while back on Facebook. It’s been popular, so I’m reposting it again here so I can easily link to it without feeding social media.

  1. Race-thinking precedes racism. Arendt’s analysis of the rise of totalitarianism is a mammoth book, but the basic argument is simple: you can’t hate Jews for their race until you frame the supposed flaw of Jewishness in racial terms. Thus, race-thinking comes before racism. Before race-thinking, Europeans hated Jews for completely different reasons: religion! You don’t exterminate other religions; you convert them. But once you have race-thinking, you can create justifications not just for anti-Semitism but for colonialism, imperialism, and the chattel slavery of Africans. You can see strands like this in William Shakespeare’s The Merchant of Venice, which is (supposed to be) a comedy! Jessica and Lorenzo get a happy ending. Miscegenation fears require a biological theory of social differences.
  2. The Holocaust happened because Europeans started treating each other the way they treated indigenous peoples in the rest of the world. Europeans learned to think racially in the colonization of Africa, and the European model for dealing with resistance involved murderous concentration camps. Thus, when race-thinking eventually returned as a form of governance in the European continent, so too did the concentration camps. (Arendt makes this claim in the second volume of The Origins of Totalitarianism, though it’s built into the structure of the book. It’s often called the “boomerang” thesis: that the men (like Lord Cromer and Cecil Rhodes) who practiced racial imperialism brought those techniques back to Europe to found “continental imperialism.”)
  3. Totalitarianism is largely caused by the growth of a class of “superfluous” people who no longer have a role in their economy. In an industrializing society, much traditional work can now be done with fewer workers. One possible solution to this oversupply of workers is to put some of that surplus labor force to work monitoring, policing, and murdering the rest.
  4. Ideologues ignore counter-evidence. A very good way to understand ideology is as a logical system for avoiding falsification. There are alternative theories on which ideology isn’t immune to counter-evidence but instead merely exerts constant pressure: for instance, there’s a difference between what Fox News does and what Vox does, and Arendt’s account is more useful for criticizing Fox’s constant spinning than Vox’s technocratic neoliberalism. But Arendt supplied us with a useful account of ideology that is closest to our standard use and our current need.
  5. The language of human rights is noble and aspirationally powerful. However, statelessness renders most rights claims worthless in practical terms and in most judicial institutions. Someone who must depend on her rights as a “man” or a human is usually worse off in legal terms than an ordinary criminal.
  6. You don’t discover yourself through introspection. You discover yourself through action. The main set of claims she made in The Human Condition about the role of public and political life strikes me as pretty important, especially insofar as it denigrates economic and racial identity politics. Think of the cocktail party version of this: on meeting a new person, some people will ask, “Where do you work?” in order to get to know them, identifying them through their profession, their economic role. Others will ask: “What are you into?” as if to say that our recreation and consumption is what defines us. But for Arendt, the appropriate question is: “What have you done? What do you stand for?”
  7. If you take that seriously, “identity” politics is frustratingly restrictive. A person is not defined merely by the class or race they come from: they are defined by the principles upon which they act. While it is often necessary to step into the political sphere as a representative of Jews or women–and Arendt acknowledged that this becomes unavoidable when one is attacked as a woman or a Jew–the best kind of politics allows us to enter the political sphere as ourselves, not knowing what we will discover about those selves until we have acted. So the need for identity politics is an indication of larger injustices: we respond as members of our groups when systematic and institutional forces oppress us as members of these groups.
  8. Revolutions that aim for political goals are more likely to succeed than revolutions that aim for economic goals. Misery is infinite and thus insatiable; political equality is comparatively easy to achieve. Thus it’s important to connect economic complaints to a deprivation of political equality: the important problem with white supremacy, for instance, is not that whites “have” more than Blacks, but that we count for more, that it is uncontroversial that “White Lives Matter.” (Though an important indication of that “counting for more” is that white people have more than Blacks because we continually plunder Black people and are able to get away with it systematically.)
  9. Philosophy as a discipline is fundamentally at odds with politics. This is a problem for political philosophy, and it helps to explain why so much of political philosophy is hostile to politics and tries to subsume the agonistic nitty-gritty of the public sphere under rules of coherence and expert knowledge. This is because thinking as an activity is a withdrawal from active life, and especially politics: the fundamental conflict between the eternal and the ephemeral is not one that can be usefully bridged, and most often those encounters are pernicious for both thinkers and doers.
  10. Work and labor are different. Some activities are repetitive and exhausting, and only biological necessity forces us to continue them. Some activities make the world and our lives within it meaningful and fruitful. Many people have economic roles that mix the two activities, but still and all they are distinct. What’s more, there’s no shame in wishing and working for a world without labor, perhaps a world of automation. But a world without work would be fundamentally meaningless.
  11. Evil is not complicated, so don’t overthink it. (No kidding, either. Arendt’s view is the opposite of the Spaceballs version of evil: “Evil will always triumph over good because good is dumb.” She seemed to think that evil is dumb and that’s why it can be so powerful.)

Is Deliberate Underpolicing a Problem?

ProPublica thinks so: What Can Mayors Do When the Police Stop Doing Their Jobs?

Rises and falls in crime rates are notoriously hard to explain definitively. Scholars still don’t agree on the causes of a decades long nationwide decline in crime. Still, some academics who have studied the phenomenon in recent years see evidence that rising rates of violence in cities that have experienced high-profile incidents of police brutality are driven by police pullbacks. Many criminologists also cite the general deterioration of trust between the community and police, which leaves residents less likely to report crimes, call in tips or testify in court. Added to that are the dynamics that are now likely also driving a rise in violent crime, even in cities that have not witnessed recent high-profile deaths at police hands: the economic and social stresses of the pandemic lockdowns, including disruptions to illegal drug markets, and the usual seasonal rise in violence during summer.

I tend to discount the so-called “Ferguson Effect,” because overall crime rates are so noisy, and because Michael Brown was killed while the crime rate was already rising.

ProPublica acknowledges this evidence, but then cites anecdotes from Baltimore to raise the problem anew:

But the post-consent-decree pullback did not result in a rise in violent crime in the city, whose homicide rate remained very low compared with other large cities. In this, the city is representative of a broader trend, according to two recent de-policing studies. Richard Rosenfeld, a criminologist at the University of Missouri at St. Louis, and Joel Wallman, the director of research for the Harry Frank Guggenheim Foundation, examined the impact of arrest rates in 53 large cities on homicide rates from 2010 to 2015. They found that arrests, especially for less serious crimes such as loitering, public intoxication, drug possession and vagrancy, had already been dropping over that period, even prior to the rise of the Black Lives Matter movement in 2014. And they found that in nearly all of those cities, the declining arrest rates did not result in higher rates of violence. To put it another way: Over the first half of the past decade, many cities shifted away from the “broken windows” style of policing popularized in New York under former Mayor Rudy Giuliani, but even as they did so, violent crime continued to decline in most places, as it has since the early 1990s.

Here are the anecdotes about Baltimore:

In Baltimore in 2015, the underpolicing was so conspicuous that even some community activists who had long pushed for more restrained policing were left desperate as violence rose in their neighborhoods. “We saw a pullback in this community for over a month where it was up to the community to police the community. And quite frankly, we were outgunned,” the West Baltimore community organizer Ray Kelly told me in 2018. In fact, the violence got so out of hand — a 62% increase in homicides over the year before — that even some street-level drug dealers were pleading for greater police presence: One police commander, Melvin Russell, told New York magazine in 2015 that he’d been approached by a drug dealer in the same area where Gray had been arrested, who asked him to send a message back to the police commissioner. “We know they still mad at us,” the dealer said. “We pissed at them. But we need our police.”

I think there’s good reason to be skeptical (beyond the self-serving motivated reasoning inherent in a police commander’s report of a drug dealer’s plea): aggregate crime levels are a noisy phenomenon, and they’re unusually responsive to the agencies that are charged both with monitoring them and with lowering them. We know precincts in NYC would “juke the stats,” and we also know that a lot of crime is inexplicably random, or tied to the efforts of third parties. So if there are two cities where police pullback was associated with subsequent increases in violent crime, and hundreds of cities where it wasn’t, it looks irresponsible of ProPublica to write this article, even if it’s ultimately a sympathetic one.

There’s some historical justification for this view, as well:

The Week Without Police: What We Can Learn from the 1971 NYC Police Strike

Over the course of the five day strike, there was no apparent increase in crime throughout the city. In fact, the only real differences noted by reporters were an increase in illegally parked cars and people running red lights, the actions of opportunistic motorists. Richard Reeves, writing for the New York Times, said “New Yorkers— ‘a special breed of cats’…went about their heads‐down business. There was no crime wave, no massive traffic jams, no rioting.” Some attributed all of this to Police Commissioner Patrick V. Murphy’s “visible presence” strategy of deploying superior officers and detectives in patrol cars in heavily populated areas, like Times Square. Others simply attributed it to the cold. However, the strike brought to light another very real possibility: maybe the city was able to function as normal with a much smaller number of police officers.

In New York, major crime complaints fell when cops took a break from ‘proactive policing’

Each week during the slowdown saw civilians report an estimated 43 fewer felony assaults, 40 fewer burglaries and 40 fewer acts of grand larceny. And this slight suppression of major crime rates actually continued for seven to 14 weeks after those drops in proactive policing — which led the researchers to estimate that overall, the slowdown resulted in about 2,100 fewer major-crimes complaints.

Here’s the underlying 2017 study. (Here’s where I predicted these results in 2014.)

At the same time, the version of policing reform that’s most commonly endorsed by leftist politicians like Alexandria Ocasio-Cortez is one where the simultaneous over- and under-policing of African-American communities is understood to be part of the same phenomenon, and corrected together such that Black Americans finally receive the same treatment as middle-class whites.


If the ideal of policing abolitionists is that we should all have responsive, service-oriented police, then a very good way to get there would be community control boards. My colleague Olúfẹ́mi Táíwò has argued that this could just as easily be abolitionist as reformist:

By taking public control over the police who handle the bulk of arrests, we act before other parts of the system can get involved. Without community control, abolition just means asking a larger set of white supremacist institutions to restructure a smaller set. Instead, we are asking our neighbors.

Come Work in Prison Education at Georgetown University

We’re hiring two new staff members for the education team at PJI, which I will supervise.

(We’re also hiring a Department Administrator!)

I’m incredibly proud of the work that we do at the Prisons and Justice Initiative–but this has been an especially powerful year. After the Pivot graduation this June, we all thought we had settled into a rhythm, until a confluence of events suggested that we’d be able to expand the Scholars program in the corrections system in the state of Maryland, with a bachelor’s degree. Now the Andrew W. Mellon Foundation has agreed to help fund that expansion.

Georgetown has been committed to teaching in prisons in one way or another for almost forty years. The support of the Andrew W. Mellon Foundation will allow us to redouble that commitment, with a bachelor’s degree and an expanded footprint in Maryland. We love to showcase the genius of students who are incarcerated—both their greatness and their goodness—because it points to the more fundamental fact that they are our neighbors and fellow citizens.

Because of mass incarceration, there are millions of people incarcerated in the US who would not be incarcerated in most of the rest of the world: generations of should-have-been undergraduates in prisons and jails who have been waiting for their chance to be that first-year student in a philosophy class or to write that senior thesis on trade policy. With the support of the Andrew W. Mellon Foundation, Georgetown is going to educate the next generation of formerly-incarcerated leaders who will help to reverse the policies that trapped them.

I’ve always argued that punishment requires mutual responsibility, and that one form of that mutual responsibility is a willingness to both teach and learn. We need to respond to harm with something other than more harm. Georgetown gets this, in part because of the Christian commitment to “visit the prisoner” and the Jesuit ideal of cura personalis, “care for the whole person.” That pedagogical ideal ends up meaning more than just “a sound mind in a sound body.” It means a commitment to serious attention to others, even students and even those we tend to ignore. Ignatius—himself formerly incarcerated—put it this way: “be slow to speak and patient in listening to all.” It’s the model for what we’re trying to do with prison education.

If those sound like your values, please apply!