Semantic Insatiability and Logophilic Etymologies

Most people know the experience of saying a word over and over again until it loses its meaning and becomes a sound: this experience is called “semantic satiation.”

But what about the opposite experience? Can you puzzle over a word until it gains meanings? Can we re-saturate meanings? Can we make words resilient against this kind of loss? Call this semantic insatiability: a technique for bolstering an idea against the exhaustion of its meaning.

Here’s a literal example: Gertrude Stein’s “Rose is a rose is a rose is a rose.” She liked to say that the goal of that line was to steal roses back for poets to use again, to strip all the semantic goop of romance off of the flower. “I think in that line the rose is red for the first time in English poetry for a hundred years.” To make it usable again merely as a flower, to accrue new meanings, connotations, and implications.

If you read her poem “Sacred Emily,” where this line appears, her claim becomes a lot less plausible–like most of her poetry, it reads like logorrhea, and the phrase is lost in a sea of repetitive noise. (Her poetic explanations tend to confirm this view.) I don’t like the poetry, but I like the claim. It suits my purposes even if it seems both false in context and certainly wrong as a matter of causality. (“Sacred Emily” was published in 1913, and in 2024 the Society of American Florists estimated that we’d send 250 million roses domestically. If the poet really tried to save roses from their cultural baggage, she failed.)

Some people who love stories also love poetry. Some don’t. I love both narratives and the words, sentences, and paragraphs that help to build them. And so I want to talk about how I, personally, try to reverse semantic satiation–how I re-saturate semantics at this level.

Contronyms Resist Semantic Satiation

Consider the contronym: a word that is also its own antonym. “Fast” can mean both “stable” (hold fast!) and “quick”; “sanction” can mean both “punish” and “authorize.” And as you’re messing around with these words, trying them out in different sentences, semantic satiation might set in–it all starts feeling like gibberish. If you spend some time with the Oxford English Dictionary (or the Online Etymology Dictionary, for folks without academic library access), though, you’ll learn a lot about how the meanings diverged, and at least in my experience, the ordinary meanings and distinctions will re-emerge. “Fast” began in Old English as “constant, secure, or watertight.” We can still hear that sense in “steadfast” and similar terms. But we don’t know precisely how we got to the speedy sense of the word. Perhaps it came through Old Norse, where “fast” meant “firmly” (unmoving): that sense of strength gets applied adverbially to standing and holding, and from there to fighting, drinking, or even running (Old Norse drekka fast, “to drink hard”). To run firmly–to run hard–then implies quick movement. Other things are quick by analogy.

Is that how it happened? Or did it come from “fast” meaning “nearby, to hand, immediate”? “Stay fast by my side!” Then somehow from “nearby” to “immediately” to “quickly”?

What I want to note is how we don’t tend to trip over semantic satiation as we explore contronyms. Puzzling this out doesn’t exhaust the word’s meaning the way ordinary repetition might. I suspect this is because we’re often contextually cueing the varied meanings: tracing the linguistic origins and carefully noting the different senses of the word in different contexts can actually restore what mere repetition has depleted. I think this is one of the secret gifts of studying the languages from which English is derived, as well: by learning Latin, French, or Spanish; Greek, German, or Sanskrit, you get a deeper sense of the ways your own language is put together.

Semantic Satiation and Linguistic Determinism

Consider another contronym: if you don’t want to make an oversight, you may need oversight. The term indicates both the mistake and a technique for mistake prevention. And it doesn’t stop there: it helps for someone to look over your work so you don’t overlook something important. If you dig into these error/correction pairings, it’s not clear which came first, etymologically: the error or the editor. And as you’re digging, you come to see that English has quite a lot of words for watching someone from above: survey, surveillance, supervisor, superintendent, overseer, overwatch, watch over, etc. English has lots of words, period, but it seems like it especially has a lot of hierarchical words for watching someone else work!

Does this tell us something about Anglophone culture? Is this–like having lots of words for snow, with the attendant Sapir-Whorf theories of cultural determinism and linguistic relativity–an indicator that hierarchy, bureaucracy, or even colonization and enslavement have such deep roots in our history that they’ve found lots of different expressions over time?

I used to think that the Sapir-Whorf theory had been falsified, but it’s more that it’s always been ambiguous between two different versions: (a) that language provides the medium for thought, so that what we lack words for we cannot think at all; and (b) that language merely provides affordances and resistances. It’s clear that we often think in words, and thus if there’s no word for an idea there will be some difficulty in thinking it. But what happens then? Perhaps the thought is denied and goes unthought: we stutter and give it up. Perhaps we misuse another word. Perhaps we create a neologism. And perhaps we use poetic language to eff the ineffable with metaphor and simile.

My sense is that Benjamin Whorf, at least, thought we couldn’t understand the sheer variety of the world if our language didn’t give us the tools. A language that lacked some term would be spoken by a group that couldn’t grasp the attendant concept. You hear examples occasionally: did the ancient Greeks lack a word for “blue”? Why did Homer call the sea “wine-dark”? Did he lack the ability to see its true color? Is there really an Amazonian tribe that lacks number terms beyond “one, two, and many”? How does that work? This is empirically testable, though, and while there’s some evidence for the mathematical incapacity of the Pirahã, there’s fairly clear counter-evidence on color: at best we perceive different colors a bit more quickly when we have a color term to match them. (It’s too bad, really, that we have managed to empirically disenchant the fanciful view that inhabitants of the ancient world looked out on vistas with utterly different eyes.)

Is there anything left of the Sapir-Whorf hypothesis worth crediting? On one version of Whorf’s view, a language will gradually develop to capture distinctions and ideas that are especially relevant to its speakers. In short: if you don’t live in a highly hierarchical world, you won’t understand all the ways that bosses can come to be relevant to your life, and your language won’t give you those tools. Meanwhile, if you’re surrounded by snow from a young age, and your culture and history have been built up around snow for generations, your language will give you the concepts to see how many different ways snow can be.

We have the same word for falling snow, snow on the ground, snow packed hard like ice, slushy snow, wind-driven flying snow — whatever the situation may be. To an Eskimo, this all-inclusive word would be almost unthinkable; he would say that falling snow, slushy snow, and so on, are sensuously and operationally different, different things to contend with; he uses different words for them and for other kinds of snow. The Aztecs go even farther than we in the opposite direction, with ‘cold,’ ‘ice,’ and ‘snow’ all represented by the same basic word with different terminations; ‘ice’ is the noun form; ‘cold,’ the adjectival form; and for ‘snow,’ “ice mist.”

Whorf, Benjamin Lee. “Science and linguistics,” Technology Review (MIT) 42, no. 6 (April 1940).

Another fun recent example is the observation that the English language has an absolutely gob-smacking number of words for being intoxicated by alcohol. As Christina Sanchez-Stockhammer and Peter Uhrig recently detailed, there are at least 546 synonyms for the word “drunk”!

A meaningful argument structure construction without any lexical content might be available to speakers, but we conjecture that additional contextual cues as to the topic of drunkenness will be needed to successfully use this construction, the most prototypical form of which is

be/get + intensifying premodifier + -ed-form.

Coming back to the theoretical questions asked at the end of Section 2, we can say that the wide range of words observed in the already existing lists of drunkonyms seems to support the view that there is a large amount of words that one could potentially use to creatively express drunkenness in English. The wording “any word” put forward by McIntyre appears slightly too general, though, as it is difficult to imagine words such as is, the or of to mean ‘drunk’. Also, for nouns such as carpark or gazebo, it is strictly speaking not the word itself but an -ed-form of it, i.e. carparked or gazeboed, that expresses the meaning of drunkenness in the relevant contexts.

Sanchez-Stockhammer, Christina, and Peter Uhrig. “‘I’m gonna get totally and utterly X-ed’: Constructing drunkenness,” Yearbook of the German Cognitive Linguistics Association 11, no. 1 (2023): 121–150. https://doi.org/10.1515/gcla-2023-0007
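Just to see how mechanically productive that template is, here’s a toy sketch in Python–my own illustration, not code or data from the paper; the word lists and the ed_form helper are hypothetical choices:

    # Instantiating the prototypical drunkonym construction:
    #   be/get + intensifying premodifier + -ed-form
    # The nouns and intensifiers below are illustrative picks, not the paper's data.

    VERBS = ["be", "get"]
    INTENSIFIERS = ["totally", "utterly", "completely", "absolutely"]
    NOUNS = ["gazebo", "carpark", "trolley", "hammer"]  # hypothetical noun bases

    def ed_form(noun: str) -> str:
        """Naively derive the -ed-form of a noun (gazebo -> gazeboed)."""
        return noun + ("d" if noun.endswith("e") else "ed")

    def drunkonyms():
        """Yield every instantiation of the construction."""
        for verb in VERBS:
            for adv in INTENSIFIERS:
                for noun in NOUNS:
                    yield f"{verb} {adv} {ed_form(noun)}"

    for phrase in drunkonyms():
        print(phrase)  # e.g. "get utterly gazeboed"

As the authors note, these outputs only come to mean “drunk” with contextual cues about drinking–but supply that context, and nearly any of them works.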

What is it about the Anglophone world that makes it so necessary to express drunkenness? Anglophone countries aren’t the top alcohol consumers, for instance. Nor do we have the most binge drinking. Yet somehow we talk about it with more variety and creativity, which is more evidence against the stronger versions of Whorf’s claims.

Most people are at least a little bit tempted by Sapir-Whorf just because it seems like it can’t be totally irrelevant to your experience that your first language has or lacks some concept, or prioritizes or deprioritizes some structure. But since language comes alongside membership in a community, and communities are often pretty internally diverse, this gets caught up with the same old problems of strong cultural relativism. It’s at most an influence, not a determination–unless the determinants aren’t simple things like which words (and thus concepts) you have, but rather what kinds of grammatical structures your language allows.

Philology and Logophilia

Here is where I wish there were more appreciation for Friedrich Nietzsche’s philology than his philosophy. One of his major insights is that the concepts crystallized in our language often radically diverge from their origins, which I think makes the strong version of Whorf’s claims seem obviously wrong: there can’t be linguistic determinism if we’re constantly re-appropriating language for new purposes and forgetting its origins. (Even the “drunkenness” formations are evidence of this: there is very little of the “gazebo” left in our understanding of what is happening to the drunken undergraduate who describes himself as “being utterly gazeboed.”)

Nietzsche delighted in the ways that phonemes diverge from morphemes. He even suggested that our forgetfulness about the origins of our language was part of its power, and part of how we resist sedimentation–even as some survivals dangerously persist. So I get to drop my favorite line again here. Describing truth, which might be a quality that some words and concepts aspire to, he explains that it is:

A mobile army of metaphors, metonyms, and anthropomorphisms—in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically, and which after long use seem firm, canonical, and obligatory to a people: truths are illusions about which one has forgotten that this is what they are; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins.

Friedrich Nietzsche, “On Truth and Lie in an Extra-Moral Sense”

I love the image of the philologist as numismatist, collecting old coins, discerning their worth, and perhaps restoring them to their original form. This is to say that Nietzsche’s genealogical practice was itself a kind of semantic re-saturation. His major works often pursue one or another concept through a historical and genealogical line, not to restore an origin but to celebrate possibilities. You can’t think about “guilt” without thinking about “debts” after you read the Genealogy of Morals. Arguably some of his other efforts in this regard are less effective, but he was certainly trying to make us think differently and more fully when we hear clichéd invocations of good and evil; faith and God; power and authority; creativity and individuality; pity and compassion; truth and honesty. Some of what he does is induce semantic satiation and then offer to re-saturate those meanings anew (and with his own preferred valences attached). That particular effort has been one of the reasons that academic philosophers continue to read him, though it’s prone to embarrassing misuse!

So what?

Say all this is true:

  • repetition depletes meaning
  • linguistic puzzles resist this depletion
  • studying the origins of ideas can grant them a new lease on life
  • we’re not bound to those origins
  • we’re constantly engaged in creative rearrangements of our linguistic affordances

What then?

It certainly seems like we’re entering a particularly dangerous period where a lot of text is being generated mechanically (ironically, unlike Stein’s poetry). Emily Bender calls large language models “stochastic parrots.” We are approaching the zeitgeist of semantic satiation, and we’re going to have to work hard just to hold on to meaningfulness. (Probably we always have to do that kind of work, but it’s more obvious now.) In professional philosophy, we’ve seen the growth of normatively inflected “conceptual engineering” and “ameliorative analysis,” where conceptual analysis is being used to advance nakedly political projects, laying bare the ways that the very meanings of our words are subject to wrangling and contestation. In some future posts, I’d like to try to think both of those trends together, perhaps via Heidegger’s frustratingly fascist etymological approach.

(Thanks to my friend Michael Willems for discussions on this topic.)

Resisting the Fatalism of the Behavioral Revolution

I love Peter Levine’s latest post, “don’t let the behavioral revolution make you fatalistic.”

“Tversky’s and Kahneman’s revolutionary program spread across the behavioral sciences and constantly reveals new biases that are predictable enough to bear their own names. […] These phenomena are held to be deeply rooted in the cognitive limitations of human beings as creatures who evolved to hunt-and-gather in small bands on African plains. Not only has the burgeoning literature on cognitive biases challenged rational market models in economics, but it undermines the “folk theory” of democracy taught in civics textbooks and widely believed by citizens and pundits.”

I think Levine captures something important about the literature on cognitive biases and heuristics: its practitioners tend to put people in labs and poke them in such a way as to show the ways in which individuals are prone to mistakes. Yet this is widely known, and many of the worst mistakes to which individuals are prone are things we have developed solutions for in ordinary life.

“Behavioral science would have predicted the demise of the independent newspaper–but about a century too soon. In fact, “the press” (reporters, editors, journalism educators, and others) sustained the newspaper as a tool for overcoming human cognitive limitations for decades.”

As such, the lab work undermines methodological individualism but doesn’t actually help us understand communities of inquiry or institutions of knowledge-production. We are extended minds, always dependent on cognitive “prosthetics.” We depend on watches and newspapers and Google and our friends to remember and process information. And yet I think Levine is perhaps too optimistic about the possibilities of “prosthetics.” (One of Levine’s finer qualities is that he regularly makes “too optimistic” seem realistic in retrospect.)

I think we should especially push on the idea that journalism is or has been a solution to cognitive limitations. The golden age of journalism was a short period marked by very low elite disagreement on major issues, as elites joined forces against communism and worked first to suppress–and then to manage–the Civil Rights revolution for women and Black people. This cynical potted history of the trustworthy news ignores much–but so does the optimistic one.

I’ve always thought that the main power of the “behavioral” revolution was to give scientific precision and credence to insights from earlier philosophy, political theory, and psychology, as well as to parse the size of effects and adjudicate the disagreements between clichés that would often emerge. So sure: you can find a lot of Tversky and Kahneman in Francis Bacon, Adam Smith, David Hume, and Friedrich Nietzsche, but you can also find a lot in those authors that has been overturned or rendered more carefully in later work.

And the big insights about democracy’s weaknesses–the ones that go back to Plato and Aristotle–those didn’t go away in the middle of the 20th Century. They were perhaps suppressed by the Cold War abroad and the race war here at home, but something big happened when the demographic models for redistricting got an order of magnitude better in 2010 than they had been in 2000. And those models are just getting better.

What’s more, the behavioral revolution can also be used for good: I’ve repeatedly defended these insights when applied to criminal justice, for instance. And one of the most famous “cognitive bias” studies comes not from the lab but from the real world. Danziger, Levav, and Avnaim-Pesso showed that:

“experienced parole judges in Israel granted freedom about 65 percent of the time to the first prisoner who appeared before them on a given day. By the end of a morning session, the chance of release had dropped almost to zero.

“After the same judge returned from a lunch break, the first prisoner once again had about a 65 percent chance at freedom. And once again the odds declined steadily.”

This is the kind of thing that we might have suspected before. Any professor with a stack of papers to grade might have suspected they were more lenient after dinner, for instance. But this is definitive, real-world proof of a problematic bias, a result of the behavioral revolution.

Ironically, it doesn’t actually make me very fatalistic: it gives me hope. I hope that Israeli judges are reading this and worrying about it. I hope they are taking snacks to work. I hope that parole lawyers everywhere are taking note of these facts and acting to protect their clients from these biases. New information about our cognitive limitations doesn’t have to make us hopeless. And really, that’s Levine’s point.

Nietzsche and the Parable of the Talents

What, then, is truth? A mobile army of metaphors, metonyms, and anthropomorphisms—in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically, and which after long use seem firm, canonical, and obligatory to a people: truths are illusions about which one has forgotten that this is what they are; metaphors which are worn out and without sensuous power; coins which have lost their pictures and now matter only as metal, no longer as coins. (Friedrich Nietzsche, “On Truth and Lie in an Extra-Moral Sense.”)

I think most philosophers will be familiar with this famous essay by Nietzsche, deflating our conception of truth into a kind of stripped metaphor. This idea that words are like coins that have gotten so old and rubbed clean that they count only as weights of metal and not as coins captures the ways in which the etymologies of words can surprise and delight us, and give us an understanding of our history–and ultimately of human meanings–that we have not previously explored.

Yet it has always seemed to me that there was a direct reference hidden in these lines–almost certainly a well-known one that Nietzsche the philologist would have been expecting us to catch. The coins that become mere metal complete a transformation that began in the Gospel of Matthew, in the “parable of the talents.” The word “talent” in modern English means a natural skill or aptitude. It’s a term for innate competence or mastery. Yet for the Greeks it was a unit of measure, and for the Romans it was a unit specifically used for the measure of currency. How did this odd “worn out metaphor” come about?

In the parable, Jesus depicts a master leaving on a long trip: he leaves different sums of money to three different servants. When he returns, the servants given the most money have invested it, while the servant given the least has merely preserved the original sum. So the richer servants hand over increased wealth, while the poorest merely returns the principal. The master punishes him for not investing as the richer servants had done.

It gets worse:

But his master answered him, ‘You wicked and slothful servant! You knew that I reap where I have not sown and gather where I scattered no seed? Then you ought to have invested my money with the bankers, and at my coming I should have received what was my own with interest. So take the talent from him and give it to him who has the ten talents. For to everyone who has will more be given, and he will have an abundance. But from the one who has not, even what he has will be taken away. And cast the worthless servant into the outer darkness. In that place there will be weeping and gnashing of teeth.’

On their own, these lines from Matthew seem to be advocating for a kind of “success theology,” by which God demands that we grow rich or suffer punishment. If nothing else, they endorse usury and interest-bearing loans, which the Church forbade.

But this passage is followed by a list of commandments that seem utterly at odds with the claim that “Them that’s got shall have/Them that’s not shall lose/So the Bible says/And it still is news,” as Billie Holiday sang. Thus the passage–or perhaps the compositor–already begins the transition in the meaning of the word (we see the same in Luke, but the term there is “mina,” which didn’t receive the same development). How do we save the passage from the explicit reading?

As early as Augustine, the passage has been interpreted as an allegory: since the direct meaning is offensive and at odds with what follows, the implicit meaning must be otherwise. Augustine saw it as a passage about salvation and about not wasting the opportunity it supplies. Later commentators analogized the talents to God-given abilities, and later still we find ordinary-language mentions of “talents” with no connection to the Biblical text at all–as well as, eventually, the success theology idea.

But back to Nietzsche: it seems obvious to me that he is referencing this particular history in his account of the coins returned to metal once again. How odd that we would embed meanings in innocent words, only to have later generations read them back out again! We’re doing that all the time, at many different levels, mobilizing that army of metaphors in a way that treats crystallized human relations as if they were merely tools for expressing banal observations about the color of snow.

I call it “deflationist.” Nietzsche makes an effort to reduce Christian allegories to their constituent parts, to take all meanings and make them mere patterns of behavior, all while spinning out more allegories, parables, and poetic embellishments. In particular, explorations of metaphysics become etymological explorations into the play of metaphors. In a future post, I hope to detail the ways in which Hannah Arendt picks up this metaphysical deflation in her own work, and try to specify what it means for her conception of truth.