Francis Bacon on confirmation bias

Nowadays, we call it “confirmation bias”: our deep-seated tendency to prefer information that confirms our existing positions. A political controversy erupted in 2010 when the libertarian blogger Julian Sanchez accused conservatives of falling prey to “epistemic closure,” which was his preferred term for the same problem:

Reality is defined by a multimedia array of interconnected and cross-promoting conservative blogs, radio programs, magazines, and of course, Fox News. Whatever conflicts with that reality can be dismissed out of hand because it comes from the liberal media, and is therefore ipso facto not to be trusted.

Edward Glaeser and Cass Sunstein call this phenomenon “asymmetric Bayesianism” and give examples from the left as well as the right. I argued earlier this week that Amy Chua and Jed Rubenfeld’s book The Triple Package suffers from the same problem.

But–as is often the case–Francis Bacon got there first. In his remarkable compendium of human cognitive frailties (published in 1620), he included this problem:

The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate. And therefore it was a good answer that was made by one who, when they showed him hanging in a temple a picture of those who had paid their vows as having escaped shipwreck, and would have him say whether he did not now acknowledge the power of the gods — “Aye,” asked he again, “but where are they painted that were drowned after their vows?” And such is the way of all superstition, whether in astrology, dreams, omens, divine judgments, or the like; wherein men, having a delight in such vanities, mark the events where they are fulfilled, but where they fail, though this happen much oftener, neglect and pass them by. But with far more subtlety does this mischief insinuate itself into philosophy and the sciences; in which the first conclusion colors and brings into conformity with itself all that come after, though far sounder and better. Besides, independently of that delight and vanity which I have described, it is the peculiar and perpetual error of the human intellect to be more moved and excited by affirmatives than by negatives; whereas it ought properly to hold itself indifferently disposed toward both alike. Indeed, in the establishment of any true axiom, the negative instance is the more forcible of the two.

(Novum Organum, XLVI)

Of course, Bacon’s goal was not to lament our tendency to err but to help us fix it. Evidence has accumulated to reinforce his concerns about our blinkers and biases. Yet, as the Yale psychologist Paul Bloom recently wrote in The Atlantic, even the accumulated evidence does not show that we are fundamentally irrational. We very often make wise and deliberate choices. It all depends on the context of choice and the methods we use to deliberate:

So, yes, if you want to see people at their worst, press them on the details of those complex political issues that correspond to political identity and that cleave the country almost perfectly in half. But if this sort of irrational dogmatism reflected how our minds generally work, we wouldn’t even make it out of bed each morning.

As the heirs of Bacon’s scientific revolution, we should relentlessly investigate all systematic forms of human error–not to shake our faith in reason but to help us to reason better.

The post Francis Bacon on confirmation bias appeared first on Peter Levine.

what is privilege?

What do we mean when we say “privilege,” in a political or social context?

Here are some valid everyday uses of the word: “It is a real privilege to be here tonight.” “Playing football is a privilege, not a right.” “I feel privileged and grateful to be enrolled at this college.”

A privilege seems to be some kind of benefit or desirable standing that not everyone has. Some privileges are perfectly appropriate. They create meaningful and worthy categories, such as membership in a given organization or the right to practice a particular profession. According to Elinor Ostrom’s hugely valuable research on how people manage common pool resources (such as fisheries and forests), one of the general principles is the need for clear boundaries between insiders and outsiders. The insiders have the privilege to, for example, fish in a common pond. If everyone has that right, all the fish will be taken.

The problem is unjust privilege. Teaching Tolerance says, for example:

white skin privilege is a transparent preference for whiteness that saturates our society. White skin privilege serves several functions. First, it provides white people with “perks” that we do not earn and that people of color do not enjoy. Second, it creates real advantages for us. White people are immune to a lot of challenges. Finally, white privilege shapes the world in which we live — the way that we navigate and interact with one another and with the world.

Several empirical claims are implicit here: (1) certain advantages accompany whiteness in the US; (2) these advantages persist even when no one deliberately endorses them; and (3) whites tend not to acknowledge their privileges.

Built into those claims are moral premises: (1) It is OK to make distinctions, but not on the basis of race; (2) earned advantages are justifiable but unearned ones are not; (3) it is better to be conscious of privilege.

I happen to share these six propositions–on the whole–but they are controversial. From the left, Bill Mullen writes in Socialist Worker that the concept of white skin privilege divides working-class coalitions, makes racial identity look fixed and inevitable, conceals the underlying cause of racism, and blocks the only path that he believes in, which is economic revolution. A left critic might also reject the assumption that earned privileges are acceptable because they come from talent or hard work. Although there’s a big debate about what this statement implies, John Rawls insists that “no one deserves his place in the distribution of natural endowments” (Theory of Justice, 17).

From the opposite end of the spectrum, David Horowitz asserts that white skin privilege is a radical leftist myth, and “black skin privilege” is the real problem today because official policies that acknowledge race favor people of color.

Meanwhile, people who endorse the use of the phrase tend to talk about other forms of privilege as well. Race is said to “intersect” with gender, sexual orientation, citizenship status, and social class to create webs of privilege.

We will not soon conclude these debates; but some conceptual clarity may help. I think “privilege” is being used to mean unjust advantage, and that raises the question of what constitutes justice. Distributive justice is a whole topic unto itself. Allowing skin color to predict social outcomes is unjust, but preventing that does not fully satisfy justice. Getting what you earn (and only that) would be one definition of justice–not mine. Getting all that you need to meet your potential would be another definition–but I don’t think it’s possible, since human potential is unlimited. Having an equal share of the society’s rights and goods would also not be my definition, for a variety of reasons, including the fact that I don’t mind if other people have much more than I do (for I have plenty). Assuring everyone a reasonable minimum sounds good, but it is compatible with profound and invidious inequality above the line.

Despite the difficulty, I’d argue that one must first develop a theory of justice before one can identify “privilege” in the negative sense of that word.

The post what is privilege? appeared first on Peter Levine.

horizon as a metaphor for culture

(Chicago) The philosophers Edmund Husserl, Hans-Georg Gadamer, and Jürgen Habermas use the metaphor of a horizon to describe the background or framework of experience. Without addressing thorny questions of interpretation involving these three disparate and difficult authors, I’d like to defend the metaphor in general terms:

Any person at any given moment has a unique visible horizon–the line that divides the objects on the earth from the sky. Yet if I stand right next to you, or stand where you were a minute ago, my horizon will closely resemble yours. Thus the metaphor captures the uniqueness of individual experience while making difference a matter of degree that is somewhat within our control.

It’s possible for two people to have entirely different horizons–they cannot see any of the same objects. Yet those two people could move until their horizons overlapped. A person could stand between two individuals whose horizons did not overlap and be seen by each. Or a whole chain of people could connect two remote individuals, allowing them to share vicarious experiences.

Your horizon is a function of the way the world is and how you see things. To a degree, you can change how you look and where you stand, but you must start from somewhere that you did not choose.

You have the capacity to see anything within your horizon. But you cannot see it all at once. You can describe and communicate anything within your horizon, but you cannot ever describe it all. You are aware of the horizon as a whole, but your attention focuses on objects within it.

I think if you replace “horizon” with “culture,” most of these sentences will ring true. At any rate, ever since my first book, I have been criticizing theories of culture that presume that everyone who belongs to Culture A shares the same structure of beliefs, which must be different from the structure that defines Culture B. That kind of model promotes unwarranted relativism and skepticism.

The post horizon as a metaphor for culture appeared first on Peter Levine.

when east and west were one

(Washington, DC) I am highly skeptical of distinctions between “eastern” and “western” thought, considering the enormous diversity within both domains, the thousands of years of interaction between the two, and the arbitrariness of any border. (Why, for example, should a body of water less than 2,000 feet across be considered to divide two continents, Europe from Asia?)

But if you are committed to the distinction, it’s worth taking a look at the Milinda Panha. This text, written originally in Sanskrit or Pali before the year 200 CE, records a dialogue between a Greek king of India named Milinda (who is probably Menander I) and a Buddhist sage named Nagasena.

Menander I is well attested in Greek and Indian texts and archaeological evidence as a Greek king who converted to, and patronized, Buddhism. Here he is on a silver coin that reads “King Menander the Just” in Greek on one side and “Great King Menander, follower of the Dharma” in the Kharosthi script of South Asia on the other.

Nagasena is not known outside of this dialogue, wherein he is described as the son of a Brahmin who, having quickly exhausted the Vedic scriptures, studied Buddhism under a Greek monk named Dhammarakkhita (“Protected by the Dharma”).

Compared to a Platonic dialogue, the Milinda Panha includes more fantastical details. The gods, for example, are directly involved. But it paints appealing portraits of the human characters. Menander goes around stumping sages with metaphysical paradoxes until he meets Nagasena, who bests him on a couple of those exchanges. Then the King simply asks questions about Buddhist doctrine and Nagasena answers with illustrative similes. The King replies after each one, “Very good, Nagasena!”

I can read the text only through the 19th-century English translation of T. W. Rhys Davids. As far as I can tell, the perspective of the original document is orthodox Theravada Buddhism. Thus I am glimpsing Menander I through layers of interpretation, Asian and English; and it is hard to imagine how this European-born king of ancient India might actually have thought. But he and Nagasena seem to share the same social and cultural horizon, and the differences between them stem only from their respective roles as monarch and sage:

The king said: ‘Reverend Sir, will you discuss with me again?’

‘If your Majesty will discuss as a scholar (pandit), well; but if you will discuss as a king, no.’

‘How is it then that scholars discuss?’

‘When scholars talk a matter over one with another then is there a winding up, an unravelling; one or other is convicted of error, and he then acknowledges his mistake; distinctions are drawn, and contra-distinctions; and yet thereby they are not angered. Thus do scholars, O king, discuss.’

‘And how do kings discuss?’

‘When a king, your Majesty, discusses a matter, and he advances a point, if any one differ from him on that point, he is apt to fine him, saying: “Inflict such and such a punishment upon that fellow!” Thus, your Majesty, do kings discuss.

‘Very well. It is as a scholar, not as a king, that I will discuss. Let your reverence talk unrestrainedly, as you would with a brother, or a novice, or a lay disciple, or even with a servant. Be not afraid!’

The post when east and west were one appeared first on Peter Levine.

what defines conservatism?

Popular political words like “liberalism” and “conservatism” often name disparate ideas and movements. No one controls their definitions, and therefore no one can complain if they are used for incompatible phenomena. For instance, “liberalism” can mean government interventionism or pro-market critiques of the state; and “conservatism” may be equally ambiguous. Yet we ought to be able to say something about the central tendencies of conservative thought. A helpful generalization should have three features:

1) It should be trans-historical and global, not limited to current US terminology.

In the US today, conservatives often present themselves as being critical of the nation-state and bureaucracy. But that has not been true of, for example, Christian Democrats in Europe.

2) It should be reasonably encompassing of the movements that have actually called themselves “conservative.”

For instance, if one defines conservatism as libertarianism, that omits traditionalist and communitarian conservative movements. If one defines conservatives as people who want to return to the past, that omits Newt Gingrich types who are utopians about technology and markets.

3) It should be charitable.

You should define a political idea in a way that a proponent would accept before you debate his or her views. For instance, although I have not read Corey Robin’s The Reactionary Mind, I am highly skeptical of Robin’s definition of “conservatism” as “animus against the agency of the subordinate classes.” According to Robin, conservatism “provides the most consistent and profound argument as to why the lower orders should not be allowed to exercise their independent will, why they should not be allowed to govern themselves or the polity. Submission is their first duty, agency, the prerogative of the elite.” But what conservative would accept this characterization?

I would propose that the heart of conservative thought is resistance to intellectual arrogance. A conservative is highly conscious of the limitations of human cognition and virtue. From a conservative perspective, human arrogance may take several forms:

  • the ambition to plan a society from the center;
  • the willingness to scrap inherited norms and values in favor of ideas that have been conceived by theorists;
  • the preference for any given social outcome over the aggregate choices of free individuals;
  • the assertion that one may take property or rights away from another to serve any ideal; and/or
  • the elevation of human reasoning over God’s.

Now, these are separable claims. You can be an atheist conservative who has no objection to elevating human reason but deep concerns about state-planning. That is why conservatism is a field of debate, not a uniform movement. But it’s also possible to build coalitions, since, for example, Christian conservatives and market fundamentalists can unite against secular bureaucracies. Their reasons differ, but it is not only their practical objective that unites them. They also share a critique of the bureaucracy as arrogant.

If this is a fair description of conservatism, it should roughly describe the main tendencies of thinkers who have called themselves “conservative” over a broad sweep of time. I think it does. And some aspects of this definition are still visible in today’s Tea Party Republican Party. But our whole ideological spectrum is confused and atypical. If my definition of conservatism generally fails to describe the Republican Party, that may be because the GOP is not conservative in a deep sense. Meanwhile, as I argued in “Edmund Burke would vote Democratic,” it is sometimes the left in the US that seems most appreciative of local norms and traditions, most concerned about “sustainability” and fearful of human arrogance, and most resistant to planning and social control.

The post what defines conservatism? appeared first on Peter Levine.

on the moral dangers of cliché

Here are five brief studies of people who made heavy use of clichés: Francesca da Rimini, Madame Bovary, Adolf Eichmann, W.H. Auden, and Don Gately from David Foster Wallace’s novel Infinite Jest. I offer these portraits to explore the moral pitfalls of cliché and to investigate how our postmodern situation differs from the medieval, Romantic, and high-modern contexts of the first four examples. I end with the suggestion that in our time, the desire to shun cliché can also be a moral hazard.

In the days of moveable type, printers cast common phrases as single units of type to save laying them out one letter at a time. In France, typesetters called those units clichés. When we assign a phrase to a word processor’s keyboard command because we use it frequently, that is a modern version of the original printer’s cliché.

There is nothing wrong with repeating functional phrases: “To whom it may concern”; “On the other hand.” We skim over these formulas without cost. But the word “cliché” now has a pejorative sense, implying a fault in writing. A cliché is an expression that has been used so often that it has lost its impact. Using a recycled phrase can undermine the aesthetic value of a work. It can also be a moral failure, if the writer or speaker uses it to avoid a serious issue or problem.

Francesca da Rimini

Francesca is a favorite character from Dante’s Inferno, represented countless times in Romantic and modern literature and art. A particularly famous example is Rodin’s sculpture of “The Kiss,” which shows Francesca embracing her lover Paolo. In Romantic versions, she is depicted as a heroine who suffers because her authentic and natural impulse to love outside of her marriage is forbidden by artificial and conventional rules. As a character in his own book, Dante is so moved by her plight that he faints.

But Dante (the author) put her in hell. A careful reading of her two short speeches reveals, first, that she talks entirely in quotations or summaries of previous writing about love, and, second, that all of her references contain errors. Indeed, Barbara Vinken has claimed that every quote by a damned soul in the whole Inferno is in error.

For example, Francesca says (in my translation)

When we read that ‘the desired
Smile then was kissed by the ardent lover,’
he who ‘can never be torn away’ kissed
me, all atremble. A Gallehaut was the author
of that book, and seductive was his fancy.
On that day, we read no farther.
(Inf., v, 133-138)

Francesca is quoting here from the French prose romance Lancelot. But in the known versions of the roman, Lancelot never initiates the kiss. He is bashful and passive to the point of foolishness, and Queen Guinevere makes all the advances. Yet the ardent lover in Francesca’s quotation is male. She has confused this text with other episodes from the courtly love tradition, such as the one in which Tristan kisses Iseult while they play chess together. The details of the Lancelot story fade in her mind, to be replaced with a generic formula: damsel taken by ardent knight. Perhaps this is because she wants to shift the blame from Guinevere (the woman) to Lancelot (the man). Or perhaps it is because she reads literature as a set of clichés.

A defining feature of a cliché is that it is portable and recyclable—a ready-made scenario or sentiment that shows up in many contexts. When we employ clichés, we often commit what Alfred North Whitehead called the “fallacy of misplaced concreteness.” This is the fallacy of taking something specific that belongs in one context and applying it elsewhere. Francesca treats the love scene between Lancelot and Guinevere that way, and to do so, she must ignore its peculiarities.

The works that Francesca cites in virtually every line were so popular in the high Middle Ages that she is like a modern person who speaks entirely in phrases from top-forty songs. Even the air in the Circle of the Lustful (where she is condemned for eternity) is filled with quotations:

And as cranes will move, chanting lays in the air,
ordering themselves into one long file,
so I saw coming with a woeful clamor
shades that were borne by the stress of the squall.
(Inf. v, 46-49)

The word lai means any complaint, and also a particular form of Provençal poetry about lost love. The “lays” that are endlessly chanted in Hell must be repetitive to the point of meaninglessness, which makes them perfect symbols of cliché.

One topic that Francesca does not talk about is Paolo. She says nothing specific about him, not even his name. She only says that he has a gentle heart (a commonplace from the poetry of the dolce stil nuovo) and that he is attracted to her “bella figura.” When Francesca notices that Paolo is attracted to her, she immediately recalls scenes from old Romances. In her mind, Paolo becomes Sir Lancelot in the arbor with Guinevere—or Tristan at his chessboard with Iseult, or Floire looking at a book with Blancheflor, or Floris reading romances with Lyriopé. She thinks she’s in love with a real human being, but she really loves the idea of a courtly suitor, which has been put into her head by books.

Francesca speaks in clichés; she overlooks the specific details of stories in order to turn them into stereotypes; and she repeatedly uses euphemisms (“Amor,” instead of sex) and circumlocutions (“That day, we read no further …”). As a result, she never has to say that she cheated on her husband or that he killed her.

In one of the Old French texts that Francesca has read, Iseult says of Tristan:

He loves me not, nor I him,
except because of a potion I drank,
and he too; that was our sin.

In his classic book Love in the Western World, Denis de Rougemont comments: “Tristan and Iseult do not love one another. They say they don’t, and everything goes to prove it. What they love is love and being in love.”

Madame Bovary

The first clichés that Emma Bovary learns as a child are religious: “The similes of fiancé, spouse, heavenly lover and eternal marriage that recur in sermons aroused unforeseen sweetness in the depths of her soul.” But Emma loses interest in religion once an old maid smuggles novels into the convent where she lives. “They were about love, lovers, the beloved, persecuted ladies swooning away in solitary pavilions, postilions killed at every inn, horses ridden to death on every page, somber forests, troubles of the heart, oaths, sobs, tears and kisses, little boats by moonlight, nightingales in the copse, gentlemen brave as lions, sweet like lambs, as virtuous as no one is, always well appointed, and weeping like urns.” She has been reading the nineteenth-century equivalents of the Roman de Lancelot.

The narrator tells us that before Emma was married, “she thought that she had love; but since the happiness that should have resulted from this love didn’t come, she must have been deceived, she reflected. And Emma sought to know exactly what was meant in life by the words felicity, passion, and ecstasy, which had seemed so beautiful to her in books.”

Once she marries, she learns little about her husband’s interior life, doesn’t appreciate his tenderness, but realizes that he has nothing in common with the romantic heroes of fiction.

What is striking about Madame Bovary is Flaubert’s fresh, perceptive, sometimes sympathetic, and always precise way of depicting his characters’ hackneyed, vague, and self-serving thoughts (many of which he italicizes, to show that they are idées reçues). Likewise, Dante depicts Francesca as a person who thinks in clichés, but she is hardly a conventional character herself. On the contrary, she is a highly original creation.

Adolf Eichmann

Clichés are a mark of poor writing—an aesthetic failing—but Flaubert indicates that they are also morally dangerous. Emma Bovary is cruel to Charles because she sees the world in cliché terms. Pushing the argument much further, Hannah Arendt has described the power of clichés to excuse (or even to generate) true evil.

On trial in Jerusalem, Adolf Eichmann remarked that the Holocaust was “one of the greatest crimes in the history of humanity.” He also said that he wanted “to make peace with his former enemies,” and that he “would gladly hang [himself] in public as a warning example for all anti-Semites on this earth.”

Arendt writes that these remarks were “self-fabricated stock phrases” popular among Germans after 1945. They were as “devoid of reality as those [official Nazi] clichés by which the people had lived for twelve years; and you could almost see what an ‘extraordinary sense of elation’ it gave to the speaker the moment [each one] popped out of his mouth. His mind was filled to the brim with such sentences.” In fact, she writes, “he was genuinely incapable of uttering a single sentence that was not a cliché.”

Arendt stresses Eichmann’s “inability to think.” Although he wasn’t a very good student, he was an excellent organizer and negotiator, who had set up efficient, factory-like operations for processing Jews. So presumably he was capable of thinking as well or better than most people. Nevertheless, when he told a “hard luck story” of slow advancement within the SS, he apparently expected his Israeli police interrogator to show “normal, human” sympathy for him. Similarly, when he visited a Jewish acquaintance named Storfer in Auschwitz, he recalled: “We had a normal, human encounter. He told me of his grief and sorrow: I said: ‘Well, my dear old friend, we [!] certainly got it! What rotten luck!’” He arranged relatively easy work for Storfer—sweeping gravel paths—and then asked: “‘Will that be all right, Mr. Storfer? Will that suit you?’ Whereupon he was very pleased, and we shook hands, and then he was given the broom and sat down on his bench. It was a great inner joy to me that I could at least see the man with whom I had worked for so many long years, and that we could speak with one another.” Six weeks after this normal, human encounter, Storfer was dead—not gassed, apparently, but shot. If Arendt is to be believed, Eichmann’s total reliance on clichés permitted him to ignore the smoke from the Auschwitz ovens and to believe that Storfer was “very pleased.” Eichmann’s inability to think, she writes, was an “inability to look at anything from the other fellow’s point of view.”

Eichmann couldn’t see things much more clearly from his own perspective. Facing the gallows, he rejected the hood and spoke with complete self-possession: “He began by stating emphatically that he was a Gottgläubiger, to express in common Nazi fashion that he was no Christian and did not believe in life after death. He then proceeded: ‘After a short while, gentlemen, we shall all meet again. Such is the fate of all men. Long live Germany, long live Argentina, long live Austria. I shall not forget them.’ In the face of death, he had found the cliché used in funeral oratory. Under the gallows, … he was ‘elated’ and he forgot that this was his own funeral.”

In addition to relying heavily on clichés, Eichmann and his Nazi colleagues used euphemisms to describe crimes from which they might have recoiled if they had called them by other names. So “killing” was known as “evacuation,” “special treatment,” or the “final solution.” Deportation to Theresienstadt was called “change of residence,” whereas Jews were “resettled” to the other, more brutal, concentration camps. These phrases were not called “euphemisms,” of course, but rather “language-rules”—and even that term was (as Arendt notes) “a code name; it meant what in ordinary language would be called a lie.”

It is standard for a single act to have several potential names, each with a different moral implication. The dictionary will not tell us which name to use. For instance, it is not an incorrect use of language or logic to call mass murder “special treatment.” Nevertheless, some words are much more morally appropriate than others under particular circumstances. The Nazis’ euphemisms were extreme and telling examples of immoral language, for the crimes of the Holocaust had obvious names that the perpetrators studiously avoided using. By using euphemisms and circumlocutions, they avoided having to admit what they were doing—even privately.

Among Eichmann’s favorite clichés were lines from moral philosophy. In Jerusalem, he “suddenly declared with great emphasis that he had lived his whole life according to Kant’s moral precepts, and especially according to a Kantian definition of duty,” which he could paraphrase accurately. Clearly, Kant’s demanding principle had become an empty formula in Eichmann’s mind.

Arendt argues that Eichmann was no monster, that his evil was banal. The circumstances, however, were extraordinary, so we shouldn’t immediately conclude from his example that clichés and euphemisms are a widespread danger. It’s one thing to rely on stock phrases when you’re in love, and quite another thing when you’re the logistical mastermind of the Holocaust. Nevertheless, there is always a risk that clichés will prevent us from exercising judgment and seeing the details of the world around us.

W.H. Auden

“September 1, 1939” is a poetic and presumably fictional representation of the narrator’s thoughts on the night that World War II began. (My detailed notes are here.) The poem contains several very famous lines:

Those to whom evil is done / Do evil in return.

[We are] Children afraid of the night/ Who have never been happy or good

There is no such thing as the State

We must love one another or die.

Ironic points of light / Flash out wherever the Just / Exchange their messages

These are not precisely clichés, because Auden invented them for the poem. But he quickly decided that they resembled clichés, presumably because they were sentimental, tempting to memorize and quote, and false to his experience. For instance, it simply is not true that we must love one another or die–plenty of people live without loving, and those who love nevertheless die.

It might not have surprised Auden that Lyndon Johnson’s campaign borrowed “we must love one another or die” for his “Daisy” TV commercial in 1964, that George H.W. Bush quoted “points of light” in his 1988 Republican Convention speech, or that at least six newspapers printed the whole poem right after Sept. 11, 2001.

In any case, Auden repudiated “September 1, 1939” along with four other political poems, requiring that a note be added whenever they were anthologized: “Mr. W. H. Auden considers these five poems to be trash which he is ashamed to have written.”

I suppose my own opinion is that the quotable remarks from this poem are excellent within the overall network that the poem creates (diagrammed here). They are problematic when extracted from the work. Whether Auden should have blamed himself for writing epigrams that could be misused in that way is a tough question.

Don Gately

I must admit that I have not finished Infinite Jest–I am still reading it. (My excuse for writing about it anyway is that this is just a blog.) But it’s my understanding that Gately is the hero and moral center of the book. He uses the jargon of Alcoholics Anonymous, which a sophisticated, postmodern author like Wallace cannot believe literally. To say, for example, that we have “made a decision to turn our will and our lives over to the care of God as we understood Him” (step 3 of AA) is surely to repeat a cliché. And yet it takes courage and character in a postmodern world to insist on repeating just such phrases:

Gately’s found it’s got to be the truth, is the thing. … The thing is it has to be the truth to really go over, here. It can’t be a calculated crowd-pleaser, and it has to be the truth unslanted, unfortified. And maximally unironic. An ironist in a Boston AA meeting is a witch in church. Irony-free zone. Same with sly disingenuous manipulative pseudo-sincerity. Sincerity with an ulterior motive is something these tough ravaged people know and fear, all of them trained to remember the coyly sincere, ironic self-presenting fortifications they’d had to construct in order to carry on Out There, under the ceaseless neon bottle.

This doesn’t mean you can’t pay empty or hypocritical lip-service, however. Paradoxically enough. The desperate, newly sober White Flaggers are always encouraged to invoke and pay lip-service to slogans they don’t yet understand or believe–e.g., “Easy Does It!” and “Turn It Over!” and “One Day at a Time!” It’s called “Fake It Until You Make It,” itself an often-invoked slogan. Everybody on a Commitment who gets up publicly to speak starts out saying he’s an alcoholic, says it whether he believes it yet or not; then everybody up there says how Grateful he is to be sober today and how great it is to be Active and out on a Commitment with his Group, even if he’s not grateful or pleased about it at all. You’re encouraged to keep saying stuff like this until you start to believe it …

Note some echoes here: Flaubert italicizes received ideas; Wallace capitalizes them. Arendt writes that “language-rules” was “a code name; it meant what in ordinary language would be called a lie.” Gately says that “Fake It Until You Make It” is “itself an often-invoked slogan.” But Gately is the hero of the book just because he has the courage and compassion to resort to cliché.

These examples in historical context

In a pre-modern culture like Dante’s, the main role of the artist is to present known truths, thereby serving a patron, buttressing the true religion, and decorating and entertaining. No points are awarded for originality or sincerity: truths come ultimately from God, and the only question is whether a fictional work captures those truths in its allegory. Cliché is not problematic, because there is nothing intrinsically wrong with repeating a well-known truth.

However, authors of Dante’s own time were discovering that using a rote phrase or image could interfere with an audience’s emotional engagement. A striking image of the Crucifixion would be more emotionally compelling than a highly conventional one, as Dante’s contemporary Giotto showed. Dante was also part of a literary milieu in which clichés about romantic, secular love were beginning to spread. He was alert to the moral pitfalls of that love culture (in general) and to the specific perils of its clichés. Meanwhile, he was such an astoundingly forceful and original author that, despite his commitment to the traditional truths of his faith, he created indelible characters like Francesca–sinners who have been admired most of all by atheists and freethinkers. The tension between Dante’s poetic originality and his theological doctrines accounts for some of the power of his work.

By Flaubert’s time, authors were much less confident that there were truths to be conveyed or that repeating them would have value. Flaubert, for example, decided after his sojourn in Egypt that all the conventional mores of Catholic and bourgeois France were arbitrary conventions. But he couldn’t simply tell people to become Egyptians, because that was also a conventional culture and not objectively better than the French one. To copy it would have been false. He sought authenticity and autonomy from all norms. Originality became a mark of excellence and freedom; and cliché, a fundamental fault. In Madame Bovary, the narrator does not express his own values, because those would have to be conventional, but he achieves autonomy by ridiculing his bourgeois characters for their clichés. The author vanishes, leaving a work that is meant to be perfectly original and free.

Auden and Arendt (who were friends in New York) were modernists and post-Romantics. They no longer believed that a work of genius could break free of conventions. Any description of reality–such as a 19th century novel–would have to be a product of some kind of conventional culture. Moreover, they no longer sought autonomy and authenticity alone. They were both serious moralists, looking for answers to the evils of totalitarianism and capitalist imperialism. Yet, like Flaubert, they still sought critical distance from mass culture, wanting to break “the strength of Collective Man.” Auden’s “points of light” are exchanged by “the Just”–individuals who say and do the right things. These people “show an affirming flame,” quite unlike Flaubert’s caustic fire that merely burns the society he describes. Yet the points of light are “ironic,” because the wise cannot just state moral truths. Those would be, or quickly become, clichés.

Postmodernists then arrive to say that cliché is unavoidable. No one can invent language from scratch; it is intrinsically conventional. Postmodernists no longer pretend to avoid cliché, but they try to battle it indirectly by means of irony and parody. David Foster Wallace came from that background but spoke powerfully to his generation (which is also mine) because he recognized that the escape from cliché is pretentious and arrogant. In a culture saturated with advertising slogans (Wallace’s “ceaseless neon bottle”), we need the courage to say–and mean–things that are good but not original and not wholly true.

(This post draws from my book Reforming the Humanities: Literature and Ethics from Dante through Modern Times.)

The post on the moral dangers of cliché appeared first on Peter Levine.

the place of argument in moral reasoning

Here’s an everyday moral issue. As she heads to your cousin’s wedding, your great aunt Sallie asks whether you like her hat. You find it strikingly ugly, yet you choose to say that you love it.

Here is an argument that you made the wrong decision: “All lies are morally wrong. Your statement was a lie. Therefore, your statement was morally wrong.” This line of thinking should be taken seriously. It is tight logically. Morally, it is weighty as well. Your statement was a lie by standard definitions, and lying is problematic.

If you need a reason that lying is wrong, several are available: We don’t want to be lied to, so we shouldn’t lie to others. Lying makes a convenient exception to the rule that we would want to apply in general: Tell the truth. Lying manipulates. If we view Aunt Sallie as a fully rational person, then we should assume that she can handle a truthful reply to her question. Lying is a vice, likely to form a habit and corrode virtues.

But I picked this example because the statement “I love your hat” can fit within other persuasive arguments as well. For instance, “Act so as to maximize the happiness of other people. Your statement made Aunt Sallie happy. Therefore, your statement was morally right.” Or “Express authentic and benign emotions. Your statement expressed your love for Aunt Sallie. Therefore, your statement was morally right.” Or “Be a good nephew and be nice to your aunt. Your statement was nice, coming from a nephew. Therefore, you did the right thing.”

To make matters even more complicated, your words were not just a statement about her hat (although they were that). They were also a step in a conventional conversation, a contribution to an ongoing relationship with your great aunt, and part of the flow of a day that was important to your cousin. In turn, your family relationships and events such as weddings are components of communities that give meaning to life.

I propose that we think about a case like this as a node in a network. It has many links: to abstractions such as lying and love, and to concrete realities such as your aunt’s feelings and your cousin’s wedding. In turn, each of those nodes is linked to many others, producing a complex network.

In reviewing the moral network that we perceive around us, I think we should avoid two common errors.

One mistake is to conclude that there is no right answer to the question, “What should I do?” Since most nodes have many links, we have the ability to choose which paths to follow. That does not mean that any choice is as good as any other. We should take the moral arguments seriously, even if they happen to conflict. They are not mere matters of opinion or preference. They have moral weight regardless of our preferences. For instance, I would much rather not tell an aged relative that I dislike her hat, but maybe I ought to. Our job is to decide what we should do.

The other mistake is to think that we can find determinative arguments that simply settle cases. In network terms, we would draw a line between each concrete judgment or choice and one governing principle. If a concrete situation must be linked to more than one principle (such as both lying and kindness, in this case), we would develop an algorithm for deciding which path to follow to reach a moral decision. The whole network would then become a decision tree or flowchart, with one outcome for each situation. For example, any statement that was a lie might simply be wrong; that would be the only link that counted.

I am highly skeptical of the instincts to delete moral links or to seek rules for steering our way through a moral network. Often, conflicting moral ideas represent genuine insights. Deleting or ranking them makes the world seem more comfortable and neater than it is.

Several methods are commonly proposed for simplifying a moral network to yield decisions:

  1. What Amartya Sen calls “informational constraints”: blocking information that ought to be irrelevant to a moral decision. For instance, according to Kantian moral theory, you should ignore any information about your aunt’s likely emotional reaction to your remark about her hat, because it is irrelevant. All that matters is whether your statement conforms to a valid general moral rule. But I agree with Sen that the last thing we should do is ignore information that can be seen, from any reasonable perspective, as relevant to the decision.*
  2. Thought experiments. Is it always wrong to lie? Well, what if the Gestapo is at your door asking about the little Jewish girl hidden in your attic? That example triggers a strong and valid negative reaction. If “Never lie” were a hypothesis, the Gestapo thought experiment would disprove it. But that example is very remote from the case of your great aunt’s hat. If there is something problematic about misleading her, it doesn’t have much to do with lying to the Nazis. You may worry about insincerely praising her hat without assuming that lying is always wrong. You are concerned about certain aspects of lying whose relevance varies greatly depending on the situation. The thought experiment mainly confuses the issue.
  3. Reflective equilibrium. This is the method of forming intuitive judgments about concrete cases (such as your aunt’s hat) and about general concepts or issues (such as lying) and adjusting each until they are mutually consistent. I see merit to the method, but to rely on it presumes that our most serious problem is inconsistency, as if a more consistent network were always a better one. That cannot be right: a moral monster can be perfectly consistent. Better to retain a tension or contradiction than jettison it for convenience and neatness.
  4. Particularism: This is the idea that we can reason about concrete cases (like your aunt’s hat) without worrying at all about the general principles. To the extent that we link nodes together, the links should connect concrete cases, and they should be analogies or rough similarities rather than inferences. Moral reasoning is like a pure form of common law in which the court decides each case without reference to law but with respect for precedent. I used to call myself a particularist, but it cannot be wise to ignore all general concepts, like lying and kindness. For one thing, that would make it impossible to distinguish between very serious matters and trivialities. Applying a weighty word like “lying” to a case is a valid move, even if some lies happen to be good.

Here is an alternative view:

Each situation belongs within a dense network of moral connections. An argument is a particular kind of structure within a network: a string of nodes connected by strong logical links. Much as a protein is a chain of molecules that contributes to the functioning of a cell, a genuine moral argument is a valuable contributor to the moral worldview to which it belongs. We should prize moral arguments and take them seriously. However, adamantine chains of reasoning are too rare in the moral realm for anyone to rely on them alone; particular cases are often embedded in contradictory arguments; and each argument is only persuasive if one grants its premise, which tends to be controversial.

Fortunately, moral worldviews are composed of more than “if … then” arguments. A reasonable person also links individual moral beliefs and commitments into networks by means of rules-of-thumb, causal and other empirical generalizations, and analogies. Marriage is a contract, but it is also a manifestation of love. Gay marriage is like heterosexual marriage. People want to love and be loved exclusively and durably. Marriage tends to benefit the children. These statements vary in terms of their grammar, their certainty, and the generality of their application, but all could be endorsed by a reasonable person and could form part of her overall moral network.

Thus, in addition to finding and testing arguments, we must also assess the overall structure of a moral worldview—our own or someone else’s. We start with judgments or principles that we find intuitively attractive and then try to build a system that displays appropriate formal properties.

The two most commonly cited criteria are consistency and coherence. I have been arguing that we need better ways to assess moral networks than these. Consistent networks are not, in general, better than richer but less consistent ones. I argue that a better network is one that enables moral deliberation with other people, and that will tend to be a network that is complex, dense, flat, but somewhat clustered.

More serious cases

The example with which I began may seem like a “First World problem.” Who really cares what you say about your great aunt’s hat? But the logic of the situation is similar when we address a much grander and more dire case.

For instance, in 2012, I visited the wall that Israel has unilaterally built between Jewish and Palestinian populations. The physical object was presented to me and my colleagues by Israeli Colonel Danny Tirzah, who had helped to plan and design it. Later that day, we crossed the wall and visited the Palestinian administrative center in Ramallah to meet with leaders of the Palestinian Authority, who denounced it.

As I noted at the time, everything about the case was controversial, starting with the basic vocabulary. I asked myself, “Is that thing that Israel built a wall, a fence, or a security barrier? … Is the region to my east right now Erez Israel, Judea and Samaria, Zone C of the Palestinian National Authority, a part of Palestine, the Holy Land, the West Bank, or the Occupied Territories?” If the wall is a node, we could link it to anti-Israeli terrorism or to Western imperialism. We could connect it indirectly to the Shoah or to Al-Nakba, the Palestinian catastrophe of 1948.

I happen to have a fairly firm view about what should be done, and I am not overly skeptical of my position. The point of this post is not to defend moral skepticism, relativism, or indeterminacy. My point is methodological. In order to reach the right decision about that wall (and about the broader questions of Mideast peace), we must navigate through a dense moral network. There are many valid connections between the wall and other issues, and they conflict. For example, there really is a connection between the wall and the Holocaust; deleting it would deny a truth. And yet the more important connection is between the wall and the welfare of the Arabs under occupation.

Moral judgment is all-things-considered evaluation leading to decisions under conditions of complexity and uncertainty. As one forms a conclusion, the most important question to ask is: What would other people think of it? For instance, what arguments would a Fatah leader or an Israeli settler make in response to my efforts to navigate the moral network map around that wall? I should not defer to either, but my job is to map out as many issues as I can, choose a path, and check my inevitably partial understanding with my fellow human beings.


*Sen introduced the idea of informational constraints in the context of social welfare theory, as a response to Arrow’s Impossibility Theorem. See Amartya Sen, “On Weights and Measures: Informational Constraints in Social Welfare Analysis,” Econometrica, Vol. 45, No. 7 (Oct., 1977), pp. 1539-1572. In moral theory, he has argued for relaxing or altering informational constraints so that deliberators can consider a wide range of available information. E.g., Sen, “Capability and Well-Being,” in Martha Nussbaum and Amartya Sen, eds., The Quality of Life (Oxford, 1993), p. 32.


epistemic network analysis and morality: applying David Williamson Shaffer’s methods to ethics

David Williamson Shaffer and his colleagues are developing an influential approach to education and assessment that relies on the notion of “Epistemic Network Analysis.” They posit that a “profession or other socially valued practice” (e.g., engineering) has an “epistemic frame” that is composed of many facts, skills, values, identities, and other concepts that advanced practitioners link together in various ways. Thus you can diagram a professional’s epistemic frame as a network and measure it using tools that have been developed for measuring social networks. What nodes are most central? How dense is the whole network? How many clusters does it have?

One way to collect the data necessary for this kind of analysis is to ask a practitioner to write or talk about her work. Many of her sentences will invoke concepts and link them together. (“I did A because I knew that B.” “I recommend C because I believe in ethical norm D.”) By coding the text, one can produce a dataset that can be displayed and analyzed in network terms. As Shaffer and colleagues note, the graph is not the actual epistemic network; it is a representation of how the engineer’s mentality works under specific practical circumstances (Shaffer et al., 2009, p. 14).
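As a concrete illustration, suppose the coded transcript yields, for each utterance, the set of concepts it invokes. The sketch below turns such codes into a network and computes two of the measures mentioned above, density and degree centrality. The utterance data and concept labels are invented for illustration, and this is a simplification of Shaffer’s actual method, which works with weighted networks over sliding windows of talk:

```python
from itertools import combinations
from collections import Counter

# Hypothetical coded utterances from an engineering student's interview:
# each set holds the concepts invoked together in one stretch of talk.
utterances = [
    {"safety", "cost", "client"},
    {"safety", "materials"},
    {"cost", "client", "deadline"},
    {"safety", "client"},
]

# Each pair of concepts that co-occurs in an utterance becomes an edge;
# the number of co-occurrences is the edge weight.
edge_weight = Counter()
for concepts in utterances:
    for a, b in combinations(sorted(concepts), 2):
        edge_weight[(a, b)] += 1

nodes = set().union(*utterances)
n = len(nodes)

# Density: distinct edges observed, divided by edges possible.
density = len(edge_weight) / (n * (n - 1) / 2)

# Degree: how many distinct concepts each concept is linked to.
degree = Counter()
for a, b in edge_weight:
    degree[a] += 1
    degree[b] += 1

print(f"density = {density:.2f}")  # 6 of 10 possible edges are present
```

A learner’s network, recomputed over successive interviews, could then be compared with an expert’s on just these summary statistics.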

If a profession is worthy, then learning its epistemic frame is desirable. As students experience a course, a project, or an internship, their epistemic frames can be diagrammed and quantified at regular intervals. The learners’ networks should grow more similar to those of advanced professionals. Measures of network structure can be used for “formative assessment” (giving feedback on what the student should study) and “summative assessment” (awarding a grade or credential).

I have posited that moral thinking is also an epistemic frame (to use Shaffer’s terminology). We hold many morally relevant ideas that we connect by various kinds of links, not just logical inferences but also causal theories, generalizations, analogies, etc. We can graph our own moral mentality as a network of ideas and connections. Moral learning means building a moral network map that resembles that of a good moral thinker. (I leave aside for now the question of whether good moral reasoning is related to good moral behavior.)

One can easily see that the moral network map of an average adult is more complex than that of a 2-year-old. It is uncontroversial that a toddler needs to learn to reason more maturely, in which case his network map will look more like yours and mine. But that leaves a lot of room for debate about what an ideal map looks like. Defining good moral thought is a normative, not an empirical, question.

To some extent, that is also true of engineering. It is not self-evident what makes a “good” engineer. However, as long as we assume that the profession is working reasonably well and fulfilling its social purposes adequately, then a “good” engineer is presumably a respected and successful one. We can identify such people empirically: they have high grades, awards, and responsible positions. Then we can diagram their epistemic frames and compare novices to exemplary professionals to assess their learning.

The situation is much harder with morality. We debate what specific moral concepts and relationships should be found on a person’s epistemic frame. For instance, should everyone’s graph show the existence of God, linked to a set of commandments? We also debate what formal properties any moral network should display. Should it be highly centralized around one fundamental truth? Classical utilitarians and some religious fundamentalists would say so. Or should it be very flat and complex, as certain liberals have held?

Here I would introduce a controversial–but not original–premise that makes the identification of good moral networks somewhat more empirical. No human being can have a fully adequate moral theory in place before she faces the various situations of life. The moral world is far too complex for that. It involves countless differently situated people interacting in countless situations in relation to institutions (like education, romance, politics, and punishment, to name a few) that have evolved to have manifold purposes and meanings. So to think well morally is not to apply a theory to each new case, but rather to learn constantly. Learning results from interactions with other people (whether face-to-face or vicariously). By “interaction,” I do not mean only communication, or the exchange of ideas. Groups of people can agree on thoroughly foolish ideas unless they try to put them into practice. So “interaction” means a combination of exchanging ideas, trying to work together, and reflecting on the results–what Dewey often called “conjoint activity.”

Who is good at that? This is not strictly an empirical question, because we might disagree about how to assess various styles of interaction. Should we admire the persuasive ideologue? The follower of fads? But although value-judgments are inescapable, I think it is partly an empirical question who participates constructively in conjoint activity. Good participants do not impose preexisting ideas and do not merely adopt the majority’s view, but shape the group’s beliefs while adjusting their own.

As I have written before, my own unsystematic observation suggests that people who are better at moral interaction have epistemic networks with these features:

  1. Lots of nodes and links, because each idea is an entry point for dialogue, and each reflects some prior learning.
  2. A degree of centrality, because some moral ideas are genuinely more important than others; and also because one should develop a set of prized values that constitute one’s character. Yet:
  3. No outright dependence on a small set of nodes to hold the whole network together, because then disagreement about those nodes must end a conversation, and doubt about them will plunge you into nihilism. You may believe in fundamental principles, but you should be able to reason around them. The network should be robust in that sense.

We might try to identify the actual epistemic frames of people who are good at collaboration and deliberation and see if they manifest the three features I listed above. We could then map the networks of children and other moral learners to see if they are developing to resemble the exemplary cases. Again, this would not be a value-neutral research program, but it would have a strong empirical component.
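The third feature, robustness, is the most readily checkable of the three: delete the most central node and see whether the rest of the network still hangs together. A minimal sketch in plain Python, with two invented example networks (the node labels are hypothetical, not drawn from any actual study):

```python
from collections import deque

# Two hypothetical moral networks as adjacency dicts.  The first hangs
# entirely on one central node; the second has no such dependence.
hub_network = {
    "first principle": {"honesty", "charity", "mercy"},
    "honesty": {"first principle"},
    "charity": {"first principle"},
    "mercy": {"first principle"},
}
robust_network = {
    "honesty": {"charity", "fairness"},
    "charity": {"honesty", "mercy"},
    "mercy": {"charity", "fairness"},
    "fairness": {"honesty", "mercy"},
}

def most_central(graph):
    """The node with the most links (degree centrality)."""
    return max(graph, key=lambda v: len(graph[v]))

def connected_without(graph, removed):
    """Breadth-first search: is the graph still a single connected
    component after deleting the `removed` node?"""
    rest = [v for v in graph if v != removed]
    if not rest:
        return True
    seen, queue = {rest[0]}, deque([rest[0]])
    while queue:
        for w in graph[queue.popleft()]:
            if w != removed and w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == len(rest)

# Doubt about the hub's one central node shatters that network,
# while the robust network survives losing its most central node.
print(connected_without(hub_network, most_central(hub_network)))        # False
print(connected_without(robust_network, most_central(robust_network)))  # True
```

On this measure, a network organized entirely around one fundamental truth fails the robustness test even though it scores well on centrality, which is why the second and third features pull in opposite directions.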

We can, in fact, pursue three levels of analysis.

  1. Each individual has an evolving and not-fully-conscious epistemic frame composed of many ideas and connections.
  2. The individual belongs to a community of other people who all have networks of their own. Their networks overlap and influence each other because moral learning is social. (Even a recluse got his ideas from someone else.) Within a community, individuals’ maps intersect in a second way as well. If one person has a moral commitment to a specific other person, that other will appear on her map.
  3. Finally, the world is composed of many moral communities. But these are never fully separate and distinct. They are always complex, overlapping, and vaguely-bordered networks. Given two entities that we call “cultures,” no matter how remote, we will likely find common nodes and connections in their respective moral networks. I leave aside the possibility that all human beings share a set of ideas as our biological inheritance. That may be the case, but I do not rely on it. Rather, all communities interact (even the so-called “uncontacted peoples” who live deep in rain forests), and so the members of community A always share some nodes with members of community B nearby as a result of their “conjoint activity.”

At the individual, community, and global level, the process of moral reasoning is fundamentally the same. It is always a matter of developing a more satisfactory network of ideas and connections. This is not easy, conflict-free, or pretty. Individuals face deep internal conflicts among incompatible ideas, and people and communities often actually kill each other on account of such disagreements. Nevertheless, we can point to individuals and groups that are better at constructive engagement, and moral learning means becoming more like them.

Reference: David Williamson Shaffer, David Hatfield, Gina Navoa Svarovsky, Padraig Nash, Aran Nulty, Elizabeth Bagley, Ken Frank, Andre A. Rupp, and Robert Mislevy, “Epistemic Network Analysis: A Prototype for 21st Century Assessment of Learning,” International Journal of Learning and Media, vol. 1, no. 2 (2009), pp. 1-22.


six types of freedom

(Ft. Lauderdale, FL, en route to Austin, TX) Here are six types of freedom. Isaiah Berlin cited the first two as part of his argument for pluralism. He believed that genuine goods were distinct and incommensurable. For instance, a reasonable person could value two types of freedom, but no social order could maximize both simultaneously. We would have to choose, not only between the two types of freedom that he described, but also among freedom, happiness, equality, and other worthy goods.

Much in the spirit of his work, I extend the list of freedoms to six:

1. Negative liberty: freedom from constraint in the form of tangible action against the person or her property or (much more commonly) the threat or fear of such. Because fellow human beings can threaten violence, anarchy poses dangers to negative liberty. (Think of failed states flooded with AK-47s.) Although parents must constrain the negative liberty of their children, they can abuse that power. To combat anarchy, intra-family abuse, and other forms of violence among citizens, states are probably necessary. Yet in most of the world, it is the state that can threaten violence most effectively and pervasively. It must be curtailed in the interest of negative liberty.

2. Positive liberty: the freedom to do something. You are not free to travel, for example, unless you can afford a fare. Positive liberty is a matter of degree, since human beings are simply not able to do everything we want. But there may be a list of fundamental capabilities that everyone should be able to exhibit, and they require external support. You can’t learn to read unless someone teaches you. If one has a meaningful right to a positive liberty (e.g., the right to read), then some other person or community has a duty to provide it; and the state may be the best means to enforce that duty. But if I must pay taxes for your kid’s education or face imprisonment, then my negative liberty has been curbed in the interest of her positive liberty.

3. Individuality: the freedom to develop and express a unique personality and life-story in both the public and private spheres. Individuality may require a degree of negative and positive liberty, but it also faces threats not yet mentioned. The social norms that are strongest in tight, traditional communities and the mass culture that dominates today’s global society both inhibit individuality. Mass culture already worried de Tocqueville, but it has been hypercharged by advertising and technology. The global mass exercises its power less through majority rule at the ballot box than through search algorithms, trendy catchphrases, and addictive tunes.

4. Freedom from manipulation: I am treated as a means to someone else’s ends when the other person sways, threatens, or pays me to do what he wants. I am treated as an end when the other person tries to decide with me what we should do. States and markets arrange people as means to each other’s ends, perhaps unavoidably. Freedom (in this fourth sense) exists in ethical communities whose members treat each other as ends in themselves. Neither positive nor negative liberty guarantees such communities.

5. Freedom to make the world (or to live in a world that we make). Society is an artifact. We are born into the society of our ancestors, with all its flaws. But we are not compelled to replicate it. We become freer in this fifth sense the more that we design and fashion the world that we inhabit. That is a collaborative task, so it requires some limitations on negative liberty. But it is also not the positive liberty of being given an education or an airplane ticket. It is a matter of active co-creation.

6. Equanimity: freedom from the dread, doubt, disquiet, and sorrow that are consequences of being vulnerable and mortal creatures who care about other fragile living things. Although it is harder to achieve equanimity under conditions of extreme duress (e.g., given a complete lack of negative or positive liberty), and although mass culture threatens equanimity, inner peace seems to have different conditions. Indeed, when positive liberty means incessantly choosing consumer goods, it is incompatible with equanimity, as is individuality when it turns into narcissism, or co-creation when it becomes a vain yearning to build wholly new and permanent things.


Emerson’s mistake

Emerson’s Self-Reliance makes a provocative case for cultivating the self and shunning morality in the form of obligations to others. One famous paragraph begins, “Whoso would be a man must be a nonconformist. … Nothing is at last sacred but the integrity of your own mind.” The same paragraph ends with an argument against charity as an entanglement that damages integrity: “do not tell me, as a good man did to-day, of my obligation to put all poor men in good situations. Are they my poor? I tell thee, thou foolish philanthropist, that I grudge the dollar, the dime, the cent, I give to such men as do not belong to me and to whom I do not belong.”

Emerson strongly favors interacting with other minds, especially the geniuses who figure in the books that he devours in his private hours. Moses, Pythagoras, Plato, Socrates, Jesus, Luther, Milton, Copernicus, and Newton are just some of the names he invokes in Self-Reliance. He thinks these people (all men) had distinct and invariant characters. “For I suppose no man can violate his nature. All the sallies of his will are rounded in by the law of his being.” Thus, to understand an author is to grasp something unitary and unique about him that inspires you to enrich your own equally coherent character, not by sharing his truth but by creating your own. In Experience, Emerson writes:

Two human beings are like globes, which can touch only in a point, and, whilst they remain in contact, all other points of each of the spheres are inert; their turn must also come, and the longer a particular union lasts, the more energy of appetency the parts not in union acquire. Life will be imaged, but cannot be divided nor doubled. Any invasion of its unity would be chaos.

But this is false. To experience another person’s mind (whether through a brilliant book or an everyday interaction) is not just to pick out one idea that you think defines the other. It is to begin exploring his or her web of thinking while sharing your own. You both have unique webs, but each element of your thought is shared with many other people. You gain the most by exploring many of the other person’s moral nodes and their connections. This does not threaten your “unity” or risk chaos, because your own character was already a heterogeneous, evolving, and loosely connected web that you largely adopted from other people. Touching at just one point is a failure of communication and interpretation.

To be sure, you can strive to disentangle from everyday life and politics and prefer books to “dining out occasionally” (which, Thoreau found, interfered with his “domestic arrangements”), but you should not persuade yourself that you have thereby disconnected your network map from everyone else’s. Your self is still a social creation, and you are still mentally involved with others, even if you detach politically and economically.

References: Emerson, “Self-Reliance,” in The Essential Writings of Ralph Waldo Emerson (New York: Random House, 2009), pp. 134–35, 138; Emerson, “Experience,” in ibid., p. 322; Thoreau, Walden: Or, Life in the Woods (New York: T.Y. Crowell & Co., 1899), p. 62.
