on the pattern that progressives tend to be less happy

In the American Spectator, under the headline “Leftists Scramble to Explain Why Conservatives Are Happier Than Liberals,” Anne Hendershott quotes our research, which I’d previously reported on this blog:

In the Tufts study, the researchers conclude: “[I]f a liberal and a conservative have the same income, education, race, gender, age, marital status, and religious attendance, the conservative will feel more fortunate … liberals are people who—regardless of their actual social positions—rate their own circumstances relatively poorly, and that attitude drives their ideology and makes them unhappy or else reflects their unhappiness.”

The ellipsis in her quote replaced these words from our report: “A critic might say that …” We hedged because we can’t tell whether negative assessments of one’s own life affect one’s political views (a causal thesis). But Hendershott is a critic of liberals, and she adds a positive assessment of conservatives: “The reality is that true happiness, and a truly satisfying life, comes from caring for others—being physically present to them. Conservatives know that marriage and family life make one happy. … They know that true happiness comes from selflessness—living for others. And this is why they are happier.”

I am not sure about her causal claim. Progressives are less happy than conservatives regardless of marital status (and religious attendance). Using General Social Survey data, I ran a simple regression to predict divorce based on party identification, income, race, gender, and education. Before 2010, identifying as a Republican was associated with lower divorce when these other factors were considered. Since then, party identification has not been related to divorce. Indeed, Democrats and Republicans have had statistically indistinguishable divorce rates in the past decade. These statistics do not settle the issue, but I’d be surprised if progressives derive less happiness from their family involvement than conservatives do in the same social circumstances.
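
For readers who want to see the shape of that analysis, here is a minimal sketch in Python. The file name and column names (ever_divorced, republican, income, education, and so on) are hypothetical stand-ins for GSS recodes, not the actual variables or codings I used; the point is only the structure of the model: a logistic regression of ever having divorced on party identification plus the other controls, estimated separately before and after 2010.

```python
# A minimal sketch of the kind of regression described above. The column names
# are hypothetical stand-ins for General Social Survey recodes, not the actual
# variables used in my analysis.
import pandas as pd
import statsmodels.formula.api as smf

gss = pd.read_csv("gss_extract.csv")  # hypothetical extract of GSS respondents

spec = "ever_divorced ~ republican + income + C(race) + C(gender) + education"

pre_2010 = smf.logit(spec, data=gss[gss["year"] < 2010]).fit()
post_2010 = smf.logit(spec, data=gss[gss["year"] >= 2010]).fit()

# Before 2010, the coefficient on `republican` was negative (less divorce,
# holding the other factors constant); since then it has been statistically
# indistinguishable from zero.
print(pre_2010.params["republican"], post_2010.params["republican"])
```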

In our model, progressives who are politically active are almost as happy as conservatives who are politically active. Like intensive involvement with one’s family, being an activist is also a way of “living for others.”

Our empirical contribution was the finding that it’s the people with actual depression who make the progressive sample less happy than the conservative sample. Individuals with depression are a minority of those on the left, yet depressed Americans cluster on the progressive side of the aisle. In that case, generalizations about why progressives are less happy than conservatives may be misleading–they don’t apply to people without depression. A better question would be why depression is disproportionately concentrated on the left even once race, gender, and income are controlled for. Musa al-Gharbi has collected previous studies that also find links between ideology and depression. Is this link causal?
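
The logic of that finding can also be shown in a sketch. Again, the file and column names (happiness, progressive, depression, and the controls) are hypothetical, not our actual dataset or specification; the idea is simply that if the ideology coefficient shrinks toward zero once respondents who report depression are set aside, then the overall happiness gap is being carried by the depressed subsample.

```python
# Illustrative only: hypothetical column names, not the study's data or model.
import pandas as pd
import statsmodels.formula.api as smf

survey = pd.read_csv("survey_extract.csv")  # hypothetical file

spec = "happiness ~ progressive + income + C(race) + C(gender)"
full_sample = smf.ols(spec, data=survey).fit()
without_depression = smf.ols(spec, data=survey[survey["depression"] == 0]).fit()

# If `progressive` is clearly negative in the full sample but near zero among
# respondents without depression, the happiness gap is concentrated among
# those with depression.
print(full_sample.params["progressive"], without_depression.params["progressive"])
```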

Leaving the empirical issues aside, I would pose a very general question about the relationship between mood and knowledge of the world. A standard view might be that any mood is a bias, a source of error; objectivity requires countering all moods. The metaphor of “rose-colored glasses” suggests that optimistic people would have more accurate perceptions if they took off their lenses. Indeed, science since the Renaissance has sought to develop methods and protocols that decrease the impact of the subjectivity of the observer. If scientific methods and instruments work, then a scientist’s mood shouldn’t matter.

An alternative view, which Michel Foucault labeled “spirituality,” is the idea that in order to perceive things correctly, we must first bring ourselves into the best possible mental state. A mood is not a bias; it is the basis of perception. The question is whether our mood is ideal for perceiving well. For example, to perceive what another person is saying, we should probably attain a mood of receptiveness and equanimity. Likewise to perceive that nature is beautiful or that God is good requires an appropriate state of mind. These states can be labeled virtues rather than (mere) moods.

Positive states may not be the only ones that help us to perceive well. Heidegger, for example, sees moods such as anxiety and boredom as advantageous for revealing types of truth. As Michael Wheeler explains:

According to Heidegger’s analysis, I am always in some mood or other. Thus say I’m depressed, such that the world opens up (is disclosed) to me as a sombre and gloomy place. I might be able to shift myself out of that mood, but only to enter a different one, say euphoria or lethargy, a mood that will open up the world to me in a different way. As one might expect, Heidegger argues that moods are not inner subjective colourings laid over an objectively given world (which at root is why ‘state-of-mind’ is a potentially misleading translation of Befindlichkeit, given that this term names the underlying a priori condition for moods). For Heidegger, moods (and disposedness) are aspects of what it means to be in a world at all, not subjective additions to that in-ness.

This general view of moods would suggest that being unhappy is not necessarily bad for perceiving the state of our social world. It might reveal insights about current injustices. There is no objective mood, but we might be able to shift negative attitudes into positive ones, or vice-versa. The question is which moods to cultivate in ourselves when we assess society and politics.

One answer might be that we need all kinds of moods and a robust exchange among people who see things differently. Nietzsche is hardly a deliberative democrat, but this remark from his Genealogy of Morals supports the goal of bringing people with diverse moods into conversation: “The more emotions we can put into words about a thing, the more eyes, the more different eyes we can set over the same thing, the more complete is our ‘concept’ of that thing, our ‘objectivity’” (III:12, my trans.). We might then be glad that our society includes both liberals and conservatives who are prone to different states of mind.

I wouldn’t be fully satisfied with that conclusion because it leaves unaddressed the question for individuals. If I perceive the political world sadly, or even in a depressed state, should I strive to change that attitude? Or should I try to convince the positive people to be more concerned?

To answer that question may require a more nuanced sense of an individual’s mood than we can obtain by asking questions about overall happiness (as we did in the reported study). We need a bigger vocabulary, encompassing terms like righteous indignation, empathy for loved ones or for strangers, generalized compassion, sensitivity, caring, agape, technological optimism or pessimism, quietism, acceptance, responsibility, solidarity, nostalgia, utopianism, zeal, and more.

Al-Gharbi reports that the correlation between happiness and conservatism is “ubiquitous, not just in the contemporary United States but also historically (virtually as far back as the record goes) and in most other geographical contexts as well.” He has reviewed the literature, and I am sure he’s right. But I suspect that once we add nuance to the characterization of moods, we will find more change over time and more diversity within the large political camps.

For example, I recognize a current kind of progressive who is deeply pessimistic about economic and technological trends and their impact on the environment. I don’t believe that mood was pervasive on the left immediately after the Second World War, when progressives tended to believe that they could harness technology for beneficial social transformation–not only in Europe and North America but also in the countries that were then overthrowing colonialism. I also recognize a kind of deep cultural pessimism on the right today that seemed much less prevalent in the era of Reagan and Thatcher.

Perhaps measures of happiness have continued to correlate with self-reported ideology in the same way over time and space, but there’s a lot more going on. The content of the ideologies, the demographics of their supporters, the pressing issues of the day, and the most evident social trends have all shifted.

Here’s a possible conclusion: You can’t avoid viewing the world in some kind of mood. Some moods can be more virtuous than others, and it’s up to each of us to try to put ourselves in the best frame of mind.

Liberals and progressives might give some thought to why there is a strong statistical association between their ideologies and unhappiness. Does that mean that we are prone to certain specific vices, such as resignation, bootless anger, or discounting good news? Does it mean that our political messages are less persuasive than they could be? What might we learn from people who seem to be happier while they assess the social world? And what should we do to assist the substantial number of people on our side who are depressed?

Although those questions may be worth asking regularly, they do not imply that we should drop our general stance. People who are happy about the world should also ask themselves tough questions and consider the possibility that those who are unhappy–or even the depressed–might have insights.

See also: perhaps it’s not that conservatives are happier but that people with depression cluster on the left; philosophy of boredom (partly about Heidegger); Cuttings: A book about happiness; spirituality and science; and turning away from disagreement: the dialogue known as Alcibiades I.

turning away from disagreement: the dialogue known as Alcibiades I

The dialogue known as Plato’s Alcibiades I is now widely believed to have been written after Plato’s death, hence by someone else (Smith 2004). Perhaps that is why no one has ever told me to read it. But it is an indisputably ancient text, and it’s a valuable work of philosophy.

In several places, Michel Foucault discusses Alcibiades I as the earliest text that offers an explicit theory of what he calls “spirituality” (Foucault 1988, 23-28; Foucault 2005, 15-16). For Foucault, spirituality is the idea that reforming one’s soul is a necessary precondition for grasping truth. One way to summarize Alcibiades I might be with this thesis: You are not qualified to participate in politics until you have purified your soul enough that you can know what is just. That is an anti-democratic claim, although one that’s worth pondering.

At the beginning of the dialogue, we learn that Alcibiades will soon give a speech in the Athenian assembly about a matter of public policy. He is talented, rich, well-connected, and beautiful, and his fellow citizens are liable to do what he recommends. Athens is a rising power with influence over Greeks and non-Greeks in Europe and Asia; Alcibiades aspires to exercise his personal authority at a scale comparable to the Persian emperors Cyrus and Xerxes (105d). However, Alcibiades’ many lovers have deserted him, perhaps because he has behaved in a rather domineering fashion (104c). Only his first lover, Socrates, still cares for him and has sought him out—even as Alcibiades was looking for Socrates.

Alcibiades admits that you should not expect a person who is handsome and rich to give the best advice about technical matters, such as wrestling or writing; you should seek an expert (107a). Alcibiades plans to give a speech on public affairs because it is “something he knows better than [the other citizens] do” (106d). In other words, he claims to be an expert about politics, not just a celebrity.

Socrates’ main task is to dissuade Alcibiades from giving that speech by demonstrating that he actually lacks knowledge of justice. Alcibiades even fails to know that he doesn’t know what justice is, and that is the most contemptible form of ignorance (118b).

Part of Socrates’ proof consists of questions designed to reveal that Alcibiades lacks clear and consistent definitions of words like “virtue” and “good.” The younger man has no coherent theory of justice. This is typical of Socrates’ method in the early dialogues.

A more interesting passage begins when Socrates asks Alcibiades where he has learned about right and wrong. Alcibiades says he learned it from “the many”–the whole community–much as he learned to speak Greek (110e, 111a). Socrates demonstrates that it is fine to learn a language from the many, because they agree about the correct usages, they retain the same ideas over time, and they agree from city to city (111b). Not so for justice, which is the main topic of controversy among citizens and among cities and which even elicits contradictory responses from the same individuals. The fact that the Assembly is a place of disagreement shows that citizens lack knowledge of justice.

In the last part of the dialogue, Socrates urges Alcibiades to turn away from public affairs and rhetoric and instead make a study of himself. That is because a good city is led by the good, and the good are people who have the skill of knowing themselves, so that they can improve themselves. For Foucault, this is the beginning of the long tradition that holds that in order to have knowledge–in this case, knowledge of justice–one must first improve one’s soul.

Socrates then ventures into metaphysics, offering an argument that the self is not the observable body but rather the soul, which ought to be Alcibiades’ only concern. This is also why Socrates is Alcibiades’ only true lover, for only Socrates has loved Alcibiades’ soul, while others were after a mere form of property, his body.

The dialogue between the two men has been a conversation between two souls (130d), not a sexual encounter or a public speech, which is an effort to bend others’ wills to one’s own ends. Indeed, Socrates maintains from the beginning of the dialogue that he will make no long speeches to Alcibiades (106b), but will rather permit Alcibiades to reveal himself in response to questions. Their dialogue is meant, I think, as a model of a loving relationship.

Just to state a very different view: I think there is rarely one certain answer to a political question, nor is there a decisive form of expertise about justice. However, good judgment (phronesis) is possible and is much better than bad judgment. Having a clear and structured theory of justice might be helpful for making good judgments, but it is often overrated. Fanatics also have clear theories. What you need is a wise assessment of the particular situation. For that purpose, it is often essential to hear several real people’s divergent perspectives on the same circumstances, because each individual is inevitably biased.

Socrates and Alcibiades say that friendship is agreement (126c) and that the Assembly evidently lacks wisdom because it manifests disagreement. I say, in contrast, that disagreement is good because it addresses the inevitable limitations of any individual.

Fellow citizens may display civic friendship by disagreeing with each other in a constructive way. Friendship among fellow citizens is not exclusive or quasi-erotic, like the explicitly non-political relationship between Alcibiades and Socrates, but it is worthy. We need democracy because of the value of disagreement. If everyone agreed, democracy would be unnecessary. (Compare Aristotle’s Nic. Eth. 1155a3, 20.)

Despite my basic orientation against the thesis of Alcibiades I, I think its author makes two points that require attention. One is that citizens are prone to be influenced by celebrities–people, like Alcibiades, who are rich and well-connected and attractive. The other is that individuals need to work on their own characters in order to be the best possible participants in public life. But neither point should lead us to reject the value of discussing public matters with other people.

References: Smith, Nicholas D. “Did Plato Write the Alcibiades I?” Apeiron 37.2 (2004): 93-108; Foucault, “Technologies of the Self: A Seminar with Michel Foucault,” edited by Luther H. Martin, Huck Gutman and Patrick H. Hutton (Tavistock Press, 1988); Foucault, The Hermeneutics of the Subject: Lectures at the Collège de France, 1981-82, translated by Graham Burchell (Palgrave, 2005). I read the dialogue in the translation by David Horan, © 2021, version dated Jan 1 2023, but I translated the quoted phrases from the Greek edition of John Burnet (1903) via the Perseus Project. See also friendship and politics; the recurrent turn inward; Foucault’s spiritual exercises.

philosophy of boredom

This article is in production and should appear soon: Levine P (2023) Boredom at the border of philosophy: conceptual and ethical issues. Frontiers in Sociology 8:1178053. doi: 10.3389/fsoc.2023.1178053.

(And yes, I anticipate and appreciate jokes about writing yet another boring article–this time, about boredom.)

Abstract:

Boredom is a topic in philosophy. Philosophers have offered close descriptions of the experience of boredom that should inform measurement and analysis of empirical results. Notable historical authors include Seneca, Martin Heidegger, and Theodor Adorno; current philosophers have also contributed to the literature. Philosophical accounts differ in significant ways, because each theory of boredom is embedded in a broader understanding of institutions, ethics, and social justice. Empirical research and interventions to combat boredom should be conscious of those frameworks. Philosophy can also inform responses to normative questions, such as whether and when boredom is bad and whether the solution to boredom should involve changing the institutions that are perceived as boring, the ways that these institutions present themselves, or individuals’ attitudes and choices.

An excerpt:

It is worth asking whether boredom is intrinsically undesirable or wrong, not merely linked to bad outcomes (or good ones, such as realizing that one’s current activity is meaningless). One reason to ask this question is existential: we should investigate how to live well as individuals. Are we obliged not to be bored? Another reason is more pragmatic. If being bored is wrong, we might look for effective ways to express that fact, which might influence people’s behaviors. For instance, children are often scolded for being bored. If being bored is not wrong, then we shouldn’t—and probably cannot—change behavior by telling people that it’s wrong to be bored. Relatedly, when is it a valid critique of an organization or institution to claim that it causes boredom or is boring? Might it be necessary and appropriate for some institutions … to be boring?

I have not done my own original work on this topic. I wrote this piece because I was asked to. I tried to review the literature, and a peer reviewer helped me improve that overview substantially.

I especially appreciate extensive and persuasive work by Andreas Elpidorou. He strikes me as an example of a positive trend in recent academic philosophy, also exemplified by Amia Srinivasan and others of their generation. These younger philosophers (whom I do not know personally) address important and thorny questions, such as whether and when it’s OK to be bored and whether one has a right to sex under various circumstances. They are deeply immersed in relevant social science. They also read widely in literature and philosophy and find insights in unexpected places. Srinivasan likes nineteenth-century utopian socialists and feminists; Elpidorou is an analytical philosopher who can also offer insightful close readings of Heidegger.

Maybe it was a bias on my part–or the result of being taught by specific professors–but I didn’t believe that these combinations were possible while I pursued my BA and doctorate in philosophy. In those days, analytical moral and political philosophers paid some attention to macroeconomic theory but otherwise tended not to notice current social science. Certainly, they didn’t address details of measurement and method, as Elpidorou does. Continental moral and political philosophers wrote about the past, but they understood history very abstractly, and their main sources were canonical classics. Most philosophers addressed either the design of overall political and economic systems or else individual dilemmas, such as whether to have an abortion (or which people to kill with an out-of-control trolley).

To me, important issues almost always combine complex and unresolved empirical questions with several–often conflicting–normative principles. Specific problems cannot be abstracted from other issues, both individual and social. Causes and consequences vary, depending on circumstances and chance; they don’t follow universal laws.

My interest in the empirical aspects of certain topics, such as civic education and campaign finance, gradually drew me from philosophy into political science. I am now a full professor of the latter discipline, also regularly involved with the American Political Science Association. However, my original training often reminds me that normative and conceptual issues are relevant and that positivist social science cannot stand alone.

Perhaps the main lesson you learn by studying philosophy is that it’s possible to offer rigorous answers to normative questions (such as whether an individual or an institution should change when the person is bored), and not merely to express opinions about these matters. I don’t have exciting answers of my own to specific questions about boredom, but I have reviewed current philosophers who do.

Learning to be a social scientist means not only gaining proficiency with the kinds of methods and techniques that can be described in textbooks, but also knowing how to pitch a proposed study so that it attracts funding, how to navigate a specific IRB, how to find collaborators and share work and credit with them, and what currently interests relevant specialists. These highly practical skills are essential but hard to learn in a classroom.

If I could convey advice to my 20-year-old self, I might suggest switching to political science in order to gain a more systematic and rigorous training in the empirical methods and practical know-how that I have learned–incompletely and slowly–during decades on the job. But if I were 20 now, I might stick with philosophy, seeing that it is again possible to combine normative analysis, empirical research, and insights from diverse historical sources to address a wide range of vital human problems.

See also: analytical moral philosophy as a way of life; six types of claim: descriptive, causal, conceptual, classificatory, interpretive, and normative; is all truth scientific truth? etc.

when does a narrower range of opinions reflect learning?

John Stuart Mill’s On Liberty is the classic argument that all views should be freely expressed–by people who sincerely hold them–because unfettered debate contributes to public reasoning and learning. For Mill, controversy is good. However, he acknowledges a complication:

The cessation, on one question after another, of serious controversy, is one of the necessary incidents of the consolidation of opinion; a consolidation as salutary in the case of true opinions, as it is dangerous and noxious when the opinions are erroneous (Mill 1859/2011, 81).

In other words, as people reason together, they may discard or marginalize some views, leaving a narrower range to be considered. Whether such narrowing is desirable depends on whether the range of views that remains is (to quote Mill) “true.” His invocation of truth–as opposed to the procedural value of free speech–creates some complications for Mill’s philosophical position. But the challenge he poses is highly relevant to our current debates about speech in academia.

I think one influential view is that discussion is mostly the expression of beliefs or opinions, and more of that is better. When the range of opinions in a particular context becomes narrow, this can indicate a lack of freedom and diversity. For instance, the liberal/progressive tilt in some reaches of academia might represent a lack of viewpoint diversity.

A different prevalent view is that inquiry is meant to resolve issues, and therefore, the existence of multiple opinions about the same topic indicates a deficit. It means that an intellectual problem has not yet been resolved. To be sure, the pursuit of knowledge is permanent–disagreement is always to be expected–but we should generally celebrate when any given thesis achieves consensus.

Relatedly, some people see college as something like a debate club or editorial page, in which the main activity is expressing diverse opinions. Others see it as more like a laboratory, which is mainly a place for applying rigorous methods to get answers. (Of course, it could be a bit of both, or something entirely different.)

In 2015, we organized simultaneous student discussions of the same issue–the causes of health disparities–at Kansas State University and Tufts University. The results are here. At Kansas State, students discussed–and disagreed about–whether structural issues like race and class and/or personal behavioral choices explain health disparities. At Tufts, students quickly rejected the behavioral explanations and spent their time on the structural ones. Our graphic representation of the discussions shows a broader conversation at K-State and what Mill would call a “consolidated” one at Tufts.

A complication is that Tufts students happened to hear a professional lecture about the structural causes of health disparities before they discussed the issue, and we didn’t mirror that experience at K-State. Some Tufts students explicitly cited this lecture when rejecting individual/behavioral explanations of health disparities in their discussion.

Here are two competing reactions to this experiment.

First, Kansas State students demonstrated more ideological diversity and had a better conversation than the one at Tufts because it was broader. They also explicitly considered a claim that is prominently made in public–that individuals are responsible for their own poor health. Debating that thesis would prepare them for public engagement, regardless of where they stand on the issue. The Tufts conversation, on the other hand, was constrained, possibly due to the excessive influence of professors who hold contentious views of their own. The Tufts classroom was in a “bubble.”

Alternatively, the Tufts students happened to have a better opportunity to learn than their K-State peers because they heard an expert share the current state of research, and they chose to reject certain views as erroneous. It’s not that they were better citizens or that they know more (in general) than their counterparts at KSU, but simply that their discussion of this topic was better informed. Insofar as the lecture on public health found a receptive audience in the Tufts classroom, it was because these students had previously absorbed valid lessons about structural inequality from other sources.

I am not sure how to adjudicate these interpretations without independently evaluating the thesis that health disparities are caused by structural factors. If that thesis is true, then the narrowing reflected at Tufts is “salutary.” If it is false, then the narrowing is “dangerous and noxious.”

I don’t think it’s satisfactory to say that we can never tell, because then we can never believe that anything is true. But it can be hard to be sure …

See also: modeling a political discussion; “Analyzing Political Opinions and Discussions as Networks of Ideas“; right and left on campus today; academic freedom for individuals and for groups; marginalizing odious views: a strategy; vaccination, masking, political polarization, and the authority of science etc.

the difference between human and artificial intelligence: relationships

A large-language model (LLM) like ChatGPT works by identifying trends and patterns in huge bodies of text previously generated by human beings.

For instance, we are currently staying in Cornwall. If I ask ChatGPT what I should see around here, it suggests St Ives, Land’s End, St Michael’s Mount, and seven other highlights. It derives these ideas from frequent mentions in relevant texts. The phrases “Cornwall,” “recommended” (or synonyms thereof), “St Ives,” “charming,” “art scene,” and “cobbled streets” probably occur frequently in close proximity in the texts it has analyzed, which is why ChatGPT strings them together to construct a sentence for my edification.
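
To give a sense of what “close proximity” means statistically, here is a toy example that merely counts how often pairs of words occur near each other in a tiny, invented corpus. An actual LLM does something far more sophisticated than this, but the sketch conveys the basic idea of learning from co-occurrence patterns in text. The sample sentences are made up.

```python
# A toy illustration of word co-occurrence, not how ChatGPT actually works.
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def cooccurrences(texts, window=10):
    """Count how often each ordered pair of words appears within `window` tokens."""
    counts = Counter()
    for text in texts:
        words = tokenize(text)
        for i, w in enumerate(words):
            for neighbor in words[i + 1 : i + 1 + window]:
                counts[(w, neighbor)] += 1
    return counts

corpus = [  # invented sentences standing in for a huge body of travel writing
    "St Ives is a charming town in Cornwall, recommended for its art scene",
    "Visitors to Cornwall admire the cobbled streets and art scene of St Ives",
]
counts = cooccurrences(corpus)
print(counts[("cornwall", "recommended")])  # pairs that co-occur often become likely suggestions
```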

We human beings behave in a somewhat similar way. We also listen to or read a lot of human-generated text, look for trends and patterns in it, and repeat what we glean. But if that is what it means to think, then an LLM has clear advantages over us. A computer can scan much more language than we can and uses statistics rigorously. Our generalizations suffer from notorious biases. We are more likely to recall ideas we have seen most recently, those that are most upsetting, those that confirm our prior assumptions, etc. That is why we have been using artificial means to improve our statistical inferences ever since we started recording possessions and tallying them by category thousands of years ago.

But we also think in other ways. Specifically, as intensely social and judgmental primates, we frequently scan our environments for fellow human beings whom we can trust in specific domains. A lot of what we believe comes from what a relatively small number of trusted sources have simply told us.

In fact, to choose what to see in Cornwall, I looked at the recommendations in The Lonely Planet and Rough Guide. I have come to trust those sources over the years–not for every kind of guidance (they are not deeply scholarly), but for suggestions about what to see and how to get to those places. Indeed, both publications offer lists of Cornwall’s highlights that resemble ChatGPT’s.

How did these publishers obtain their knowledge? First, they hired individuals whom they trusted to write about specific places. These authors had relevant bodily experience. They knew what it feels like to walk along a cliff in Cornwall. That kind of knowledge is impossible for a computer. But these authors didn’t randomly walk around the county, recording their level of enjoyment and reporting the places with the highest scores. Even if they had done that, the sites they would have enjoyed most would have been the ones that they had previously learned to understand and value. They were qualified as authors because they had learned from other people: artists, writers, and local informants on the ground. Thus, by reading their lists of recommendations, I gain the benefit of a chain of interpersonal relationships: trusted individuals who have shared specific advice with other individuals, ending with the guidebook authors whom I have chosen to consult.

In our first two decades of life, we manage to learn enough that we can go from not being able to speak at all to writing books about Cornwall or helping to build LLMs. Notably, we do not accomplish all this learning by storing billions of words in our memories so that we can analyze the corpus for patterns. Rather, we have specific teachers, living or dead.

This method for learning and thinking has drawbacks. For instance, consider the world’s five biggest religions. You probably think that either four or five of them are wrong about some of their core beliefs, which means that you see many billions of human beings as misguided about some ideas that they would call very important. Explaining why they are wrong, from an outsider’s perspective, you might cite their mistaken faith in a few deeply trusted sources. In your opinion, they would be better off not trusting their scriptures, clergy, or people like parents who told them what to believe.

(Or perhaps you think that everyone sees the same truth in their own way. That’s a benign attitude and perhaps the best one to hold, but it’s incompatible with what billions of people think about the status of their own beliefs.)

Our tendency to believe select people may be an excellent characteristic, since the meaning of life is more about caring for specific other humans than obtaining accurate information. But we do benefit from knowing truths, and our reliance on fallible human sources is a source of error. However, LLMs can’t fully avoid that problem because they use text generated by people who have interests and commitments.

If I ask ChatGPT “Who is Jesus Christ?” I get a response that draws exclusively from normative Christianity but hedges it with this opening language: “Jesus Christ is a central figure in Christianity. He is believed to be … According to Christian belief. …” I suspect that ChatGPT’s answers about religious topics have been hard-coded to include this kind of disclaimer and to exclude skeptical views. Otherwise, a statistical analysis of text about Jesus might present the Christian view as true or else incorporate frequent critiques of Christianity, either of which would offend some readers.

In contrast, my query about Cornwall yields confident and unchallenged assessments, starting with this: “Cornwall is a beautiful region located in southwestern England, known for its stunning coastline, picturesque villages, and rich cultural heritage.” This result could be prefaced with a disclaimer, e.g., “According to many English people and Anglophiles who choose to write about the region, Cornwall is …” A ChatGPT result is always a summary of what a biased sample of people have thought, because choosing to write about something makes you unusual.

For human beings who want to learn the truth, having new tools that are especially good at scanning large bodies of text for statistical patterns should prove useful. (Those who benefit will probably include people who have selfish or even downright malicious goals.) But we have already learned a fantastic amount without LLMs. The secret of our success is that our brains have always been networked, even when we have lived in small groups of hunter-gatherers. We intentionally pass ideas to other people and are often pretty good at deciding whom to believe about what.

Moreover, we have invented incredibly complex and powerful techniques for improving how many brains are connected. Posing a question to someone you know is helpful, but attending a school, reading an assigned book, finding other books in the library, reading books translated from other languages, reading books that summarize previous books, reading those summaries on your phone–these and many other techniques dramatically extend our reach. Prices send signals about supply and demand; peer-review favors more reliable findings; judicial decisions allow precedents to accumulate; scientific instruments extend our senses. These are not natural phenomena; we have invented them.

Seen in that context, LLMs are the latest in a long line of inventions that help human beings share what they know with each other, both for better and for worse.

See also: the design choice to make ChatGPT sound like a human; artificial intelligence and problems of collective action; how intuitions relate to reasons: a social approach; the progress of science.

analytical moral philosophy as a way of life

(These thoughts are prompted by Stephen Mulhall’s review of David Edmonds’ book, Parfit: A Philosopher and His Mission to Save Morality, but I have not read that biography or ever made a serious study of Derek Parfit.)

The word “philosophy” is ancient and contested and has labeled many activities and ways of life. Socrates practiced philosophy when he went around asking critical questions about the basis of people’s beliefs. Marcus Aurelius practiced philosophy when he meditated daily on well-worn Stoic doctrines of which he had made a personal collection. The Analects of Confucius may be “a record of how a group of men gathered around a teacher with the power to elevate [and] created a culture in which goals of self-transformation were treated as collaborative projects. These people not only discussed the nature of self-cultivation but enacted it as a relational process in which they supported one another, reinforced their common goals, and served as checks on each other in case they went off the path, the dao” (David Wong).

To practice philosophy, you don’t need a degree (Parfit didn’t complete his), and you needn’t be hired and paid to be a philosopher. However, it’s a waste of the word to use it for activities that aren’t hard and serious.

Today, most actual moral philosophers are basically humanities educators. We teach undergraduates how to read, write, and discuss texts at a relatively high level. Most of us also become involved in administration, seeking and allocating resources for our programs, advocating for our discipline and institutions, and serving as mentors.

Those are not, however, the activities implied by the ideal of analytic moral philosophy. In that context, being a “philosopher” means making arguments in print or oral presentations. A philosophical argument is credited to a specific individual (or, rarely, a small team of co-authors). It must be original: no points for restating what has already been said. It should be general. Philosophy does not encompass exercises of practical reasoning (deciding what to do about a thorny problem). Instead, it requires justifying claims about abstract nouns, like “justice,” “happiness,” or “freedom.” And an argument should take into consideration all the relevant previous points published by philosophers in peer-reviewed venues. The resulting text or lecture is primarily meant for philosophers and students of philosophy, although it may reach other audiences as well.

Derek Parfit held a perfect job for this purpose. As a fellow of All Souls College, he had hardly any responsibilities other than to write philosophical arguments and was entitled to his position until his mandatory retirement. He did not have to obtain support or resources for his work. He did not have to deliberate with other people and then decide what to say collectively. Nor did he have to listen to undergraduates and laypeople express their opinions about philosophical issues. (Maybe he did listen to them–I would have to read the biography to find out–but I know that he was not obliged to do so. He could choose to interact only with highly prestigious peers.)

Very few other people hold similar roles: the permanent faculty of the Institute for Advanced Study, the professors of the Collège de France, and a few others. Such opportunities could be expanded. In fact, in a robust social welfare state, anyone can opt not to hold a job and can instead read and write philosophy, although whether others will publish or read their work is a different story. But whether this form of life is worthy of admiration and social support is a good question–and one that Parfit was not obliged to address. He certainly did not have to defend his role in a way that was effective, persuading a real audience. His fellowship was endowed.

Mulhall argues that Parfit’s way of living a philosophical life biased him toward certain views of moral problems. Parfit’s thought experiments “strongly suggest that morality is solely or essentially a matter of evaluating the outcomes of individual actions–as opposed to, say, critiquing the social structures that deeply shape the options between which individuals find themselves having to choose. … In other words, although Parfit’s favoured method for pursuing and refining ethical thinking presents itself as open to all whatever their ethical stance, it actually incorporates a subtle but pervasive bias against approaches to ethics that don’t focus exclusively or primarily on the outcomes of individual actions.”

Another way to put this point is that power, persuasion, compromise, and strategy are absent in Parfit’s thought, which is instead a record of what one free individual believed about what other free individuals should do.

I am quite pluralistic and inclined to be glad that Parfit lived the life he did, even as most other people–including most other moral philosophers–live and think in other ways. Even if Parfit was biased (due to his circumstances, his chosen methods and influences, and his personal proclivities) in favor of certain kinds of questions, we can learn from his work.

But I would mention other ways of deeply thinking about moral matters that are also worthy and that may yield different kinds of insights.

You can think on your own about concrete problems rather than highly abstract ones. Typically the main difficulty is not defining the relevant categories, such as freedom or happiness, but rather determining what is going on, what various people want, and what will happen if they do various things.

You can introduce ethical and conceptual considerations to elaborate empirical discussions of important issues.

You can deliberate with other people about real decisions, trying to persuade your peers, hearing what they say, and deciding whether to remain loyal to the group or to exit from it if you disagree with its main direction.

You can help to build communities and institutions of various kinds that enable their members to think and decide together over time.

You can identify a general and relatively vague goal and then develop arguments that might persuade people to move in that direction.

You can strive to practice the wisdom contained in clichés: ideas that are unoriginal yet often repeated because they are valid. You can try to build better habits alone or in a group of people who hold each other accountable.

You can tentatively derive generalizations from each of these activities, whether or not you choose to publish them.

Again, as a pluralist, I do not want to suppress or marginalize the style that Parfit exemplified. I would prefer to learn from his work. But my judgment is that we have much more to learn from the other approaches if our goal is to improve the world. That is because the hard question is usually not “How should things be?” but rather “What should we do?”

See also: Cuttings: A book about happiness; the sociology of the analytic/continental divide in philosophy; does doubting the existence of the self tame the will?

defining state, nation, regime, government

As a political philosopher by training, and now political scientist by appointment, I have long been privately embarrassed that I am not sure how to define “state,” “government,” “regime,” and “nation.” On reflection, these words are used differently in various academic contexts. To make things more complicated, the discussion is international, and we are often dealing with translations of words that don’t quite match up across languages.

For instance, probably the most famous definition of “the state” is from Max Weber’s Politics as Vocation (1919). He writes:

Staat ist diejenige menschliche Gemeinschaft, welche innerhalb eines bestimmten Gebietes – dies: das „Gebiet“, gehört zum Merkmal – das Monopol legitimer physischer Gewaltsamkeit für sich (mit Erfolg) beansprucht.

[The] state is the sole human community that, within a certain territory–thus: territory is intrinsic to the concept–claims a monopoly of legitimate physical violence for itself (successfully).

Everyone translates the keyword here as “state,” not “government.” But this is a good example of how words do not precisely match across languages. The English word “government” typically means the apparatus that governs a society. The German word commonly translated as “government” (“die Regierung”) means an administration, such as “Die Regierung von Joe Biden” or a Tory Government in the UK. (In fact, later in the same essay, Weber uses the word Regierung that way while discussing the “typical figure of the ‘grand vizier’” in the Middle East.) Since “government” has a wider range of meanings in English, it wouldn’t be wrong to use it to translate Weber’s Staat.

Another complication is Weber’s use of the word Gemeinschaft inside his definition of “the State.” This is a word with such specific associations that it is occasionally used in English in place of our vaguer word “community.” A population is not a Gemeinschaft, but a tight association can be. Thus to translate Weber’s phrase as “A state is a community …” is misleading.

For Americans, a “state” naturally means one of our fifty subnational units, but in Germany those are Länder (cognate with “lands”). The word “state” derives from the Latin status, which is as “vague a word as ratio, res, causa” (Paul Shorey, 1910) but can sometimes mean a constitution or system of government. Cognates of that Latin word end up as L’État, el Estado and similar terms that have a range of meanings, including the subnational units of Mexico and Brazil. In 1927, Mussolini said, “Tutto nello Stato, niente al di fuori dello Stato, nulla contro lo Stato” (“Everything in the State, nothing outside the State, nothing against the State”). I think he basically meant that he was in charge of everything he could get his hands on. Louis XIV is supposed to have said “L’État c’est moi,” implying that he was the government (or the nation?), but that phrase may be apocryphal; an early documented use of L’État to mean the national government dates to 1799. In both cases, the word’s ambiguity is probably one reason it was chosen.

“Regime” can have a negative connotation in English, but political theorists typically use it to mean any government plus such closely related entities as the press and parties and prevailing political norms and traditions. Regimes can be legitimate, even excellent.

If these words are used inconsistently in different contexts, then we can define them for ourselves, as long as we are clear about our usage. I would tend to use the words as follows:

  • A government: either the legislative, executive, and judicial authority of any entity that wields significant autonomous political power (whether it’s territorial or not), or else a specific group that controls that authority for a time. By this definition, a municipality, the European Union, and maybe even the World Bank may each count as a government.

(A definitional challenge is deciding what counts as “political” power. A company, a church, a college, an insurgent army, or a criminal syndicate can wield power and can use some combination of legislative, executive, and/or judicial processes to make its own decisions. Think of canon law in the Catholic Church or an HR appeals process inside a corporation. Weber would say that the fundamental question is whether an entity’s power depends on its own monopolistic use of legitimate violence. For instance, kidnapping is a violent way to extract money, but it does not pretend to be legitimate and it does not monopolize violence. Taxation is a political power because not paying your taxes can ultimately land you, against your will, in a prison that presents itself as an instrument of justice. Not paying a private bill can also land you in jail, but that’s because the government chooses to enforce contracts. Your creditor is not a political entity; the police force is. However, when relationships between a government and private entities are close, or when legitimacy is controversial, or when–as is typical–governments overlap, these distinctions can be hard to maintain and defend.)

  • A state: a government plus the entities that it directly controls, such as the military, police, or public schools. For example, it seems most natural to say that a US government controls the public schools, but not that a given school is part of the government. Instead, it is part of the state. Likewise, an army can be in tension with the government, yet both are components of the state.
  • A regime: the state plus all non-state entities that are closely related to it, e.g., political parties, the political media, and sometimes the national clergy, powerful industries, etc. We can also talk about abstract characteristics, such as political culture and values, as components of a regime. A single state may change its regime, abruptly or gradually.
  • A country: a territory (not necessarily contiguous) that has one sovereign state. It may have smaller components that also count as governments but not as countries.
  • A nation: a category of people who are claimed (by the person who is using this word) to deserve a single state that reflects their common identity and interests. Individuals can be assigned to different nations by different speakers.
  • A nation-state: a country with a single functioning and autonomous state whose citizens widely see themselves as constituting a single nation. Some countries are not nations, and vice-versa. People may disagree about whether a given country is a nation-state, depending on which people they perceive to form a nation.

See also: defining capitalism; avoiding a sharp distinction between the state and the private sphere; the regime that may be crumbling; what republic means, revisited etc.

does doubting the existence of the self tame the will?

I like the following argument, versions of which can be found in many traditions from different parts of the world:

  1. A cause of many kinds of suffering is the will (when it is somehow excessive or misplaced).
  2. Gaining something that you desire does not reduce your suffering; you simply will something else.
  3. However, one’s will can be tamed.
  4. Generally, the best way to manage the will is to focus one’s mind on other people instead of oneself. Thus,
  5. Being ethical reduces one’s suffering.

In some traditions, notably in major strands of Buddhism and in Pyrrhonism, two additional points are made:

  6. The self does not actually exist. Therefore,
  7. It is irrational to will things for oneself.

Point #7 is supposed to provide both a logical and a psychological basis for #4. By realizing that I do not really exist, I reduce my attachment to my (illusory) self and make more space to care about others, which, in turn, makes me happier.

Point #6 is perfectly respectable. Plenty of philosophers (and others) who have considered the problem of personal identity have concluded that an ambitious form of the self does not really exist. (For instance, David Hume.)

But if the self doesn’t exist, does it really follow that we should pay more attention to other people? We might just as well reason as follows:

  6. The self does not really exist. Therefore,
  7a. Other people do not really exist as selves. Therefore,
  8a. It is irrational to be concerned about them.

Or

  6. The self does not really exist. Therefore,
  7b. It is impossible for me to change my character in any lasting way. Therefore,
  8b. There is no point in trying to make myself more ethical.

Striving to be a better or happier person is not a sound reason for doubting the existence of the self. This doubt may do more harm than good. If there actually is no self, that is a good reason not to believe in one. But then we are obliged to incorporate skepticism about personal identity into a healthy overall view. The best way might be some version of this:

  6. The self does not really exist. Nevertheless,
  7c. I would be wise to treat other people as if they were infinitely precious, durable, unique, and persistent things (selves).

I think it is worth getting metaphysics right, to the best of our ability. For example, it is worth trying to reason about what kind of a thing (if anything) a self is. However, I don’t believe that metaphysical beliefs entail ways of life in a straightforward way, with monotonic logic.

Any given metaphysical view is usually compatible with many different ways of being. It may even strongly encourage several different forms of life, depending on how a person absorbs the view. Thus I am not surprised that some people (notably, thoughtful Buddhists) have gained compassion and equanimity by adopting the doctrine of no-self, even though the same doctrine could encourage selfishness in others, and some people may become more compassionate by believing in the existence of durable selves. In fact, many have believed in the following argument:

  1. Each person (or sentient being) has a unique, durable, essential being.
  2. I am but one out of billions of these beings. Therefore,
  3. It is irrational to will things for myself.

The relationship between an abstract idea and a way of being is mediated by “culture,” meaning all our other relevant beliefs, previous examples, stories, and role-models. We cannot assess the moral implications of an idea without understanding the culture in which it is used. For instance, the doctrine of no-self will have different consequences in a Tibetan monastery versus a Silicon Valley office park.

We cannot simply adopt or join a new culture. That would require shedding all our other experiences and beliefs, which is impossible. Therefore, we are often in the position of having to evaluate a specific idea as if it were a universal or culturally neutral proposition that we could adopt all by itself. For instance, that is what we do when we read Hume and Kant (or Nagarjuna) on the question of personal identity and try to decide what to think about it. This seems a respectable activity; I only doubt that, on its own, it will make us either better or worse people.

See also: notes on religion and cultural appropriation: the case of US Buddhism; Buddhism as philosophy; how to think about the self (Buddhist and Kantian perspectives); individuals in cultures: the concept of an idiodictuon. And see “The Philosophic Buddha” by Kieran Setiya, which prompted these thoughts.

how the structure of ideas affects a conversation

According to the “interactionist” theory of Mercier & Sperber 2017 (which I discussed on Monday), human beings evolved to make smart decisions in groups, and that requires us to exchange reasons. We naturally want to express reasons for our intuitions and critically assess other people’s reasons for their beliefs. It matters how well we perform these two tasks.

One familiar kind of person frustrates discussion by constantly linking every belief that he endorses back to one foundational principle, whether it is a constitutional right to individual freedom, God’s will, or equality for all. The problem is not the core belief itself but the way his whole network of beliefs is structured: unless you happen to share his foundational principle, there is no way to reason with him about the beliefs that depend on it.

A different familiar figure is the person who offers many ideas but cannot provide a reason for most of them. If we think of a reason as a link between two ideas, then this person’s network has no links. Whereas the first network was too centralized, the second is too disconnected.
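
To make these structural contrasts a little more concrete, here is a small sketch with invented belief networks (not the diagrams discussed below, and not data from the study). It computes three simple features: density falls when beliefs are offered without reasons, the number of connected components rises when beliefs are disconnected from one another, and degree centrality peaks when everything hangs on a single foundational node.

```python
# Illustrative only: invented belief networks, not the diagrams from the study.
import networkx as nx

# A "hub" thinker: every belief is justified by one foundational principle.
hub = nx.Graph()
hub.add_edges_from(("core principle", f"belief {i}") for i in range(1, 5))

# A "disconnected" thinker: several beliefs, no reasons linking them.
scattered = nx.Graph()
scattered.add_nodes_from(f"belief {i}" for i in range(1, 6))

for name, g in (("hub", hub), ("scattered", scattered)):
    density = nx.density(g)                          # share of possible reason-links present
    components = nx.number_connected_components(g)   # how many islands of belief there are
    top_share = max(nx.degree_centrality(g).values())  # 1.0 when one node anchors everything
    print(f"{name}: density={density:.2f}, components={components}, most central node={top_share:.2f}")
```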

We don’t literally possess networks of beliefs; rather, a network graph is a way of representing our reasoning. I conjecture that the formal features of such a network can predict whether the person will deliberate well. To illustrate (but not to prove) that conjecture, I will discuss two Kansas State students who participated in an online discussion as part of the research that led to Levine, Eagan & Shaffer 2022.

Before discussing how socioeconomic factors affect health, all the students in this study wrote short passages describing their personal views. Adèle and Beth are pseudonyms for white undergraduate KSU women, aged 21 and 22, whose mothers had not attended college.

Adèle wrote:

In my opinion, race should not influence the human health and well-being because every person should have an opportunity to succeed no matter what race they are. Social class influences the health because a healthy lifestyle is more expensive, but also a healthy lifestyle means physical activity and that does not depend on the social class, it depends on individual motivation. A social factor would be the people that we surround ourselves with. If we interact with people that live a healthy lifestyle, we can get influenced and borrow some of their habits. But is also true that for the lower class is the hardest to live a healthy lifestyle because they cannot afford one.

I think we could informally diagram her view with the graph below. The nodes are her stated beliefs. The arrows are her causal claims, except where I’ve denoted them as normative implications. This graph does not explain why Adèle formed her intuitions (i.e., why these beliefs formed in her mind) but rather represents the explicit reasons that she offers to explain her beliefs to others.

Beth wrote:

My thoughts on the impact of race, class and social factors on determinants on human health and wellbeing are that no matter what your race or social class is, you should be treated equally because the color of a person’s skin should not affect the way you view them. Both a black and white person can put in the same amount of hard work and effort to be able to reap the benefits of a happy healthy life. However, with that being said, there are some people who do not work as hard and that puts them in a lower social class than others. possibly making their overall health and wellbeing less than someone who is up in a higher social class.

I think we could informally diagram Beth’s view like this:

I propose that Adèle will be a better conversation-partner than Beth, not because her beliefs are superior but because of the structures represented by these two graphs.

Adèle has generated several independent intuitions (moral and empirical) that push in somewhat different directions. They are all connected to the idea of health, because that is the assigned topic, but there is some wiggle room between her beliefs, and she has identified several causes of the same outcomes. Adèle and I could talk about several of her intuitions. I could ask her to offer reasons for each one, and we could turn to another issue if we found that we disagreed on that point.

Meanwhile, Beth only offers reasons for one conclusion: that people should not pay attention to racial differences. I would worry that we couldn’t engage once she had made that point. Her sentence that begins, “However, with that being said,” does not actually present a conflicting point but elaborates on her main argument.

When asked whether “Everyday people from different parties can have civil, respectful conversations about politics,” Adèle agreed, but Beth strongly disagreed. Adèle also rated the online discussion more highly than Beth did as a learning experience. This is suggestive evidence that Adèle was more deliberative (in this context) than Beth.

Beth did participate in the online discussion four times and explicitly referred to previous commenters with openings that look civil, such as: “Although I agree with everything you have said, I think. …” However, all four of her contributions were variations on her basic point that success is due to hard work.

In contrast, when Adèle saw a post in the online discussion that recounted a story about a white woman who had succeeded in life due to her own hard work, she responded deliberatively, trying to connect to the previous writer’s ideas. She began: “I saw you talked about how hard work and effort can help you achieve a better lifestyle and I agree with it.” She had expressed this belief in her personal statement prior to the discussion, and it is represented in the first graph above.

She added, “But we also need to have in mind the people that grow up in less fortunate families and have different aspirations that some people have.” Here she introduced another belief that she had already held. She supported it with reasons: “For some of us, going to college is a thing that we knew it’s going to happen in our lives and we never question if we might go or no. But for some people they do not have this opportunity to afford college. … I believe that some people even if they are willing to put hard work and effort, not all of them are guaranteed to succeed.” She then acknowledged a criticism and addressed it: “Of course, there are people who succeeded but I believe that there are a lot of them who did not. And for a person who is less fortunate is not too easy to live a healthy lifestyle.”

My claim is not that Adèle formed better beliefs by reasoning. She may have developed her beliefs intuitively, as we generally do. Nor is there evidence that she revised her beliefs in response to objections, any more than Beth did. My claim is that Adèle contributed better to the group’s discussion because the structure of her reasons permitted more interaction.

(Two limitations of this post: First, I chose the examples to illustrate my main point. That does not prove the general pattern. Second, my diagrams could be biased. For instance, Beth’s belief that “motivation determines health,” which I depicted above as one node in her network, could be unpacked to look like this:

Adding those four nodes to her map would make her whole graph look almost as complex as Adèle’s. I am still looking for less subjective approaches to mapping text. In a lot of my current work, I elicit network structures by asking people multiple-choice questions, rather than graphing their open-ended statements, because the quantitative data seems more reliable for making interpersonal comparisons.)

Sources: Mercier, Hugo and Dan Sperber, The Enigma of Reason, Cambridge, MA: Harvard University Press 2017; Levine, Peter, Brendan Eagan, and David Shaffer (2022), “Deliberation as an Epistemic Network: A Method for Analyzing Discussion,” doi: 10.1007/978-3-030-93859-8_2. See also modeling a political discussion; individuals in cultures: the concept of an idiodictuon; and how intuitions relate to reasons: a social approach.

how intuitions relate to reasons: a social approach

We might like to think that when we form a belief, it comes after we have reviewed reasons. We canvass all the relevant reasons (pro and con), evaluate each of them, weigh and combine them, and choose the belief that follows best. In that case, our reasons cause our beliefs, whether they are about facts or about values. We might also like to think that when someone offers a strong critique of our reasons, we will be motivated to change our beliefs.

A wealth of empirical evidence suggests that this process is exceedingly rare. Much more often, each of us forms our beliefs intuitively, in the specific sense that we are not conscious of reasons. It feels as if we just have the beliefs. Then, if someone asks why we formed a given belief, we come up with reasons that justify our intuition. We may even ask ourselves for reasons when we wonder why we had a thought.

This process is retrospective. We did not already have reasons. We find them to justify or rationalize our intuitions to ourselves or to other people (Graham, Nosek, Haidt, Iyer, Koleva, & Ditto 2011, p. 368; Haidt 2012, pp. 27-51; Swidler 2001, pp. 147-8; Thiele 2006).

This theory is concerning. For one thing, if you are already sure of any belief, then you can probably find a plausible reason to justify it and a plausible rebuttal to any critique. We don’t sound like very rational or reasonable creatures, ones who are good at assessing and combining reasons and drawing appropriate conclusions. We sound like lawyers in our own defense, finding grounds to justify what we merely assumed even when the evidence is weak.

Further, all the explicit discourse that we observe around us–all those meetings and articles and legal briefs and speeches and sermons–begins to seem like a foolish waste of effort. It’s the noise of people rationalizing what they already thought.

Some would add that intuitions about moral and political matters often seem to reflect self-interest. The rich are intuitively favorable to markets; men are biased for patriarchy; Americans, for the USA. In that case, reasoning is moot. The causal pathway runs from interests to intuitions and from there to justifications, and the justifications accomplish very little.

This highly skeptical view of reason results from focusing on individual human beings at the moment (or within a short timeframe) when they form beliefs. When we look outward from such moments, we find human beings meaningfully exchanging reasons that matter.

First, let’s look before the moment of intuition. Where did it come from? Let’s consider, for example, a person who hears about a proposed new federal program and has a strong intuition against it. This person may not explicitly consider reasons for and against federal intervention; the intuition may arise automatically. But before the moment of intuition, this person had probably heard many explicit critiques of government. Maybe a parent made a memorable complaint about the government when this person was still a child, and that attitude was reinforced by speeches, articles, stories, and other forms of intentional discourse. This individual may also have had direct, personal experiences, such as facing a burdensome regulation or having an unpleasant encounter with a bureaucrat. However, we must interpret such experiences using larger categories (such as “regulation” and “bureaucrat”) that we get from other people.

This process of forming an attitude that then generates an intuition may be more or less reasonable. The attitude might be wise and well-substantiated or else a mere prejudice. My point is not that people reason well but that discourse is often prior to intuition (cf. Cushman 2020). People’s intuitions about matters like government programs are influenced by the prevailing discourses of their communities. It matters whether you came of age in a neoliberal market economy, under collapsing state communism, in a stable social democracy, or in a Shiite theocracy. In this sense, reasons influence intuitions, but they may not be reasons that an individual explicitly considers before forming a belief. Rather, reasons circulate in the discourse of a community.

Now let’s look after the moment of intuition, to the time when a person answers the question: Why? For instance, why are you–or why am I–against this government program? The individual begins to generate reasons for the original intuition: Government never works well. Taxes are already too high. It’s not fair that lazy people should get a free ride — etc. These reasons were not in the person’s head prior to the intuition, and they did not cause it, yet they matter in several ways.

First, as intensely social creatures, we use reasons to establish our value to groups. People who can offer reasons that sound plausible, consistent, or even insightful earn respect and influence. Sometimes, glib sophists gain respect with reasons that shouldn’t impress others, but to a significant extent, we judge other people’s reasons well (Mercier & Sperber 2017). We did not evolve to be magnificently rational thinkers who reach our individual judgments solely on the basis of evidence, but we did evolve to be remarkably perceptive social mammals who are pretty good at judging each other.

Hugo Mercier and Dan Sperber write:

Whereas reason is commonly viewed as a superior means to think better on one’s own, we argue that it is mainly used in our interactions with others. We produce reasons in order to justify our thoughts and actions to others and to produce arguments to convince others to think and act as we suggest. We also use reason to evaluate not so much our own thought as the reasons others produce to justify themselves or to convince us (Mercier & Sperber 2017, pp. 7-8).

Because we want to be respected for our judgments, we will go to considerable effort to rationalize our intuitions. That effort can take the form of skillful defense-lawyering: cleverly finding tendentious grounds for our positions. But it can also lead us to revise our beliefs because we do not want to be caught with inconsistencies, selfish biases, or blatant factual errors. Revision is less common than we would assume if people were ideally rational, but it occurs because we care about our reputations (Mercier & Sperber 2017, p. 146).

We are also motivated to find and express good reasons because doing so affords influence. For one thing, we can shape other people’s intuitions later on. The parent who told her child not to trust the government was trying to influence that child’s intuitions, as were other people who spoke on both sides of the issue later in the same person’s life.

Human beings can assess the cogency of other people’s explicit reasons, not with perfect reliability, but often pretty well. That means that if you have an intuition that you want other people to share, a good strategy is to come up with coherent reasons that support it. Good reasons may not overcome bias against you or favoritism for others. They may not overcome raw power, money, or lies. But they raise one’s odds of being influential, which is a source of power. Whether valid reasons persuade depends on how well our institutions are designed, and properly deliberative institutions are ones that reward the giving of good reasons (Habermas 1962, 1973).

Finally, when reasons are widely expressed and rarely opposed, they become norms, and norms are crucial for cooperation. For instance, in large swaths of an advanced liberal society, there are now explicit norms against gender discrimination. These norms are compatible with: 1) privately held explicit sexist opinions, 2) unrecognized or implicit sexist biases, 3) blatantly sexist subcultures, and 4) bad reasoning, such as neglecting to consider gender when reaching a conclusion about a policy. Nevertheless, the norm against gender discrimination exists and matters. Official policies are measured against it, and bodies like courts and school systems are affected by it. In turn, their policies affect people’s mentalities over time.

Norms have the additional advantage that they do not have to be stated. This is crucial because we are not able to express all the premises of our arguments. In justifying what we believe, every point we can make depends on other points, which depend on others, pretty much ad infinitum; and we cannot address that whole web. Instead, we must assume agreement about most of the context. In Paul Grice’s terminology, we “implicate” beliefs that we assume others share (Grice 1967).

You can recognize the power of implicature when it goes wrong because the speaker’s belief is not working as a norm. For example, the phrase “Black Lives Matter” implicates the (true) premise that black lives have not been valued and that black people face pervasive discrimination and violence. Surveys show that a majority of white Americans do not believe this and even think the opposite, that whites face more discrimination. I suspect that at least some of them really don’t hear the implicature. They take “Black Lives Matter” to mean that only black lives should matter.

This is a troubling example at several levels, but it does not suggest that we lack norms entirely. On the contrary, our discourse is pervaded with shared assumptions, many of which developed over time as a result of intentional argumentation and persuasion. We can now implicate that violence is undesirable or that nature is precious even though both beliefs were widely rejected a century ago. Again, the norm that violence is bad can coexist with much implicit and hypocritical support for violence and with subcultures that openly promote violence. But it still has power as a belief that most people in many contexts can assume most other people publicly endorse, i.e., as a norm.

In turn, forming and shaping norms is a matter of grave significance, and reasons are tools for influencing norms. The paradigmatic case of reasoning is not an individual who identifies reasons and reaches an appropriate conclusion, but a community that shifts its norms when they are explicitly contested in public speech.

As Mercier and Sperber write:

Invocations and evaluations of reasons are contributions to a negotiated record of individuals’ ideas, actions, responsibilities, and commitments. This partly consensual, partly contested social record of who thinks what and who did what for which reasons plays a central role in guiding cooperative or antagonistic interactions, in influencing reputations, and in stabilizing social norms. Reasons are primarily for social consumption (Mercier & Sperber 2017, p. 127).

This is basically an empirical account of what the species Homo sapiens does when we offer what we call “reasons.” It does not tell us how good this process is, in other words, whether it is a procedure that generates sound or valid results. Assessing human reasoning feels increasingly urgent now that we also have machines that generate sentences that convey reasons by using different procedures from ours.

Assessing our activity of reasoning requires epistemology, not empirical psychology alone. Fortunately, we have a sophisticated philosophical account of reasoning that is broadly consistent with the empirical account of Mercier and Sperber (Fenton 2019).

Robert Brandom argues that any claim (any thought that can be expressed in a sentence) has both antecedents and consequences: “upstream” and “downstream” links “in a network of inferences.” For instance, if you say, “It is morning,” you must have reasons for that claim (e.g., the alarm bell rang or the sun is low in the eastern sky) and you can draw inferences from it, such as, “It is time for breakfast.” (This is my example, not Brandom’s.) Reasoning is a matter of making these connections explicit. When making a claim, we propose that others can also use it “as a premise in their reasoning.” That means that we implicitly promise to divulge our own reasons and implications. “Thus one essential aspect of this model of discursive practice is communication: the interpersonal, intra-content inheritance of entitlement to commitments.” In sum, “The game of giving and asking for reasons is an essentially social practice.”

For Brandom, the same logic applies to “ought” claims and other normative sentences as to factual claims: whenever we make a statement, including a value-judgment, we owe a discussion of its reasons and implications. Brandom suggests that communication can be inward: we can reason in a “hypothetically” social way by thinking in our heads. But I would add—based on a safe empirical generalization about human beings—that we are quite bad at seeing the reasons for and the implications of our judgments about ethical and political matters that affect other people. Therefore, we badly need actual social reasoning: giving reasons and listening to the real reactions from other people (Brandom 2000, Kindle locs. 1733, 1746-7, 1767-9, 2060, 1799).    

See also opinion is dynamic and relational; in defense of (some) implicit bias. Sources: Cushman, Fiery. “Rationalization is rational.” Behavioral and Brain Sciences 43 (2020); Graham, Jesse, Nosek, Brian A., Haidt, Jonathan, Iyer, Ravi, Koleva, Spassena, & Ditto, Peter H. 2011. Mapping the Moral Domain. Journal of Personality and Social Psychology, 101; Fenton, William (2019), “On the Philosophy and Psychology of Reasoning and Rationality,” Kent State MA thesis; Grice, Paul, “Logic and Conversation” (1967), in Grice, Studies in the Way of Words (Harvard, 1989), pp. 22-44; Habermas, Jürgen. (1962) 1991. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Translated by Thomas Burger. Cambridge, MA: MIT Press; Habermas, Jürgen. (1973) 1975. Legitimation Crisis. Translated by Thomas McCarthy. Boston: Beacon Press; Haidt, Jonathan. 2012. The Righteous Mind: Why Good People Are Divided by Politics and Religion. New York: Vintage; Mercier, Hugo and Dan Sperber, The Enigma of Reason, Cambridge, MA: Harvard University Press 2017; Swidler, Ann. 2001. Talk of Love: How Culture Matters. Chicago: University of Chicago Press; Thiele, Leslie Paul. 2006. The Heart of Judgment: Practical Wisdom, Neuroscience, and Narrative. Cambridge: Cambridge University Press.