the age of cybernetics

A pivotal period in the development of our current world was the first decade after WWII. Much happened then, including the first great wave of decolonization and the solidification of democratic welfare states in Europe, but I’m especially interested in the intellectual and technological developments that bore the (now obsolete) label of “cybernetics.”

I’ve been influenced by reading Francisco Varela, Evan Thompson, and Eleanor Rosch’s The Embodied Mind: Cognitive Science and Human Experience (first ed. 1991; revised ed. 2017), but I’d tell the story in a somewhat different way.

The War itself saw the rapid development of entities that seemed analogous to human brains. Those included the first computers, radar, and mechanisms for directing artillery. They also included extremely complex organizations for manufacturing and deploying arms and materiel. Accompanying these pragmatic breakthroughs were successful new techniques for modeling complex processes mathematically, plus intellectual innovations such as artificial neurons (McCulloch & Pitts 1943), feedback (Rosenblueth, Wiener, and Bigelow 1943), game theory (von Neumann & Morgenstern 1944), stored-program computers (Turing 1946), information theory (Shannon 1948), systems engineering (Bell Labs, 1940s), and related work in economic theory (e.g., Schumpeter 1942) and anthropology (Mead 1942).

Perhaps these developments were overshadowed by nuclear physics and the Bomb, but even the Manhattan Project was a massive application of systems engineering. Concepts, people, money, minerals, and energy were organized for a common task.

After the War, some of the contributors recognized that these developments were related. The Macy Conferences, held regularly from 1942 to 1960, drew a Who’s Who of scientists, clinicians, philosophers, and social scientists. The topics of the first post-War Macy Conference (March 1946) included “Self-regulating and teleological mechanisms,” “Simulated neural networks emulating the calculus of propositional logic,” “Anthropology and how computers might learn how to learn,” “Object perception’s feedback mechanisms,” and “Deriving ethics from science.” Participants demonstrated notably diverse intellectual interests and orientations. For example, both Margaret Mead (a qualitative and socially critical anthropologist) and Norbert Wiener (a mathematician) were influential.

Wiener (who had graduated from Tufts in 1909 at age 14) argued that the central issue could be labeled “cybernetics” (Wiener & Rosenblueth 1947). He and his colleagues derived this term from the ancient Greek word for the person who steers a boat. For Wiener, the basic question was how any person, another animal, a machine, or a society attempts to direct itself while receiving feedback.

According to Varela, Thompson, and Rosch, the ferment and diversity of the first wave of cybernetics was lost when a single model became temporarily dominant. This was the idea of the von Neumann machine:

Such a machine stores data that may symbolize something about the world. Human beings write elaborate and intentional instructions (software) for how those data will be changed (computation) in response to new input. There is an input device, such as a punchcard reader or keyboard, and an output mechanism, such as a screen or printer. You type something, the processor computes, and out comes a result.
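
To make the pattern concrete, here is a minimal sketch in Python (my own toy illustration, not any historical machine): instructions and data share one memory, and a processor steps through the instructions one by one, transforming stored data in response to input.

```python
# A toy stored-program machine: data and instructions sit in one memory-like
# structure, and a processor loop fetches and executes instructions in order.

def run(program, memory, inputs):
    pc = 0                     # program counter
    inputs = iter(inputs)
    output = []
    while pc < len(program):
        op, *args = program[pc]
        if op == "READ":       # input device -> memory
            memory[args[0]] = next(inputs)
        elif op == "ADD":      # computation over stored data
            memory[args[0]] = memory[args[1]] + memory[args[2]]
        elif op == "PRINT":    # memory -> output device
            output.append(memory[args[0]])
        pc += 1
    return output

# "You type something, the processor computes, and out comes a result."
program = [("READ", "x"), ("READ", "y"), ("ADD", "sum", "x", "y"), ("PRINT", "sum")]
print(run(program, {}, [2, 3]))  # [5]
```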

One can imagine human beings, other animals, and large organizations working like von Neumann machines. For instance, we get input from vision, we store memories, we reason about what we experience, and we say and do things as a result. But there is no evident connection between this architecture and the design of the actual human brain. (Where in our head is all that complicated software stored?) Besides, computers designed in this way made disappointing progress on artificial intelligence between 1945 and 1970. The 1968 movie 2001: A Space Odyssey envisioned a computer with a human personality by the turn of our century, but real technology has lagged far behind that.

The term “cybernetics” had named a truly interdisciplinary field. After about 1956, the word faded as the intellectual community split into separate disciplines, including computer science.

This was also the period when behaviorism was dominant in psychology (presuming that all we do is to act in ways that independent observers can see–there is nothing meaningful “inside” us). It was perhaps the peak of what James C. Scott calls “high modernism” (the idea that a state can accurately see and reorganize the whole society). And it was the heyday of “pluralism” in political science (which assumes that each group that is part of a polity automatically pursues its own interests). All of these movements have a certain kinship with the von Neumann architecture.

An alternative was already considered in the era of cybernetics: emergence from networks. Instead of designing a complex system to follow instructions, one can connect numerous simple components into a network and give them simple rules for changing their connections in response to feedback. The dramatic changes in our digital world since ca. 1980 have used this approach rather than any central design, and now the analogy of machine intelligence to neural networks is dominant. Emergent order can operate at several levels at once; for example, we can envision individuals whose brains are neural networks connecting via electronic networks (such as the Internet) to form social networks and culture.
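
Here is a minimal sketch of that idea (a toy illustration under my own simplifying assumptions, not a model of any real neural or social network): a handful of simple units, each connection carrying a weight, and one local rule that nudges the weights whenever feedback says the output was wrong.

```python
# A tiny network of connections that organizes itself from feedback alone:
# no one writes instructions for the task; the weights settle into a solution.
import random

def train(examples, n_inputs, rate=0.1, epochs=50):
    weights = [random.uniform(-1, 1) for _ in range(n_inputs + 1)]  # +1 bias
    for _ in range(epochs):
        for inputs, target in examples:
            activation = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
            output = 1 if activation > 0 else 0
            error = target - output                         # feedback signal
            weights[0] += rate * error                       # adjust connections
            weights[1:] = [w + rate * error * x for w, x in zip(weights[1:], inputs)]
    return weights

# Learning the OR function from examples rather than explicit instructions.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train(examples, n_inputs=2))
```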

I have sketched this history–briefly and unreliably, because it’s not my expertise–without intending value-judgments. I am not sure to what extent these developments have been beneficial or destructive. But it seems important to understand where we’ve come from to know where we should go from here.

See also: growing up with computers; ideologies and complex systems; The truth in Hayek; the progress of science; the human coordination involved in AI; the difference between human and artificial intelligence: relationships

thinking both sides of the limits of human cognition

My dog knows many things about me, like whether I’m about to take him out for a walk and even what I mean when I say the words “dog park.” He has questions about me–for instance, when will I come home?–and sometimes gets answers. These are his “known unknowns.” He can let me know he has questions by cocking his head.

There are also some things he doesn’t know that he isn’t even aware of not knowing. For example, he’s allowed to run off-leash in the park because the city of Cambridge, Massachusetts has licensed him as a resident pet. That status is designated by the tag under his neck. He knows a lot about the park, and he’s aware of the tag (at least when it’s being put on him for the first time), but there’s no path to his understanding that a city is a political jurisdiction that derives power from the state to grant and withhold rights to dogs, which is why he’s running around in the grass.

To use the vocabulary pioneered by Jakob von Uexküll–which has been influential in very disparate intellectual traditions–my dog has an “umwelt,” a model of his world that is shaped, or perhaps “enacted,” by his biophysical characteristics (such as his sensitive nose and inability to speak) and their interactions with the objects he encounters (Varela, Thompson & Rosch 1991). I have a different umwelt, even though the two of us may be walking together through the same space at the same time. For me, we are in a city park, because I use words and concepts about social organization. For Luca, we are in a field luxuriously supplied with interestingly stinky smells and other dogs.

I know many things about Luca, such as his preference for the park over regular city streets. I know that his sense of smell is at least 10,000 times more acute than mine, and I can infer that he is much more interested than I am in the scents around the perimeter of the dog park because he derives far more information from them than I could. I could learn more about what specifically he smells there and even which chemical compounds are involved.

Some would say that I will never feel what it’s like to smell as well as he does. Others would reply that anything true about what he senses could be captured in my language and tested empirically by human beings, and it’s empty to say that we cannot know what he experiences.

I might have “unknown unknowns” about my dog. They could be unknown from my particular historical position, in the same way that people hundreds of years ago didn’t know to wonder about mammals’ neural networks. Or they could be permanently unknown to Homo sapiens because we have a different experience from a dog’s and we don’t even know what to ask.

One view of that last statement is that it’s false, because dogs and people are highly similar. But what would we say about bats (Nagel 1974), or extraterrestrials with far bigger brains than ours? Maybe we miss aspects of their world, much as Luca misses the legal significance of the tag on his collar.

Another view is that talking about permanently unknown-unknowns is empty, or even nonsense. But nonsense is not necessarily bad for one’s character and state of mind. We might ask whether it is wise or foolish to reflect on the abstract possibility of thought beyond our capacity to think. A classic text for that discussion is the Preface to Wittgenstein’s Tractatus, where he says:

The book will, therefore, draw a limit to thinking, or rather—not to thinking, but to the expression of thoughts; for, in order to draw a limit to thinking we should have to be able to think both sides of this limit (we should therefore have to be able to think what cannot be thought).

The limit can, therefore, only be drawn in language and what lies on the other side of the limit will be simply nonsense.

Wittgenstein does not attempt to write about what lies beyond the limit because he does not write nonsense. But I think it remains debated whether he advises us to reflect on the limit “from both sides.” One way to do that would be to grasp and truly feel that we inhabit an umwelt that is not the same as the world in-itself–in other words, that there are things beyond our ken.

On one hand, I am a little suspicious of intimations about the actual nature of what lies beyond the line. I suspect that those vague ideas are generated by our very human hopes and fears and don’t represent signals from beyond our umwelt. On the other hand, I find it consoling that there is a limit to the field in which our senses can run (even with technical assistance), and that there must be much beyond it–just as a whole city begins outside the fence of our park.

This aphorism by Dogen (who lived 1200-1253 CE) suggests a similar idea:

Birth is just like riding in a boat. You raise the sails and you steer. Although you maneuver the sail and the pole, the boat gives you a ride, and without the boat you couldn’t ride. But you ride in the boat, and your riding makes the boat what it is. Investigate a moment such as this. At just such a moment, there is nothing but the world of the boat. The sky, the water, and the shore are all the boat’s world, which is not the same as a world that is not the boat’s. Thus you make birth what it is; you make birth your birth. When you ride in a boat, your body, mind, and environs together are the undivided activity of the boat. The entire earth and the entire sky are both the undivided activity of the boat. Thus birth is nothing but you; you are nothing but birth (p. 115).

References: Varela, Francisco J., Evan Thompson, and Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience, revised edition (MIT Press, 2017); Nagel, Thomas, “What Is It Like to Be a Bat?,” The Philosophical Review 83 (1974): 435-50; Wittgenstein, Ludwig, Tractatus Logico-Philosophicus, English trans. (London, 1922); The Essential Dogen: Writings of the Great Zen Master, edited by Kazuaki Tanahashi and Peter Levitt (Shambhala, 2013). See also: joys and limitations of phenomenology; let’s go for a walk

on the pattern that progressives tend to be less happy

In the American Spectator, under the headline “Leftists Scramble to Explain Why Conservatives Are Happier Than Liberals,” Ann Hendershott quotes our research, which I’d previously reported on this blog:

In the Tufts study, the researchers conclude: “[I]f a liberal and a conservative have the same income, education, race, gender, age, marital status, and religious attendance, the conservative will feel more fortunate … liberals are people who—regardless of their actual social positions—rate their own circumstances relatively poorly, and that attitude drives their ideology and makes them unhappy or else reflects their unhappiness.”

The ellipsis in her quote replaced these words from our report: “A critic might say that …” We hedged because we can’t tell whether negative assessments of one’s own life affect one’s political views (a causal thesis). But Hendershott is a critic of liberals, and she adds a positive assessment of conservatives: “The reality is that true happiness, and a truly satisfying life, comes from caring for others—being physically present to them. Conservatives know that marriage and family life make one happy. … They know that true happiness comes from selflessness—living for others. And this is why they are happier.”

I am not sure about her causal claim. Progressives are less happy than conservatives regardless of marital status (and religious attendance). Using General Social Survey data, I ran a simple regression to predict divorce based on party identification, income, race, gender, and education. Before 2010, identifying as a Republican was associated with lower divorce when these other factors were considered. Since then, party identification has not been related to divorce. Indeed, Democrats and Republicans have had statistically indistinguishable divorce rates in the past decade. These statistics do not settle the issue, but I’d be surprised if progressives derive less happiness from their family involvement than conservatives do in the same social circumstances.
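
For readers who want the flavor of that exercise, here is a minimal sketch of the kind of model described above. It is not the actual analysis: I use a logistic regression as one reasonable choice for a binary outcome (the post does not specify the exact model), the variable names (divorced, partyid, realinc, race, sex, educ, year) are stand-ins for common GSS mnemonics, and the real codings, sample restrictions, and survey weights are omitted.

```python
# Hypothetical sketch: predict divorce from party ID and demographic controls,
# estimated separately before and after 2010.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("gss_extract.csv")  # hypothetical extract of GSS respondents

for label, subset in [("before 2010", df[df["year"] < 2010]),
                      ("2010 and later", df[df["year"] >= 2010])]:
    model = smf.logit(
        "divorced ~ C(partyid) + realinc + C(race) + C(sex) + educ",
        data=subset,
    ).fit(disp=False)
    print(label)
    print(model.params)  # sign and size of the party-ID coefficients
```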

In our model, progressives who are politically active are almost as happy as conservatives who are politically active. Like intensive involvement with one’s family, being an activist is also a way of “living for others.”

Our empirical contribution was the finding that it’s the people with actual depression who make the progressive sample less happy than the conservative sample. Individuals with depression are a minority of those on the left, yet depressed Americans cluster on the progressive side of the aisle. In that case, generalizations about why progressives are less happy than conservatives may be misleading–they don’t apply to people without depression. A better question would be why depression is disproportionately concentrated on the left even once race, gender, and income are controlled for. Musa al-Gharbi has collected previous studies that also find links between ideology and depression. Is this link causal?

Leaving the empirical issues aside, I would pose a very general question about the relationship between mood and knowledge of the world. A standard view might be that any mood is a bias, a source of error; objectivity requires countering all moods. The metaphor of “rose-colored glasses” suggests that optimistic people would have more accurate perceptions if they took off their lenses. Indeed, science since the Renaissance has sought to develop methods and protocols that decrease the impact of the subjectivity of the observer. If scientific methods and instruments work, then a scientist’s mood shouldn’t matter.

An alternative view, which Michel Foucault labeled “spirituality,” is the idea that in order to perceive things correctly, we must first bring ourselves into the best possible mental state. A mood is not a bias; it is the basis of perception. The question is whether our mood is ideal for perceiving well. For example, to perceive what another person is saying, we should probably attain a mood of receptiveness and equanimity. Likewise to perceive that nature is beautiful or that God is good requires an appropriate state of mind. These states can be labeled virtues rather than (mere) moods.

Positive states may not be the only ones that help us to perceive well. Heidegger, for example, sees moods such as anxiety and boredom as advantageous for revealing types of truth. As Michael Wheeler explains:

According to Heidegger’s analysis, I am always in some mood or other. Thus say I’m depressed, such that the world opens up (is disclosed) to me as a sombre and gloomy place. I might be able to shift myself out of that mood, but only to enter a different one, say euphoria or lethargy, a mood that will open up the world to me in a different way. As one might expect, Heidegger argues that moods are not inner subjective colourings laid over an objectively given world (which at root is why ‘state-of-mind’ is a potentially misleading translation of Befindlichkeit, given that this term names the underlying a priori condition for moods). For Heidegger, moods (and disposedness) are aspects of what it means to be in a world at all, not subjective additions to that in-ness.

This general view of moods would suggest that being unhappy is not necessarily bad for perceiving the state of our social world. It might reveal insights about current injustices. There is no objective mood, but we might be able to shift negative attitudes into positive ones, or vice-versa. The question is which moods to cultivate in ourselves when we assess society and politics.

One answer might be that we need all kinds of moods and a robust exchange among people who see things differently. Nietzsche is hardly a deliberative democrat, but this remark from his Genealogy of Morals supports the goal of bringing people with diverse moods into conversation: “The more emotions we can put into words about a thing, the more eyes, the more different eyes we can set over the same thing, the more complete is our ‘concept’ of that thing, our ‘objectivity’” (III:12, my trans.). We might then be glad that our society includes both liberals and conservatives who are prone to different states of mind.

I wouldn’t be fully satisfied with that conclusion because it leaves unaddressed the question for individuals. If I perceive the political world sadly, or even in a depressed state, should I strive to change that attitude? Or should I try to convince the positive people to be more concerned?

To answer that question may require a more nuanced sense of an individual’s mood than we can obtain by asking questions about overall happiness (as we did in the reported study). We need a bigger vocabulary, encompassing terms like righteous indignation, empathy for loved ones or for strangers, generalized compassion, sensitivity, caring, agape, technological optimism or pessimism, quietism, acceptance, responsibility, solidarity, nostalgia, utopianism, zeal, and more.

Al-Gharbi reports that the correlation between happiness and conservatism is “ubiquitous, not just in the contemporary United States but also historically (virtually as far back as the record goes) and in most other geographical contexts as well.” He has reviewed the literature, and I am sure he’s right. But I suspect that once we add nuance to the characterization of moods, we will find more change over time and more diversity within the large political camps.

For example, I recognize a current kind of progressive who is deeply pessimistic about economic and technological trends and their impact on the environment. I don’t believe that mood was pervasive on the left immediately after the Second World War, when progressives tended to believe that they could harness technology for beneficial social transformation–not only in Europe and North America but also in the countries that were then overthrowing colonialism. I also recognize a kind of deep cultural pessimism on the right today that seemed much less prevalent in the era of Reagan and Thatcher.

Perhaps measures of happiness have continued to correlate with self-reported ideology in the same way over time and space, but there’s a lot more going on. The content of the ideologies, the demographics of their supporters, the pressing issues of the day, and the most evident social trends have all shifted.

Here’s a possible conclusion: You can’t avoid viewing the world in some kind of mood. Some moods can be more virtuous than others, and it’s up to each of us to try to put ourselves in the best frame of mind.

Liberals and progressives might give some thought to why there is a strong statistical association between their ideologies and unhappiness. Does that mean that we are prone to certain specific vices, such as resignation, bootless anger, or discounting good news? Does it mean that our political messages are less persuasive than they could be? What might we learn from people who seem to be happier while they assess the social world? And what should we do to assist the substantial number of people on our side who are depressed?

Although those questions may be worth asking regularly, they do not imply that we should drop our general stance. People who are happy about the world should also ask themselves tough questions and consider the possibility that those who are unhappy–or even the depressed–might have insights.

See also: perhaps it’s not that conservatives are happier but that people with depression cluster on the left; philosophy of boredom (partly about Heidegger); Cuttings: A book about happiness; spirituality and science; and turning away from disagreement: the dialogue known as Alcibiades I.

turning away from disagreement: the dialogue known as Alcibiades I

The dialogue known as Plato’s Alcibiades I is now widely believed to have been written after Plato’s death, hence by someone else (Smith 2004). Perhaps that is why no one has ever told me to read it. But it is an indisputably ancient text, and it’s a valuable work of philosophy.

In several places, Michel Foucault discusses Alcibiades I as the earliest text that offers an explicit theory of what he calls “spirituality” (Foucault 1988, 23-28; Foucault 1981, 15-16). For Foucault, spirituality is the idea that reforming one’s soul is a necessary precondition for grasping truth. One way to summarize Alcibiades I might be with this thesis: You are not qualified to participate in politics until you have purified your soul enough that you can know what is just. That is an anti-democratic claim, although one that’s worth pondering.

At the beginning of the dialogue, we learn that Alcibiades will soon give a speech in the Athenian assembly about a matter of public policy. He is talented, rich, well-connected, and beautiful, and his fellow citizens are liable to do what he recommends. Athens is a rising power with influence over Greeks and non-Greeks in Europe and Asia; Alcibiades aspires to exercise his personal authority at a scale comparable to the Persian emperors Cyrus and Xerxes (105d). However, Alcibiades’ many lovers have deserted him, perhaps because he has behaved in a rather domineering fashion (104c). Only his first lover, Socrates, still cares for him and has sought him out—even as Alcibiades was looking for Socrates.

Alcibiades admits that you should not expect a person who is handsome and rich to give the best advice about technical matters, such as wrestling or writing; you should seek an expert (107a). Alcibiades plans to give a speech on public affairs because it is “something he knows better than [the other citizens] do” (106d). In other words, he claims to be an expert about politics, not just a celebrity.

Socrates’ main task is to dissuade Alcibiades from giving that speech by demonstrating that he actually lacks knowledge of justice. Alcibiades even fails to know that he doesn’t know what justice is, and that is the most contemptible form of ignorance (118b).

Part of Socrates’ proof consists of questions designed to reveal that Alcibiades lacks clear and consistent definitions of words like “virtue” and “good.” The younger man has no coherent theory of justice. This is typical of Socrates’ method in the early dialogues.

A more interesting passage begins when Socrates asks Alcibiades where he has learned about right and wrong. Alcibiades says he learned it from “the many”–the whole community–much as he learned to speak Greek (110e, 111a). Socrates demonstrates that it is fine to learn a language from the many, because they agree about the correct usages, they retain the same ideas over time, and they agree from city to city (111b). Not so for justice, which is the main topic of controversy among citizens and among cities and which even elicits contradictory responses from the same individuals. The fact that the Assembly is a place of disagreement shows that citizens lack knowledge of justice.

In the last part of the dialogue, Socrates urges Alcibiades to turn away from public affairs and rhetoric and instead make a study of himself. That is because a good city is led by the good, and the good are people who have the skill of knowing themselves, so that they can improve themselves. For Foucault, this is the beginning of the long tradition that holds that in order to have knowledge–in this case, knowledge of justice–one must first improve one’s soul.

Socrates ventures into metaphysics, offering an argument that the self is not the observable body but rather the soul, which ought to be Alcibiades’ only concern. This is also why Socrates is Alcibiades’ only true lover, for only Socrates has loved Alcibiades’ soul, when others were after a mere form of property, his body.

The dialogue between the two men has been a conversation between two souls (130d), not a sexual encounter or a public speech, which is an effort to bend others’ wills to one’s own ends. Indeed, Socrates maintains from the beginning of the dialogue that he will make no long speeches to Alcibiades (106b), but will rather permit Alcibiades to reveal himself in response to questions. Their dialogue is meant, I think, as a model of a loving relationship.

Just to state a very different view: I think there is rarely one certain answer to a political question, nor is there a decisive form of expertise about justice. However, good judgment (phronesis) is possible and is much better than bad judgment. Having a clear and structured theory of justice might be helpful for making good judgments, but it is often overrated. Fanatics also have clear theories. What you need is a wise assessment of the particular situation. For that purpose, it is often essential to hear several real people’s divergent perspectives on the same circumstances, because each individual is inevitably biased.

Socrates and Alcibiades say that friendship is agreement (126c) and the Assembly evidently lacks wisdom because it manifests disagreement. I say, in contrast, that disagreement is good because it addresses the inevitable limitations of any individual.

Fellow citizens may display civic friendship by disagreeing with each other in a constructive way. Friendship among fellow citizens is not exclusive or quasi-erotic, like the explicitly non-political relationship between Alcibiades and Socrates, but it is worthy. We need democracy because of the value of disagreement. If everyone agreed, democracy would be unnecessary. (Compare Aristotle’s Nic. Eth. 1155a3, 20.)

Despite my basic orientation against the thesis of Alcibiades I, I think its author makes two points that require attention. One is that citizens are prone to be influenced by celebrities–people, like Alcibiades, who are rich and well-connected and attractive. The other is that individuals need to work on their own characters in order to be the best possible participants in public life. But neither point should lead us to reject the value of discussing public matters with other people.

References: Smith, Nicholas D., “Did Plato Write the Alcibiades I?,” Apeiron 37.2 (2004): 93-108; Foucault, “Technologies of the Self: A Seminar with Michel Foucault,” edited by Luther H. Martin, Huck Gutman and Patrick H. Hutton (Tavistock Press, 1988); Foucault, The Hermeneutics of the Subject: Lectures at the Collège de France 1981-2, translated by Graham Burchell (Palgrave, 2005). I read the dialogue in the translation by David Horan, © 2021, version dated Jan 1 2023, but I translated the quoted phrases from the Greek edition of John Burnet (1903) via Project Perseus. See also: friendship and politics; the recurrent turn inward; Foucault’s spiritual exercises

philosophy of boredom

This article is in production and should appear soon: Levine P (2023) Boredom at the border of philosophy: conceptual and ethical issues. Frontiers in Sociology 8:1178053. doi: 10.3389/fsoc.2023.1178053.

(And yes, I anticipate and appreciate jokes about writing yet another boring article–this time, about boredom.)

Abstract:

Boredom is a topic in philosophy. Philosophers have offered close descriptions of the experience of boredom that should inform measurement and analysis of empirical results. Notable historical authors include Seneca, Martin Heidegger, and Theodor Adorno; current philosophers have also contributed to the literature. Philosophical accounts differ in significant ways, because each theory of boredom is embedded in a broader understanding of institutions, ethics, and social justice. Empirical research and interventions to combat boredom should be conscious of those frameworks. Philosophy can also inform responses to normative questions, such as whether and when boredom is bad and whether the solution to boredom should involve changing the institutions that are perceived as boring, the ways that these institutions present themselves, or individuals’ attitudes and choices.

An excerpt:

It is worth asking whether boredom is intrinsically undesirable or wrong, not merely linked to bad outcomes (or good ones, such as realizing that one’s current activity is meaningless). One reason to ask this question is existential: we should investigate how to live well as individuals. Are we obliged not to be bored? Another reason is more pragmatic. If being bored is wrong, we might look for effective ways to express that fact, which might influence people’s behaviors. For instance, children are often scolded for being bored. If being bored is not wrong, then we shouldn’t—and probably cannot—change behavior by telling people that it’s wrong to be bored. Relatedly, when is it a valid critique of an organization or institution to claim that it causes boredom or is boring? Might it be necessary and appropriate for some institutions … to be boring?

I have not done my own original work on this topic. I wrote this piece because I was asked to. I tried to review the literature, and a peer reviewer helped me improve that overview substantially.

I especially appreciate extensive and persuasive work by Andreas Elpidorou. He strikes me as an example of a positive trend in recent academic philosophy, also exemplified by Amia Srinivasan and others of their generation. These younger philosophers (whom I do not know personally) address important and thorny questions, such as whether and when it’s OK to be bored and whether one has a right to sex under various circumstances. They are deeply immersed in relevant social science. They also read widely in literature and philosophy and find insights in unexpected places. Srinivasan likes nineteenth-century utopian socialists and feminists; Elpidorou is an analytical philosopher who can also offer insightful close readings of Heidegger.

Maybe it was a bias on my part–or the result of being taught by specific professors–but I didn’t believe that these combinations were possible while I pursued my BA and doctorate in philosophy. In those days, analytical moral and political philosophers paid some attention to macroeconomic theory but otherwise tended not to notice current social science. Certainly, they didn’t address details of measurement and method, as Elpidorou does. Continental moral and political philosophers wrote about the past, but they understood history very abstractly, and their main sources were canonical classics. Most philosophers addressed either the design of overall political and economic systems or else individual dilemmas, such as whether to have an abortion (or which people to kill with an out-of-control trolley).

To me, important issues almost always combine complex and unresolved empirical questions with several–often conflicting–normative principles. Specific problems cannot be abstracted from other issues, both individual and social. Causes and consequences vary, depending on circumstances and chance; they don’t follow universal laws.

My interest in the empirical aspects of certain topics, such as civic education and campaign finance, gradually drew me from philosophy into political science. I am now a full professor of the latter discipline, also regularly involved with the American Political Science Association. However, my original training often reminds me that normative and conceptual issues are relevant and that positivist social science cannot stand alone.

Perhaps the main lesson you learn by studying philosophy is that it’s possible to offer rigorous answers to normative questions (such as whether an individual or an institution should change when the person is bored), and not merely to express opinions about these matters. I don’t have exciting answers of my own to specific questions about boredom, but I have reviewed current philosophers who do.

Learning to be a social scientist means not only gaining proficiency with the kinds of methods and techniques that can be described in textbooks, but also knowing how to pitch a proposed study so that it attracts funding, how to navigate a specific IRB, how to find collaborators and share work and credit with them, and what currently interests relevant specialists. These highly practical skills are essential but hard to learn in a classroom.

If I could convey advice to my 20-year-old self, I might suggest switching to political science in order to gain a more systematic and rigorous training in the empirical methods and practical know-how that I have learned–incompletely and slowly–during decades on the job. But if I were 20 now, I might stick with philosophy, seeing that it is again possible to combine normative analysis, empirical research, and insights from diverse historical sources to address a wide range of vital human problems.

See also: analytical moral philosophy as a way of life; six types of claim: descriptive, causal, conceptual, classificatory, interpretive, and normative; is all truth scientific truth? etc.

when does a narrower range of opinions reflect learning?

John Stuart Mill’s On Liberty is the classic argument that all views should be freely expressed–by people who sincerely hold them–because unfettered debate contributes to public reasoning and learning. For Mill, controversy is good. However, he acknowledges a complication:

The cessation, on one question after another, of serious controversy, is one of the necessary incidents of the consolidation of opinion; a consolidation as salutary in the case of true opinions, as it is dangerous and noxious when the opinions are erroneous (Mill 1859/2011, 81)

In other words, as people reason together, they may discard or marginalize some views, leaving a narrower range to be considered. Whether such narrowing is desirable depends on whether the range of views that remains is (to quote Mill) “true.” His invocation of truth–as opposed to the procedural value of free speech–creates some complications for Mill’s philosophical position. But the challenge he poses is highly relevant to our current debates about speech in academia.

I think one influential view is that discussion is mostly the expression of beliefs or opinions, and more of that is better. When the range of opinions in a particular context becomes narrow, this can indicate a lack of freedom and diversity. For instance, the liberal/progressive tilt in some reaches of academia might represent a lack of viewpoint diversity.

A different prevalent view is that inquiry is meant to resolve issues, and therefore, the existence of multiple opinions about the same topic indicates a deficit. It means that an intellectual problem has not yet been resolved. To be sure, the pursuit of knowledge is permanent–disagreement is always to be expected–but we should generally celebrate when any given thesis achieves consensus.

Relatedly, some people see college as something like a debate club or editorial page, in which the main activity is expressing diverse opinions. Others see it as more like a laboratory, which is mainly a place for applying rigorous methods to get answers. (Of course, it could be a bit of both, or something entirely different.)

In 2015, we organized simultaneous student discussions of the same issue–the causes of health disparities–at Kansas State University and Tufts University. The results are here. At Kansas State, students discussed–and disagreed about–whether structural issues like race and class and/or personal behavioral choices explain health disparities. At Tufts, students quickly rejected the behavioral explanations and spent their time on the structural ones. Our graphic representation of the discussions shows a broader conversation at K-State and what Mill would call a “consolidated” one at Tufts.

A complication is that Tufts students happened to hear a professional lecture about the structural causes of health disparities before they discussed the issue, and we didn’t mirror that experience at K-State. Some Tufts students explicitly cited this lecture when rejecting individual/behavioral explanations of health disparities in their discussion.

Here are two competing reactions to this experiment.

First, Kansas State students demonstrated more ideological diversity and had a better conversation than the one at Tufts because it was broader. They also explicitly considered a claim that is prominently made in public–that individuals are responsible for their own poor health. Debating that thesis would prepare them for public engagement, regardless of where they stand on the issue. The Tufts conversation, on the other hand, was constrained, possibly due to the excessive influence of professors who hold contentious views of their own. The Tufts classroom was in a “bubble.”

Alternatively, the Tufts students happened to have a better opportunity to learn than their K-State peers because they heard an expert share the current state of research, and they chose to reject certain views as erroneous. It’s not that they were better citizens or that they know more (in general) than their counterparts at KSU, but simply that their discussion of this topic was better informed. Insofar as the lecture on public health found a receptive audience in the Tufts classroom, it was because these students had previously absorbed valid lessons about structural inequality from other sources.

I am not sure how to adjudicate these interpretations without independently evaluating the thesis that health disparities are caused by structural factors. If that thesis is true, then the narrowing reflected at Tufts is “salutary.” If it is false, then the narrowing is “dangerous and noxious.”

I don’t think it’s satisfactory to say that we can never tell, because then we can never believe that anything is true. But it can be hard to be sure …

See also: modeling a political discussion; “Analyzing Political Opinions and Discussions as Networks of Ideas“; right and left on campus today; academic freedom for individuals and for groups; marginalizing odious views: a strategy; vaccination, masking, political polarization, and the authority of science etc.

the difference between human and artificial intelligence: relationships

A large language model (LLM) like ChatGPT works by identifying trends and patterns in huge bodies of text previously generated by human beings.

For instance, we are currently staying in Cornwall. If I ask ChatGPT what I should see around here, it suggests St Ives, Land’s End, St Michael’s Mount, and seven other highlights. It derives these ideas from frequent mentions in relevant texts. The phrases “Cornwall,” “recommended” (or synonyms thereof), “St Ives,” “charming,” “art scene,” and “cobbled streets” probably occur frequently in close proximity, which is why ChatGPT can use them to construct a sentence for my edification.
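
As a rough illustration of what “frequent mentions in close proximity” means, here is a toy sketch (vastly simpler than a real LLM, and not how ChatGPT is actually built): count which word tends to follow which in a small made-up corpus, then continue a prompt by repeatedly choosing the most frequent next word.

```python
# A toy next-word model built from co-occurrence counts in a made-up corpus.
from collections import Counter, defaultdict

corpus = (
    "cornwall is recommended for st ives . st ives has a charming art scene . "
    "visitors praise the cobbled streets of st ives in cornwall ."
).split()

# Count which word follows each word (a bigram table).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

# Continue a prompt by always taking the most frequent next word.
word, sentence = "st", ["st"]
for _ in range(4):
    word = following[word].most_common(1)[0][0]
    sentence.append(word)
print(" ".join(sentence))
```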

We human beings behave in a somewhat similar way. We also listen to or read a lot of human-generated text, look for trends and patterns in it, and repeat what we glean. But if that is what it means to think, then an LLM has clear advantages over us. A computer can scan much more language than we can and uses statistics rigorously. Our generalizations suffer from notorious biases. We are more likely to recall ideas we have seen most recently, those that are most upsetting, those that confirm our prior assumptions, etc. Therefore, we have been using artificial means to improve our statistical inferences ever since we started recording possessions and tallying them by category thousands of years ago.

But we also think in other ways. Specifically, as intensely social and judgmental primates, we frequently scan our environments for fellow human beings whom we can trust in specific domains. A lot of what we believe comes from what a relatively small number of trusted sources have simply told us.

In fact, to choose what to see in Cornwall, I looked at the recommendations in The Lonely Planet and Rough Guide. I have come to trust those sources over the years–not for every kind of guidance (they are not deeply scholarly), but for suggestions about what to see and how to get to those places. Indeed, both publications offer lists of Cornwall’s highlights that resemble ChatGPT’s.

How did these publishers obtain their knowledge? First, they hired individuals whom they trusted to write about specific places. These authors had relevant bodily experience. They knew what it feels like to walk along a cliff in Cornwall. That kind of knowledge is impossible for a computer. But these authors didn’t randomly walk around the county, recording their level of enjoyment and reporting the places with the highest scores. Even if they had done that, the sites they would have enjoyed most would have been the ones that they had previously learned to understand and value. They were qualified as authors because they had learned from other people: artists, writers, and local informants on the ground. Thus, by reading their lists of recommendations, I gain the benefit of a chain of interpersonal relationships: trusted individuals who have shared specific advice with other individuals, ending with the guidebook authors whom I have chosen to consult.

In our first two decades of life, we manage to learn enough that we can go from not being able to speak at all to writing books about Cornwall or helping to build LLMs. Notably, we do not accomplish all this learning by storing billions of words in our memories so that we can analyze the corpus for patterns. Rather, we have specific teachers, living or dead.

This method for learning and thinking has drawbacks. For instance, consider the world’s five biggest religions. You probably think that either four or five of them are wrong about some of their core beliefs, which means that you see many billions of human beings as misguided about some ideas that they would call very important. Explaining why they are wrong, from an outsider’s perspective, you might cite their mistaken faith in a few deeply trusted sources. In your opinion, they would be better off not trusting their scriptures, clergy, or people like parents who told them what to believe.

(Or perhaps you think that everyone sees the same truth in their own way. That’s a benign attitude and perhaps the best one to hold, but it’s incompatible with what billions of people think about the status of their own beliefs.)

Our tendency to believe select people may be an excellent characteristic, since the meaning of life is more about caring for specific other humans than obtaining accurate information. But we do benefit from knowing truths, and our reliance on fallible human sources is a source of error. However, LLMs can’t fully avoid that problem because they use text generated by people who have interests and commitments.

If I ask ChatGPT “Who is Jesus Christ?” I get a response that draws exclusively from normative Christianity but hedges it with this opening language: “Jesus Christ is a central figure in Christianity. He is believed to be … According to Christian belief. …” I suspect that ChatGPT’s answers about religious topics have been hard-coded to include this kind of disclaimer and to exclude skeptical views. Otherwise, a statistical analysis of text about Jesus might present the Christian view as true or else incorporate frequent critiques of Christianity, either of which would offend some readers.

In contrast, my query about Cornwall yields confident and unchallenged assessments, starting with this: “Cornwall is a beautiful region located in southwestern England, known for its stunning coastline, picturesque villages, and rich cultural heritage.” This result could be prefaced with a disclaimer, e.g., “According to many English people and Anglophiles who choose to write about the region, Cornwall is …” A ChatGPT result is always a summary of what a biased sample of people have thought, because choosing to write about something makes you unusual.

For human beings who want to learn the truth, having new tools that are especially good at scanning large bodies of text for statistical patterns should prove useful. (Those who benefit will probably include people who have selfish or even downright malicious goals.) But we have already learned a fantastic amount without LLMs. The secret of our success is that our brains have always been networked, even when we have lived in small groups of hunter-gatherers. We intentionally pass ideas to other people and are often pretty good at deciding whom to believe about what.

Moreover, we have invented incredibly complex and powerful techniques for improving how many brains are connected. Posing a question to someone you know is helpful, but attending a school, reading an assigned book, finding other books in the library, reading books translated from other languages, reading books that summarize previous books, reading those summaries on your phone–these and many other techniques dramatically extend our reach. Prices send signals about supply and demand; peer-review favors more reliable findings; judicial decisions allow precedents to accumulate; scientific instruments extend our senses. These are not natural phenomena; we have invented them.

Seen in that context, LLMs are the latest in a long line of inventions that help human beings share what they know with each other, both for better and for worse.

See also: the design choice to make ChatGPT sound like a human; artificial intelligence and problems of collective action; how intuitions relate to reasons: a social approach; the progress of science.

analytical moral philosophy as a way of life

(These thoughts are prompted by Stephen Mulhall’s review of David Edmonds’ book, Parfit: A Philosopher and His Mission to Save Morality, but I have not read that biography or ever made a serious study of Derek Parfit.)

The word “philosophy” is ancient and contested and has labeled many activities and ways of life. Socrates practiced philosophy when he went around asking critical questions about the basis of people’s beliefs. Marcus Aurelius practiced philosophy when he meditated daily on well-worn Stoic doctrines of which he had made a personal collection. The Analects of Confucius may be “a record of how a group of men gathered around a teacher with the power to elevate [and] created a culture in which goals of self-transformation were treated as collaborative projects. These people not only discussed the nature of self-cultivation but enacted it as a relational process in which they supported one another, reinforced their common goals, and served as checks on each other in case they went off the path, the dao” (David Wong).

To practice philosophy, you don’t need a degree (Parfit didn’t complete his), and you needn’t be hired and paid to be a philosopher. However, it’s a waste of the word to use it for activities that aren’t hard and serious.

Today, most actual moral philosophers are basically humanities educators. We teach undergraduates how to read, write, and discuss texts at a relatively high level. Most of us also become involved in administration, seeking and allocating resources for our programs, advocating for our discipline and institutions, and serving as mentors.

Those are not, however, the activities implied by the ideal of analytic moral philosophy. In that context, being a “philosopher” means making arguments in print or oral presentations. A philosophical argument is credited to a specific individual (or, rarely, a small team of co-authors). It must be original: no points for restating what has already been said. It should be general. Philosophy does not encompass exercises of practical reasoning (deciding what to do about a thorny problem). Instead, it requires justifying claims about abstract nouns, like “justice,” “happiness,” or “freedom.” And an argument should take into consideration all the relevant previous points published by philosophers in peer-reviewed venues. The resulting text or lecture is primarily meant for philosophers and students of philosophy, although it may reach other audiences as well.

Derek Parfit held a perfect job for this purpose. As a fellow of All Souls College, he had hardly any responsibilities other than to write philosophical arguments and was entitled to his position until his mandatory retirement. He did not have to obtain support or resources for his work. He did not have to deliberate with other people and then decide what to say collectively. Nor did he have to listen to undergraduates and laypeople express their opinions about philosophical issues. (Maybe he did listen to them–I would have to read the biography to find out–but I know that he was not obliged to do so. He could choose to interact only with highly prestigious peers.)

Very few other people hold similar roles: the permanent faculty of the Institute for Advanced Study, the professors of the Collège de France, and a few others. Such opportunities could be expanded. In fact, in a robust social welfare state, anyone can opt not to hold a job and can instead read and write philosophy, although whether others will publish or read their work is a different story. But whether this form of life is worthy of admiration and social support is a good question–and one that Parfit was not obliged to address. He certainly did not have to defend his role in a way that was effective, persuading a real audience. His fellowship was endowed.

Mulhall argues that Parfit’s way of living a philosophical life biased him toward certain views of moral problems. Parfit’s thought experiments “strongly suggest that morality is solely or essentially a matter of evaluating the outcomes of individual actions–as opposed to, say, critiquing the social structures that deeply shape the options between which individuals find themselves having to choose. … In other words, although Parfit’s favoured method for pursuing and refining ethical thinking presents itself as open to all whatever their ethical stance, it actually incorporates a subtle but pervasive bias against approaches to ethics that don’t focus exclusively or primarily on the outcomes of individual actions.”

Another way to put this point is that power, persuasion, compromise, and strategy are absent in Parfit’s thought, which is instead a record of what one free individual believed about what other free individuals should do.

I am quite pluralistic and inclined to be glad that Parfit lived the life he did, even as most other people–including most other moral philosophers–live and think in other ways. Even if Parfit was biased (due to his circumstances, his chosen methods and influences, and his personal proclivities) in favor of certain kinds of questions, we can learn from his work.

But I would mention other ways of deeply thinking about moral matters that are also worthy and that may yield different kinds of insights.

You can think on your own about concrete problems rather than highly abstract ones. Typically the main difficulty is not defining the relevant categories, such as freedom or happiness, but rather determining what is going on, what various people want, and what will happen if they do various things.

You can introduce ethical and conceptual considerations to elaborate empirical discussions of important issues.

You can deliberate with other people about real decisions, trying to persuade your peers, hearing what they say, and deciding whether to remain loyal to the group or to exit from it if you disagree with its main direction.

You can help to build communities and institutions of various kinds that enable their members to think and decide together over time.

You can identify a general and relatively vague goal and then develop arguments that might persuade people to move in that direction.

You can strive to practice the wisdom contained in clichés: ideas that are unoriginal yet often repeated because they are valid. You can try to build better habits alone or in a group of people who hold each other accountable.

You can tentatively derive generalizations from each of these activities, whether or not you choose to publish them.

Again, as a pluralist, I do not want to suppress or marginalize the style that Parfit exemplified. I would prefer to learn from his work. But my judgment is that we have much more to learn from the other approaches if our goal is to improve the world. That is because the hard question is usually not “How should things be?” but rather “What should we do?”

See also: Cuttings: A book about happiness; the sociology of the analytic/continental divide in philosophy; does doubting the existence of the self tame the will?

defining state, nation, regime, government

As a political philosopher by training, and now political scientist by appointment, I have long been privately embarrassed that I am not sure how to define “state,” “government,” “regime,” and “nation.” On reflection, these words are used differently in various academic contexts. To make things more complicated, the discussion is international, and we are often dealing with translations of words that don’t quite match up across languages.

For instance, probably the most famous definition of “the state” is from Max Weber’s Politics as Vocation (1919). He writes:

Staat ist diejenige menschliche Gemeinschaft, welche innerhalb eines bestimmten Gebietes – dies: das „Gebiet“, gehört zum Merkmal – das Monopol legitimer physischer Gewaltsamkeit für sich (mit Erfolg) beansprucht.

[The] state is the sole human community that, within a certain territory–thus: territory is intrinsic to the concept–claims a monopoly of legitimate physical violence for itself (successfully).

Everyone translates the keyword here as “state,” not “government.” But this is a good example of how words do not precisely match across languages. The English word “government” typically means the apparatus that governs a society. The German word commonly translated as “government” (“die Regierung”) means an administration, such as “Die Regierung von Joe Biden” or a Tory Government in the UK. (In fact, later in the same essay, Weber uses the word Regierung that way while discussing the “typical figure of the ‘grand vizier’” in the Middle East.) Since “government” has a wider range of meanings in English, it wouldn’t be wrong to use it to translate Weber’s Staat.

Another complication is Weber’s use of the word Gemeinschaft inside his definition of “the State.” This is a word with such specific associations that it is occasionally used in English in place of our vaguer word “community.” A population is not a Gemeinschaft, but a tight association can be. Thus to translate Weber’s phrase as “A state is a community …” is misleading.

For Americans, a “state” naturally means one of our fifty subnational units, but in Germany those are Länder (cognate with “lands”). The word “state” derives from the Latin status, which is as “vague a word as ratio, res, causa” (Paul Shorey, 1910) but can sometimes mean a constitution or system of government. Cognates of that Latin word end up as L’État, el Estado and similar terms that have a range of meanings, including the subnational units of Mexico and Brazil. In 1927, Mussolini said, “Tutto nello Stato, niente al di fuori dello Stato, nulla contro lo Stato” (“Everything in the State, nothing outside the State, nothing against the State”). I think he basically meant that he was in charge of everything he could get his hands on. Louis XIV is supposed to have said “L’État c’est moi,” implying that he was the government (or the nation?), but that phrase may be apocryphal; an early documented use of L’État to mean the national government dates to 1799. In both cases, the word’s ambiguity is probably one reason it was chosen.

“Regime” can have a negative connotation in English, but political theorists typically use it to mean any government plus closely related entities such as the press, the parties, and prevailing political norms and traditions. Regimes can be legitimate, even excellent.

If these words are used inconsistently in different contexts, then we can define them for ourselves, as long as we are clear about our usage. I would tend to use the words as follows:

  • A government: either the legislative, executive, and judicial authority of any entity that wields significant autonomous political power (whether it’s territorial or not), or else a specific group that controls that authority for a time. By this definition, a municipality, the European Union, and perhaps even the World Bank can each count as a government.

(A definitional challenge is deciding what counts as “political” power. A company, a church, a college, an insurgent army, or a criminal syndicate can wield power and can use some combination of legislative, executive, and/or judicial processes to make its own decisions. Think of canon law in the Catholic Church or an HR appeals process inside a corporation. Weber would say that the fundamental question is whether an entity’s power depends on its own monopolistic use of legitimate violence. For instance, kidnapping is a violent way to extract money, but it does not pretend to be legitimate and it does not monopolize violence. Taxation is a political power because not paying your taxes can ultimately land you, against your will, in a prison that presents itself as an instrument of justice. Not paying a private bill can also land you in jail, but that’s because the government chooses to enforce contracts. Your creditor is not a political entity; the police force is. However, when relationships between a government and private entities are close, or when legitimacy is controversial, or when–as is typical–governments overlap, these distinctions can be hard to maintain and defend.)

  • A state: a government plus the entities that it directly controls, such as the military, police, or public schools. For example, it seems most natural to say that a US government controls the public schools, but not that a given school is part of the government. Instead, it is part of the state. Likewise, an army can be in tension with the government, yet both are components of the state.
  • A regime: the state plus all non-state entities that are closely related to it, e.g., political parties, the political media, and sometimes the national clergy or powerful industries. We can also talk about abstract characteristics, such as political culture and values, as components of a regime. A single state may change its regime, abruptly or gradually.
  • A country: a territory (not necessarily contiguous) that has one sovereign state. It may have smaller components that also count as governments but not as countries.
  • A nation: a category of people who are claimed (by the person who is using this word) to deserve a single state that reflects their common identity and interests. Individuals can be assigned to different nations by different speakers.
  • A nation-state: a country with a single functioning and autonomous state whose citizens widely see themselves as constituting a single nation. Some countries are not nations, and vice-versa. People may disagree about whether a given country is a nation-state, depending on which people they perceive to form a nation.

See also: defining capitalism; avoiding a sharp distinction between the state and the private sphere; the regime that may be crumbling; what republic means, revisited; etc.

does doubting the existence of the self tame the will?

I like the following argument, versions of which can be found in many traditions from different parts of the world:

  1. A cause of many kinds of suffering is the will (when it is somehow excessive or misplaced).
  2. Gaining something that you desire does not reduce your suffering; you simply will something else.
  3. However, one’s will can be tamed.
  4. Generally, the best way to manage the will is to focus one’s mind on other people instead of oneself. Thus,
  5. Being ethical reduces one’s suffering.

In some traditions, notably in major strands of Buddhism and in Pyrrhonism, two additional points are made:

  6. The self does not actually exist. Therefore,
  7. It is irrational to will things for oneself.

Point #7 is supposed to provide both a logical and a psychological basis for #4. By realizing that I do not really exist, I reduce my attachment to my (illusory) self and make more space to care about others, which, in turn, makes me happier.

Point #6 is perfectly respectable. Plenty of philosophers (and others) who have considered the problem of personal identity have concluded that an ambitious form of the self does not really exist. (For instance, David Hume.)

But if the self doesn’t exist, does it really follow that we should pay more attention to other people? We might just as well reason as follows:

  6. The self does not really exist. Therefore,
  7a. Other people do not really exist as selves. Therefore,
  8a. It is irrational to be concerned about them.

Or

  6. The self does not really exist. Therefore,
  7b. It is impossible for me to change my character in any lasting way. Therefore,
  8b. There is no point in trying to make myself more ethical.

Striving to be a better or happier person is not a sound reason for doubting the existence of the self. This doubt may do more harm than good. If there actually is no self, that is a good reason not to believe in one. But then we are obliged to incorporate skepticism about personal identity into a healthy overall view. The best way might be some version of this:

  6. The self does not really exist. Nevertheless,
  7c. I would be wise to treat other people as if they were infinitely precious, durable, unique, and persistent things (selves).

I think it is worth getting metaphysics right, to the best of our ability. For example, it is worth trying to reason about what kind of a thing (if anything) a self is. However, I don’t believe that metaphysical beliefs entail ways of life in a straightforward way, with monotonic logic.

Any given metaphysical view is usually compatible with many different ways of being. It may even strongly encourage several different forms of life, depending on how a person absorbs the view. Thus I am not surprised that some people (notably, thoughtful Buddhists) have gained compassion and equanimity by adopting the doctrine of no-self, even though the same doctrine could encourage selfishness in others, and some people may become more compassionate by believing in the existence of durable selves. In fact, many have believed in the following argument:

  1. Each person (or sentient being) has a unique, durable, essential being.
  2. I am but one out of billions of these beings. Therefore,
  3. It is irrational to will things for myself.

The relationship between an abstract idea and a way of being is mediated by “culture,” meaning all our other relevant beliefs, previous examples, stories, and role-models. We cannot assess the moral implications of an idea without understanding the culture in which it is used. For instance, the doctrine of no-self will have different consequences in a Tibetan monastery versus a Silicon Valley office park.

We cannot simply adopt or join a new culture. That would require shedding all our other experiences and beliefs, which is impossible. Therefore, we are often in the position of having to evaluate a specific idea as if it were a universal or culturally neutral proposition that we could adopt all by itself. For instance, that is what we do when we read Hume and Kant (or Nagarjuna) on the question of personal identity and try to decide what to think about it. This seems a respectable activity; I only doubt that, on its own, it will make us either better or worse people.

See also: notes on religion and cultural appropriation: the case of US Buddhism; Buddhism as philosophy; how to think about the self (Buddhist and Kantian perspectives); individuals in cultures: the concept of an idiodictuon. And see “The Philosophic Buddha” by Kieran Setiya, which prompted these thoughts.