why social scientists should pay attention to metaphysics

Yesterday, I introduced the substance of Brian Epstein’s book The Ant Trap. Epstein analyzes the metaphysics of social phenomena, such as groups. Here I want to argue that social scientists should be more attuned to metaphysical issues in general.

In social science, we think naturally of certain relationships, such as correlation and causation, and of certain kinds of objects, such as individuals and groups. But other relationships are present although less explicit in our work. For instance, the members of the US Congress do not cause the Congress; they compose it. Composition is a relationship that is named (but rarely explored) in standard social science.

One can ask, more generally, what kinds of relationships exist and what kinds of things are related to each other. Constitution and causality are two different relationships. Groups, moments in time, and ethical qualities are three different kinds of things. These types and relationships can go together in many ways. We can ask about their logic or their epistemology, but when we ask specifically, “What kinds of things are there and how do they go together?” we are putting the question in terms of metaphysics.

Social scientists should be concerned with metaphysics for two big reasons. First, in our actual writing and modeling, we often use some metaphysical terms (e.g., object, composition, causation), but only a few of those get explicit critical attention. In my experience, most of the meta-discussion is about what constitutes causality and how you can prove it—but there are equally important questions about the other relationships used in social science.

Second, professional philosophers have developed a whole set of other types and relationships that are typically not mentioned in social science but that can be powerful analytical tools if one is aware of them: supervenience, grounding, and anchoring being three that play important roles in The Ant Trap.

Since metaphysics is a subfield of philosophy, and since philosophers are probably outnumbered fifty to one by social and behavioral scientists, it’s easy for the latter to overlook metaphysics. In fact, I suspect that the word “metaphysics” (as modern academic philosophers use it) is not well known. If you Google “metaphysical relationships,” you will see New Age dating tips. But all scientific programs involve metaphysics, and it is important to understand that discourse–not only to be more critical of the science but also to develop more powerful models.

is social science too anthropocentric?

Consider these statements: “A group just is the people who make it up.” “If a group can be said to have intentions at all, its intentions must somehow be the intentions of its members.” Or: “When a convention arises, such as the convention that a dollar has value, it must exist because the people who use dollars have imposed some meaning on material reality.”

In The Ant Trap: Rebuilding the Foundations of the Social Sciences, Brian Epstein criticizes an assumption that is implicit in these statements (which are mine, not his): that social phenomena can be fully explained by talking about people. It’s obvious that non-human phenomena–from evolution to climate change–influence or shape human beings. But the thesis that people fully determine social phenomena is worth critical scrutiny.

Epstein’s book is methodical and not subject to a short paraphrase, but some examples may give a flavor of the argument. For instance, is Starbucks composed of the people who work for it? Clearly not, because the coffee beans and water, the physical buildings, the company’s stock value, the customers and vendors, the rival coffee shops in the same markets, and many other factors make it the company that we know, just as much as its own people do. Indeed, its personnel could all turn over through an orderly process and it would still be Starbucks.

Likewise, if the Supreme Court intended to overturn the ban on corporate campaign contributions, was its intention a function of the preferences of the nine individual justices? No, because in order for them to intend to overturn the ban, they had to be legitimate Supreme Court justices within a legal system that presented them with this decision at a given moment. I could form an opinion of the Citizens United case, but I could not “intend” to rule for the government in that case, because I am not a justice. And what makes someone a justice at the moment when the Citizens United case comes before the court is a whole series of decisions by people not on the court, going back to the founding era.

In general, Epstein writes, “facts about a group are not determined just by facts about its members.” And it’s not just other people who get involved. Non-human phenomena can be implicated in complicated ways. For instance, the Supreme Court is in session on certain days, and on all other days, a “vote” by a justice would not really be a vote. What makes us say that a certain day has arrived is the movement of the earth around the sun. So the motion of a heavenly body is implicated in the existence and the intentions of the Supreme Court. That is an apt example, because Epstein calls for a Copernican Revolution in which we stop seeing the social world as “anthropocentric.”

Note that we are talking here about grounding relations, not causation. Public opinion may influence the composition of the Supreme Court and its decisions. The movement of the earth does not influence or affect the Court, and you wouldn’t model it that way (with the earth as an independent variable). Rather, the court is in session on certain dates, and the calendar is grounded in facts about the solar system. Likewise, a president can influence the court, and you could model the president’s ideology as an independent variable. But the composition of the court is grounded in decisions by presidents and senates in a more fundamental way than causation. To be a justice is (in part) to have been nominated and confirmed.

When people criticize anthropocentrism, usually they mean to take human beings down a peg. But in this case, the critique is a testament to our creativity and agency. Human beings can create groups in limitless ways. We can intentionally ground facts about groups in circumstances beyond the control of their members, or indeed in facts that are under no human’s control (like the motion of the earth). It can be wise to limit the power of group members in just these ways. Epstein writes, “Our ability to anchor social facts to have nearly arbitrary grounds is the very thing that makes the social world so flexible and powerful. Why would we deprive ourselves of that flexibility?” But the same flexibility that empowers the human beings who design and operate groups also creates headaches for the analysts who try to model their work. “Compared to the social sciences, the ontology of natural science is a walk in the park.”

The Ant Trap does not offer one model as an alternative to the standard anthropocentric ones, because social phenomena are diverse as well as complex. But if we narrow the focus a bit from the whole social world and look at groups, they tend to require (in Epstein’s analysis) a two-level model. Various facts about each group are grounded in other facts. For instance, the fact that the Supreme Court is in session is grounded in facts about the calendar (as well as many other kinds of facts). In turn, these grounding relationships are anchored in different facts–for instance, facts about how the US Constitution organized the judicial system.

My day job involves very conventional social science. We study various groups, from Millennials and voters to Members of Congress. After reading The Ant Trap, I won’t think of groups in the same way again. I am not yet sure what specific methodological implications follow, but that seems an important question to pursue.

See also Brian Epstein’s TEDx Stanford talk, which captures some of the book’s argument.


on the proper use of moral clichés

In Joseph Roth’s finely wrought novel The Radetzky March (1932), a simple and good-hearted peasant orderly tries to make a huge financial sacrifice to help his boss, Lieutenant Trotta. The feckless Trotta is badly in debt, and the orderly, Onufrij, has buried some savings under a willow tree. Onufrij has already appeared in the novel many times by this point, but always as a cipher. Now suddenly we see things from his perspective as he walks home (fearfully and yet excitedly), tries to remember which one is his left hand so that he can identify the location where he buried his money, digs it up, and uses it as collateral to obtain a loan from the local Jewish lender.

Apparently, cheap novels that were popular among Austro-Hungarian officers in Trotta’s day “teemed with poignant orderlies, peasant boys with hearts of gold.” Because his actual servant is acting like a literary cliché, Trotta disbelieves and callously rejects the help. He tells Onufrij that it is forbidden to accept a loan from a subordinate and dismisses him curtly. Trotta “had no literary taste, and whenever he heard the word literature he could think of nothing but Theodor Körner’s drama Zriny and that was all, but he had always felt a dull resentment toward the melancholy gentleness of those booklets and their golden characters.” Thus he understands the offer from Onufrij as a fake episode from an unbelievable book. Trotta “wasn’t experienced enough to know that uncouth peasant boys with noble hearts exist in real life and that a lot of truths about the living world are recorded in bad books; they are just badly written.”

Trotta can be compared to two other characters who have problematic relationships with clichés. In Dante’s Divine Comedy, Francesca da Rimini utters a speech that consists almost entirely of slightly garbled quotations from popular medieval romantic literature. She justifies her actions with these clichés and avoids any mention of her own sin. It becomes evident that she never really loved her lover, Paolo, but was only in love with the cliché of being a doomed adulteress. Like The Radetzky March, the Inferno is a beautiful and original construction in which clichés have a deliberate place.

Flaubert’s Madame Bovary (living more than five centuries after Francesca) also quotes incessantly from popular romantic literature and thereby avoids having to see things from the perspective of her victims, notably her husband and children. Flaubert italicizes her clichés to draw attention to them. He uses his own brilliant and acidly original prose to describe a person who can only think in clichés.

Even though Francesca and Emma Bovary quote statements that are literally true, they rely on stock phrases instead of seriously thinking for themselves. They love what they would call “literature,” but they reduce it to a string of clichés.

Trotta is in some ways their opposite and in some ways similar. He despises “literature” but knows some clichés that popular books contain and uses them to avoid reality. His method of avoidance is to doubt anything that is a literary cliché, whereas Emma Bovary and Francesca da Rimini believe them all.

Although Dante and Flaubert were making different points from Roth about clichés, I think both perspectives have some value. Certain cultural movements—notably, the Romanticism of ca. 1800 and the High Modernism of ca. 1900—have prized originality and have scorned cliché as one of the worst aesthetic failings. Indeed, they have defined “literature” as writing free of cliché at the level of style, plot and character, or theme. These movements have enriched our store of ideals, but they have been overly dismissive of the wisdom embodied in tradition. If you respect the accumulated experience of people who have come before you, you may reasonably assume that many truths are clichés and that many clichés are true. To scorn cliché can mean treating one’s own aesthetic originality as more important than the pursuit of moral truth.

Thus I would not try to delete statements from my list of moral beliefs because they have been made many times before or have been expressed in a simple and unoriginal fashion. I would even be inclined to consider our culture’s store of moral clichés as a set of likely truths. Roth was right: “a lot of truths about the living world are … just badly written.” Situations repeat, and what needs to be said has often been said many times before.

But the risk is that a stock phrase can prevent a person from grasping the concrete reality of the situation at hand. I’d propose two remedies for that problem. First, it is worth recognizing which of our moral commitments, even if they are fully persuasive and valid, are also clichés in the sense that they are standardized and prefabricated phrases. Those commitments deserve special scrutiny.

Second, it is worth attending to the ways that all of our various moral commitments fit together. Each cliché may be true, but when it is juxtaposed with other general statements, it always turns out to be only partly true. Life is full of tradeoffs and tensions. Even if the components of my overall worldview are mostly clichés, the whole structure of moral ideas that emerges from my best thinking about my own circumstances is original–just because I am my own person.

Sources: Joseph Roth, The Radetzky March, translated by Joachim Neugroschel, Part II, chapter 17; my article “Why Dante Damned Francesca da Rimini,” Philosophy & Literature, vol. 23 (October 1999), pp. 334-350. See also my posts on the moral peril of cliché and what to do about it, and on the moral dangers of cliché.


latest thoughts on animal rights and welfare

When we stand to affect another person or animal, at least four moral considerations seem potentially relevant:

  1. The creature’s suffering or distress versus its happiness, contentment, or satisfaction.
  2. The creature’s sense of meaning, purpose, and agency.
  3. The creature’s ability to live in its natural way or to be itself. And …
  4. The impact on other creatures that know and care about the creature that we are directly affecting.

The first consideration is relevant to all sentient beings in proportion to their capacity for sensation and experience. Perhaps a clam cannot suffer appreciably. But there is no reason to think that we human beings are more sensitive than all other creatures–or at least, not much more sensitive. And since the first consideration applies to most other animals, it is wrong to reduce their happiness or increase their suffering.

A more difficult question is whether a sudden and painless death reduces happiness. On one account, a creature’s net happiness accumulates like a running tally over the life-course. In that case, a painless death freezes the score permanently in place, which can leave the total higher than it would have been if the creature’s future would have been less happy than its past. A different view is that a creature has no happiness or suffering at all after death, and therefore death has no impact on happiness. In Epicurus’ phrase, “Death is nothing to us.” I am dissatisfied with both views but not sure that I have a better proposal. Certainly, happiness has a temporal aspect, because suffering on one day lingers on the next. But I struggle to say what impact ending a life has on the creature’s happiness.

The second consideration depends on an ability to make meaning or sense of one’s life and to make consequential choices according to one’s sense of purpose: in a word, “agency.” I am not committed to the premise that agency is a capacity of human beings alone, but we certainly have a very advanced version of it. Note that this capacity is temporal: we make meaning by putting our present state and our current choices in a longer narrative that includes a past and a future. One reason that killing a human being is badly wrong is that it ends the narrative that the person is constructing and thereby destroys her agency. I don’t think the same argument applies to the instantaneous and painless termination of the life of a chicken.

The third consideration–naturalness–seems to apply most to creatures that are not human beings. If possible, a bear should be left alone to live as a bear. Our family dog would not be better off if he were left in the woods to fend for himself like a coyote, but he should be able to live the life natural to a domesticated dog, with activities like walks and cuddles. And as for a cow–I am inclined to think that its natural state must include time grazing in a field and nursing a calf. I am not sure that suddenly being slaughtered violates its ability to live a natural life. That means that factory farming is unacceptable but family farming may be consistent with respecting the natural states of farm animals.

As for human beings, we are also natural creatures in the sense that we are an evolved species with many innate limitations and tendencies. But we are capable of reflecting on the whole range of our inherited traits and of distinguishing the better from the worse. We have a natural proclivity to altruism but also to aggression, even to rape and murder. For us to live according to nature is not nearly good enough. We build institutions and norms to change our inherited natures for the better. That forfeits a right to live naturally and makes the third consideration irrelevant to us.

The fourth consideration applies to any animal that cares for another. In the old Disney cartoon, the death of Bambi’s mother deeply hurt Bambi. Although the cartoon anthropomorphized its animal characters, Bambi’s emotional reaction seems plausible enough for a deer. Still, people may be unique in that our relationships with other people are mediated by language and other forms of communication. We can suffer–or have our sense of purpose and agency frustrated–by learning of the death of someone we have never even met. In contrast, if Bambi had not directly experienced his mother’s death, he wouldn’t have suffered from it.

Freedom is certainly a moral consideration as well, but for human beings, it has a lot to do with #2 (purpose and agency), whereas for animals, it is related to #3 (naturalness). For a person, to be free is to be able to live according to her own sense of purpose. But a bear is free if it’s left alone to be a bear.

What all this means: Intentionally causing the suffering of another creature is always wrong, albeit a wrong that should be balanced against other considerations. Reducing the ability of a non-human creature to live naturally is also a wrong, at least ceteris paribus. But that is a complex question when it comes to farm animals. Killing a person is a special evil because it not only causes suffering but also ruins the purpose and agency that come from that person’s ability to plan and foresee the future. Furthermore, the impact on other human beings of killing a given person is particularly deep and widespread. This is one reason that it is badly wrong to kill even a human being who does not have much agency, such as a neonate. Killing an advanced animal painlessly and suddenly (beyond the sight of its kin) does not necessarily violate considerations #3 or #4. It may violate #1, depending on how we understand the temporal dimension of happiness and suffering. And it may violate #2, but only to the degree that other advanced species have capacities for long-term planning.

See also my evolving thoughts on animal rights and welfare.


right and true are deeply connected

Beliefs about “is” and “ought” are so deeply interrelated that it is often better to think of truth and rightness as two dimensions of the same thought than as separable concepts.* That means that it is almost always important to analyze whether a moral belief you hold is true (as opposed to false or uncertain) and also whether any factual claim you make is good (as opposed to bad or unethical).

Consider these examples:

1. “It is wrong to discriminate on the basis of race.” That sounds like a pure value judgment. It may be an excellent or even an obligatory value judgment, but it doesn’t sound like a truth, like “2 plus 3 equals 5,” or “Lincoln issued the Emancipation Proclamation.”

However, someone who believes this statement and takes it seriously almost certainly holds a set of other beliefs that are factual. For instance: There has been, and continues to be, a lot of discrimination on the basis of race. Racial discrimination has caused (and seems likely to continue to cause) suffering, injustice, and pain. And people of different races are not actually different in ways that should matter. These statements are true and based on information.

So now the claim is starting to look quite factual after all. It’s starting to sound like a testable hypothesis rather than a matter of moral judgment. But the stance against racial discrimination is also inextricably moral, at several levels.

First, it isn’t a logical or scientific fact that it is wrong to cause suffering, injustice, or pain. When animals cause pain, we don’t blame them morally. Implicit in the idea that we should not discriminate is some account of how we should behave toward other human beings.

Second, how do we know that racial discrimination has been common? People have experienced it personally and have taken the trouble to share their own experiences with others who have chosen to listen to them; or they have collected evidence of other people’s suffering from libraries and archives. In other words, people have accumulated and shared an understanding of racial oppression in the United States. That process takes intentional effort. Whether you are a professional historian who uncovers original documents about slavery or a parent who shares family memories with your toddlers, you are creating knowledge because of your moral commitments.

So now the statement “It is wrong to discriminate on the basis of race” is again beginning to seem highly moral and not factual at all. It is built on moral concepts like “injustice,” and an understanding of our history and present circumstances that we have created because of our values. But again, we cannot ignore the factual element. Yes, people create an understanding of history. But they cannot make just anything up. Racial discrimination has been all too real. That is why it appears in books of history and not just in fiction. We make the books of history, but it is “history” because it is real.

To add another layer: race itself is not a scientific concept. No biologist from another planet would classify human beings into races. But race is a social construct of enormous power. As such, it has really existed, and its existence has mostly been bad, although certainly some have made good of it through their effort and their art.

In short, a statement against racism, like very many other statements, combines evaluations and facts in ways that are impossible fully to disentangle. And so one question that you can ask about a statement like this is: “Is it true?”

2. “Every child has a right to a good education.” The invocation of a right in this sentence makes it a moral claim. Rights cannot be detected or vindicated by scientific methods. To say that someone has a right is to assert what is just, fair, or good.

At the same time, education is something that we observe and experience. Although education occurs in many settings (beginning with the home), usually a right to education is interpreted as a right to free or affordable schooling of a certain quality. Schools and colleges were founded at particular points in human history and have evolved and diversified until they reflect a range of purposes, as well as a wide range of quality. It only makes sense to favor a right to education (translated as a right to a certain quality and extent of schooling) if one observes that schools are, or could be, good for children.

That is partly an empirical claim, informed by evidence about their actual impact. But it is not a purely empirical assertion, because what is good for children is a moral question. (Should children become free and autonomous? Obedient and productive? Smart? Happy?)

Moral judgment enters the analysis in another way as well. To say, “Every child has a right to a good education” does not imply that a satisfactory education is what we actually offer in schools today. We can develop a vision of better schools in the future. But that vision should be vivid and detailed, not just a rote invocation of a better time. And it should be a plausible vision, given what we know (or think we know) about how human beings learn, about how institutions function, about what laws can achieve, and about what money can buy.

Once again, the factual and the moral interpenetrate deeply, so that teasing one strand from the other does not seem productive, even if it were possible.

3. “A good and omnipotent God exists”: This is a claim about how the universe actually is. It is phrased so that it is literally true or false, just like the claim that 2 plus 3 equals 5 or that the earth is round. But God is different. God could exist and yet be completely immune from being empirically proven by living human beings during the regular course of history. (Only souls after death or at the end of time would have direct empirical evidence of God.)

I think people are entitled to believe in God if that genuinely feels true to them. I would not advocate deleting that belief from one’s set of ideas because it isn’t a scientific hypothesis, subject to being tested. But you can ask whether your own religious beliefs feel secure and sincere. The question is whether you really believe in God. That is a different question from whether you wish that God exists or whether you belong to a community that traditionally believes in God. Nothing is true just because it would be better if it were true or just because people have believed it.

Again, this is not an argument against the existence of God. It is merely a reminder that one is responsible for reflecting on the truth of one’s religious beliefs, quite apart from their consequences. God belongs in your store of beliefs if subjective experience or reason leads you to believe that there is a God. If not, perhaps that idea should go.

4. “Everything happens for a good reason.” That statement could be true if God or Providence or some other supernatural force makes everything come out well, either on earth or in heaven. In other words, this statement could be true if it is connected to a religious claim that is true. But the statement seems flatly false if it is not sustained in that way. UNICEF estimates that 21 children under the age of five die every minute because of preventable causes, most of which could be removed with modest amounts of money. If those children die for a good reason, I fail to see it. To believe that everything happens for the best without citing a religious justification seems to me a classic example of bad faith. It is an error, a falsehood, motivated by the hope of evading upsetting thoughts. It is an example of the kind of belief that we should delete as we look for falsehoods in our own beliefs (unless, again, you choose to retain it because of a religious belief that truly justifies it).

*Cf. Bernard Williams on “thick” moral concepts in Ethics and the Limits of Philosophy, pp. 140-1.


to whom do the ancient Greeks belong?

There has been some valuable debate about the diversity of the authors on the syllabus of the Summer Institute of Civic Studies. A participant noted, in particular, that Aristotle is mentioned over and over again in the readings. Is that a sign that the scope of the authors is too narrow for the 21st century world?

It could be. My own views on that question are complex and unsettled. But I think it is worth thinking seriously about the identity of a person like Aristotle.

On one hand, he was (to use our terms) a white man. He spoke an Indo-European language and lived in a country that currently belongs to the EU; in fact, his countrymen invented the idea of “Europe” as distinct from “Asia.” He was the tutor of another white man, Alexander, who conquered Egypt, Mesopotamia, and northern India. Aristotle’s thought deeply influenced Greco-Roman civilization and then was grafted onto Western Christian thought (especially after 1100) so that he now provides core ideas for Catholicism and some of its Protestant offshoots. So he is quintessentially Western.

On the other hand, Aristotle lived in a culture strikingly remote from our own. If we are individualistic, materialistic, technocratic, and used to mass societies, he came from a world of tightly integrated, deeply pious, zealously communitarian city-states. He lived in the eastern Mediterranean, influencing and studying cultures in countries that we now call the “Middle East.” The idea of whiteness had yet to be invented in his era. His thought arrived in the Christian world via Islamic authors who had made heavy use of him while hardly anyone in what we now call “the West” knew anything about him. The main entry point for his thought into the Catholic world was the Spain of the “tres culturas” (Islam, Christianity, and Judaism). Today, he is more likely to be studied deeply in Shiite Iran or in a Catholic seminary in Bolivia than in the United States.

I do not dismiss the argument that a syllabus in which most of the authors refer to Aristotle is too narrow. But I do dispute the idea that Aristotle is somehow “ours” (where “we” are Westerners) and doesn’t also belong to the rest of the world.

See also: Jesus was a person of color; avoiding the labels of East and West; when East and West were one; and on modernity and the distinction between East and West.


Hannah Arendt and thinking from the perspective of an agent

In the following passage from On Revolution (pp. 42-3), Hannah Arendt is criticizing the Hegelian tradition of German philosophy (including Marx) that purports to find fundamental meanings in the narrative of world history.  I think that her words would also describe mainstream social science, which attempts to explain ordinary events empirically rather than philosophically:

Politically, the fallacy of this new and typically modern philosophy is relatively simple. It consists in describing and understanding the whole realm of human action, not in terms of the actor and the agent, but from the standpoint of the spectator who watches a spectacle. But this fallacy is relatively difficult to detect because of the truth inherent in it, which is that all stories begun and enacted by men unfold their true meaning only when they have come to their end, so that it may indeed appear as though only the spectator, and not the agent, can hope to understand what actually happened in any given chain of deeds and events.

The more successful you are in social science, the more you can explain who acts and why. By explaining “deeds and events” that have already happened, you make them look determined. You seek to reduce the unexplained variance. But when you are a social actor, it feels as if you are choosing and acting intentionally. The unexplained is a trace of your freedom.

Arendt does not assert that the spectator’s perspective is epistemically wrong, but that it reflects a political fallacy. It has the political consequence of reducing freedom.

On p. 46, she gives an example: the French Revolution has been understood in ways that hamper the agency and creativity of subsequent revolutionaries. She even argues that revolutionary leaders have submitted to being tried and executed because they assume that revolutions must end in terror. Thus all later upheavals have been

seen in images drawn from the course of the French Revolution, comprehended in concepts coined by spectators, and understood in terms of historical necessity. Conspicuous by its absence in the minds of those who made the revolutions as well as of those who watched and tried to come to terms with them, was the deep concern with forms of government so characteristic of the American Revolution, but also very important in the early stages of the French Revolution.

If you are a political agent, you believe that you can invent or reconstruct “forms of government” to reflect your considered opinions. Deliberate institutional design and redesign seems both possible and valuable. But if you think of history as inevitable and driven by grand forces (the World Spirit, the class struggle), by root causes (capitalism, racism), or by empirical factors (income, gender, technology), then institutional design seems to be an outcome, not a cause; and the designers appear to lack agency. “Civic Studies” can be seen as a reorientation of the humanities and social sciences so that they take an agentic perspective and therefore avoid the “political fallacy” of determinism.

See also: Roberto Unger against root causes and the visionary fire of Roberto Mangabeira Unger


defining “games”

I am reading Josh Lerner’s Making Democracy Fun: How Game Design Can Empower Citizens and Transform Politics because it makes an important argument. Games are fun for specific reasons; most political processes fail to be fun because they lack those elements; and we could make politics more fun without sacrificing serious purposes if we learned from game design. That’s the great value of the book, but here is a philosopher’s digression ….

Lerner (p. 29) defines games as “systems where players engage in artificial conflict, defined by rules, that results in measurable outcomes.” My ears perk up at any definition of “games” because Ludwig Wittgenstein famously avoids defining that word in his Philosophical Investigations. There he observes that games come in many different forms and asserts that no single feature defines them all. Games constitute a family of cases, each of which resembles several others even though they are not all alike in any particular respect. We know how to use (and teach) the word “game” even though we cannot define it in terms of necessary and sufficient conditions. This observation is important for Wittgenstein because he believes that language is a heterogeneous set of games. And we think in language. Thus our thought is a set of practices that lack a common feature, yet we can learn to think and communicate.

Lerner offers a definition. He emphasizes relevant and important features of many practices that we call “games”–features that we should heed when we design political processes, which is Lerner’s interest. One wouldn’t need his definition to understand the word “game”: I have been playing games for almost half a century without thinking in Lerner’s terms. His doesn’t exactly work as a literal definition, because, for instance, a business competition could easily be an “artificial conflict, defined by rules, that results in measurable outcomes” such as profit and loss. If that competition is devoid of fun, we wouldn’t call it a “game,” except metaphorically. Also, if you showed Lerner’s definition to someone who had never played a range of games, it wouldn’t communicate what he has in mind. This person might think of standardized tests, duels, court cases, and other artificial conflicts that we don’t usually call “games.”

This is not a criticism of Lerner. I think his definition plays its intended role in his book. He presumes some real world experience with games and provides many vivid examples to expand one’s store of cases. His definition points to general tendencies in those examples that are important in a different context, politics. That is a typical and appropriate way to advance an argument. But I am left thinking that Wittgenstein was right about the indefinability of the word “game.”

(As a digression on this digression: Wittgenstein wrote in German, and the word “Spiel” means both “game” and “play.” For Lerner, the differences between the English words “game” and “play” are important; to make politics more game-like is different from making it more playful. Does Wittgenstein fail to see a common denominator to all “Spiele” because that word encompasses play as well as games? I don’t think so: all of his examples are actually “games” in the English sense. His argument works perfectly well when translated.)


voting and punishment: Foucault, biopower, and modern elections

Michel Foucault wrote a great deal about punishment as a tool that governors use to discipline the governed. Voting seems like the opposite: a device for the governed to discipline the governing. But Foucault’s concept of bio-politics can be illuminatingly applied as a critique of modern voting.

Foucault begins “Security, Territory, Population” (his 1977-8 lectures at the Collège de France) with a “very simple, very childish example” of punishment in three forms.

  • Juridico-Legal: The law defines a category of actions as a crime (e.g., theft), and sets a certain punishment to follow it in order to restore justice. This punishment is usually conducted in public and on the body of the accused.
  • Disciplinary: Punishment is used to influence behavior, both of the person being punished and of others who may be deterred. Punishments are now designed to have results; for instance, prisons become “houses of correction.” If a given punishment lacks beneficial consequences (as Cesare Beccaria argued of torture), it should be repealed. But in Discipline and Punish, Foucault interprets this apparent humanity or leniency as a reflection of an ominous improvement in the efficiency of discipline, whose purpose is “not to punish less, but to punish better.”
  • Security: The objective becomes to influence the frequency of undesirable actions (such as theft) in the population as a whole. Outcomes are measured statistically, for instance, in terms of crimes/capita or probabilities of recidivism. A given punishment, such as imprisonment, is now a mere tool for security, to be assessed by its aggregate costs and benefits and compared against other tools, such as paying or training people to behave as desired or subjecting them to surveillance and monitoring.

Foucault emphasizes that these three “modulations” of punishment have not simply replaced one another in a historical sequence. Even medieval law sometimes aimed at security; juridico-legal thinking remains alive today. But security has become far more prominent in the current era than it was before.

Like punishment, voting has adopted relatively durable forms but has changed its purposes and rationales in profound ways. Drawing on Michael Schudson’s accessible history, I would identify the following three stages in the history of US voting:

  • Nineteenth Century: Voting is mostly a public expression of full membership in a group. By voting at all, a man shows that he is a full and free US citizen. By voting for a party, he shows his loyalty to a sub-population, e.g., Southern white Protestant farmers vote for Democrats. Voting is conducted in public (ballots are not secret) along with torchlight parades and other public rituals. Generally, everyone in a given community votes alike and reinforces each other. Voting is an obligation.
  • Progressive Era: Voting is a private choice among independent candidates and ballot questions. Voting maximizes the degree to which the government represents the voter’s interests and values. Elections also punish corrupt or incompetent incumbents by rotating them out of office. To enable a free and precise choice, the ballot is now secret; candidates are distinguished from parties; numerous offices are made elective; and important questions are put to referenda. Reporters, experts, and civic educators purport to assist voters in making up their own minds. Voting is a source of power that should be employed responsibly.
  • Post-Watergate: For individuals, voting is one means of influencing the government (at a time when other means have proliferated) and is one optional way to spend time and energy. A prospective voter is assumed to weigh the costs of voting–including the costs of becoming informed–against its benefits. The population is assumed to vote as a function of large external factors, such as the billions of dollars spent on campaign advertising and the constantly shifting procedures for registering and voting. Candidates are entrepreneurs who make heavy use of Big Data to target and influence citizens. Some prominent political scientists and jurists defend private campaign finance on the basis that the various campaign donors cancel each other out in a competitive market. Voting, running for office, and giving money are choices; aggregate results can be predicted.

The three stages of voting resemble those of punishment. In each case, we see a move from 1) symbolic to 2) deliberately manipulative to 3) scientific and statistical. We also see a move from 1) automatic to 2) individually tailored to 3) designed at a social scale. And a sequence of 1) physical impact on bodies, to 2) influence over individual minds, to 3) tweaking the milieux that shape mass behavior. Foucault calls scientific control over the contexts that shape human behavior “bio-politics,” which is the ascendant norm.

In the case of punishment, the tool’s effectiveness has increased, but control is increasingly dispersed. The medieval king was fully in charge of the gallows, but he couldn’t influence much of his realm with it. The modern regime of schools, prisons, and police is much more effective and pervasive, but there is no single king. Power strengthens but also multiplies.

In the case of voting, the tool may possibly have become more powerful, but the individual voter pretty clearly has less influence today, for other political acts (from drawing district lines to allocating campaign dollars) have become highly sophisticated and effective. Voting looks more like a dependent variable than the cause of anything.

If this portrait of the current situation is accurate, we need both an assessment and a strategy for improvement. Foucault proposes some theses about assessment and strategy at the outset of “Security, Territory, Population”:

I do not think there is any theoretical or analytical discourse which is not permeated or underpinned in one way or another by something like an imperative discourse. However, in the  theoretical domain, the imperative discourse that consists in saying “love this, hate that, this is good, that is bad, be for this, beware of that,” seems to me, at present at any rate, to be no more than an aesthetic discourse that can only be based on choices of an aesthetic order. And the imperative discourse that consists in saying “strike against this and do so in this way,” seems to me to be very flimsy when delivered from a teaching institution or even just on a piece of paper. … So, since there has to be an imperative, I would like the one underpinning the theoretical analysis we are attempting to be quite simply a conditional imperative of the kind: If you want to struggle, here are some key points, here are some lines of force, here are some constrictions and blockages. In other words, I would like these imperatives to be no more than tactical pointers. … So in all of this I will therefore propose only one imperative, but it will be categorical and unconditional: Never engage in polemics.

Contra Foucault, I would like to assert that the current system of elections (and much worse, of prisons) in the US is bad; that this is not a merely aesthetic judgment; that making such judgments is worthwhile if you defend them; and that effective polemics are badly needed. But I take Foucault’s point that a paper argument against the status quo can be valueless or arbitrary. As always, the question “What should we do?” requires tough-minded analysis that is about strategy as well as facts and values. Specifically, if we want to defend the Progressive Era ideal of voting, we must take seriously the deep shift toward what Foucault called “bio-power” in the society as a whole.

See also: when society becomes fully transparent to the state; qualms about Behavioral Economics; citizenship in the modern American republic: change or decline?


assessing a discussion

We discuss in order to address public problems together. We also develop morally through discussion–which, by the way, I would define very broadly to encompass a conversation with your neighbor over the backyard fence, with Leopold Bloom in the pages of Ulysses, with Angela Merkel through the New York Times, with Jesus in prayer, or with your late parent through memories and imagination.

I posit that the quality of discussion is a function of the skills, attitudes, and beliefs of the participants; the nature of the question under consideration; and the format. An individual’s contribution to a good discussion must be understood in context, because a given discursive act (such as making a concession or repeating a claim) can either be helpful or harmful, depending on the situation.

The tool I would use to assess discussion is a network map, where the nodes are the assertions made by the participants, and the links are explicitly asserted connections, such as “P implies Q” or “P is an example of Q” or “P is just like Q.” The network grows as the conversation proceeds–except when people stop adding new ideas and links–and each contribution can be assessed in terms of how it changes the network. A person’s statement can (for example) make a network larger, richer, denser, or more coherent.

As an illustration, I’ve mapped a 2005 Pew Research Center debate on the right to die (prompted by the then-recent Terri Schiavo case) that involved Daniel W. Brock (a medical ethicist), R. Alta Charo (a law professor), Robert P. George (a political theorist), and Carlos Gómez (a hospice physician). The transcript is here and my map can be explored here.


This topic (end-of-life decisions) has certain features: it raises fundamental metaphysical questions rather than empirical questions that could be settled with data. It poses absolute and irrevocable decisions, unlike questions about the distribution of scarce resources, which can be negotiated. As for the format, it involved relatively long prepared statements by just four experts, in contrast to a free-for-all among a larger group, which would have a different structure. And the speakers, although diverse in perspectives, were all accustomed to a certain style of argument (relatively abstract and organized). It would be interesting to contrast this transcript to, for instance, a New England town meeting about a budget.

Dan Brock goes first and has a chance to lay out a position in favor of allowing a patient or her surrogate to end life support. His position is neatly organized, with the principle of autonomy at the center. He names that principle as the underlying rationale for a series of professional reports and court decisions that represent what he calls the current consensus. He connects autonomy to several related concepts: bodily integrity, privacy, self-determination, and choice. He draws the explicit implication that an autonomous patient must be able to choose or refuse any treatment. He adds the idea that when a person is incapable of exercising autonomy and has not made an advance directive, the best course is to empower a surrogate to choose. And he denies that the patient’s or surrogate’s choice should be constrained by supposed distinctions between starting versus stopping care, hydration/nutrition versus medical treatment, or a terminal versus a stable condition. Below is his position, isolated from the rest of the network.

[Image: network map of Dan Brock’s position, isolated from the rest of the network]

Brock’s position is consistent (no nodes contradict each other), coherent (all nodes are connected), and centralized around the concept of autonomy. I would attribute those features of his position to: 1) the format (he gives prepared remarks that come first in a debate), 2) the professional style of the speaker (a professional philosopher), 3) the nature of the topic (bioethics), and 4) Brock’s position as a liberal who strongly favors autonomy. Indeed, Robert P. George, the conservative theorist, says later in the debate: “liberals have to come up with a justification for placing autonomy in the central position in the first place, and that requires the defense of a moral proposition.” Note George’s use of a network metaphor to characterize Brock’s view.

Dr. Carlos Gomez speaks second. Unlike Dan Brock, he doesn’t produce a single, organized argument with explicit connections that link all of his ideas. I count nine different clusters of points in his remarks. Gomez’ points–isolated from the rest of the network–are shown below.

[Image: network map of Carlos Gomez’ points, isolated from the rest of the network]

An important claim for Gomez only becomes evident to me (although this might be my own limitation as an interpreter) during the following exchange from the Q&A:

MODERATOR: Actually, before we go to the next question, when you said autonomy misses something essential in this sort of doctor-patient relationship, would you elaborate a little bit more on what that means in the real world?

MR. GOMEZ: Yeah, I’ve never had a patient knock on my office door, come in, sit down, and say, “I’m here to exercise my autonomy.” Now I may be a little too glib there, but what I’m suggesting is that one of the reasons that they are coming to me is precisely by nature of what I profess as a physician, by nature of what I know in terms of my skills, and also by nature – and on this I think Robby is dead on – by nature of the fact that there is a moral construct to what it means to be a physician or a nurse, or any other professionals that professes publicly what they’re going to do.

I think what Carlos Gomez has been implying all along is that nurses and doctors are required to show care for a patient, and an ethic of care is inconsistent with ending the patient’s life. Further, caregivers should have a strong voice in the debate about bioethics. Unlike Dan Brock, however, Gomez does not present that position as an organized argument but alludes to it with relatively scattered claims about how, for instance, there is actually no consensus about end-of-life treatment and the press is uninformed about hospice care. If I were to evaluate Gomez’ participation, I would say that he is less rhetorically effective than he might have been because he never states a claim that actually is central for him. The moderator assists not only Gomez but also the group by drawing out one central node that had not been clear before. On the other hand, Gomez clearly contributes ideas to the conversation and connects many of them to points already introduced by Dan Brock; so he broadens and enriches the discussion.

Alta Charo, a law professor, speaks third. She makes a cluster of points about how people mistake biological patterns for moral imperatives, and a related cluster of points about how sometimes the law appropriately creates “fictions” that are not based on biology, such as the idea of adoptive parenthood. She also makes at least nine other points that don’t explicitly connect to these two clusters. Her view is about as coherent as Carlos Gomez’. However, she is in a different position from him. She generally holds the same liberal position as Dan Brock, who has already spoken. It would not contribute to the conversation for her to repeat Brock’s argument for the centrality of autonomy, although she does state that choices about life must be personal and free. Instead, she builds ideas around the structure that Dan Brock has already laid out.

George follows Charo, and he lays out an alternative view to Brock’s, in which autonomy is explicitly not the central idea. Instead, “human life, even in developing or severely mentally disabled conditions, [is] inherently and unconditionally valuable.” His structure is about as consistent, coherent, and centralized as Brock’s, but it has a different center. Below is shown a network consisting only of the ideas proposed by Brock and George. “Human life is unconditionally valuable” is a central node in the top third of the picture; autonomy is a different center about two-thirds down.

[Image: network map of the ideas proposed by Brock and George]

The two networks touch at multiple points, either because George contradicts Brock (I show explicit disagreements with darker lines) or because he acknowledges specific areas of agreement.

Later, in the Q&A, George makes a discursive move that can sometimes be helpful to a group. He says, “As much as I love disagreement and dissent, I think that on one point on which Carlos and Dan thought they were arguing, there’s not actually a disagreement.” This is an example of tying together two points that have already been made in order to increase the coherence of the network. It is a helpful move–unless the two points are not actually alike.

By the time the session ends, the whole network is fairly connected. But certainly, no agreement has been reached, and two nodes remain central for different people but mutually inconsistent. That may be an inevitable feature of debates about the ends of life, or it may be a function of the way these speakers reason about such questions. Although they are speaking lightly at this juncture, Brock and Gomez imply a serious point about the impasse between them:

Dan Brock: Well Carlos and I first met on a PBS show about assisted suicide I guess 15 years ago, was it, Carlos? And we disagreed then roughly the way we do now, so –

MR. GOMEZ: I’m unteachable.

MR. BROCK: So am I.

I would hope that more mutual learning can occur when issues are either more empirical or more negotiable than this one is.
