generosity as a virtue

Summary: I will argue here that generosity is a virtue when it involves respectful care for an individual. Therefore, paradigm cases of generosity involve acts of personal attention and two-way communication, such as carefully selecting an appropriate gift or making a kind remark. To assess a transfer of money, it is better to ask whether it manifests justice, not generosity. Aristotle launched this whole discussion by drawing a useful distinction between generosity and justice. However, because his ideas of justice were constrained, and because he analyzed generosity strictly in terms of money, he left the impression that generosity was not a very appealing virtue. We can do better by focusing on acts conducted in the context of mutually respectful relationships.


To begin: virtues are traits or dispositions that we should want to cultivate in ourselves and in others to improve these individuals’ characters, to raise the odds that they will benefit their communities, or both.

Generosity is found on famous lists of virtues, such as Aristotle’s twelve (or so) and the Buddha’s six paramitas. However, generosity receives much less attention than most other virtues in contemporary English-language philosophy. Miller (2018) finds only three “mainstream philosophy” articles about generosity prior to his own. Ward (2011) finds little discussion of generosity in scholarship on Aristotle, notwithstanding that a whole section of Aristotle’s Nicomachean Ethics is focused on it.

I would propose this explanation. Aristotle continues to provide the most influential framework for theories of virtues in the academic world, partly because he is often insightful, and also because he shaped ethics in the three Abrahamic religions. However, his account of generosity (eleutheriotes–more literally translated as “liberality”) makes it a problematic trait. And that is why the virtue does not receive much attention in Anglophone and European academic philosophy.

Aristotle introduces his discussion of generosity with an explicit mention of money:

Let us speak then of freeness-in-giving [eleutheriotes, generally translated as generosity or liberality]. It seems to be a mean in respect to needs/goods/property [chremata], for a man is not praised as generous in war, nor in matters that involve temperance, nor in court decisions, but in the giving or taking of goods, and especially in giving them–“goods” meaning all those things whose worth is measured with coins (NE 1119b–my translations).

For Aristotle, generosity does not mean transferring money to people who have a right to it, because that is the separate virtue of justice. Rather, generosity means donating material things voluntarily because one is not overly enamored of them, and doing so in an excellent way.

Things that are done in virtue are noble and are done for their nobility. The generous man therefore will certainly give for the nobility of it. And he will do it rightly, for he will give to the right people, in the right amount, at the right time, and whatever else counts as right giving; and he will give with pleasure or at least painlessly, for whatever is done virtuously is pleasant and painless, or at least not distressing (NE 1120a).

The appropriate recipient is not one who deserves the money (again, that would be an act of justice), but rather someone whom a person of generous spirit would desire to help. I imagine a land-owner being generous to his tenant or to a retainer of long standing.

Aristotle acknowledges that a person with less money can be as generous as a rich man, since the appropriate measure is the proportion of one’s wealth that one donates. Nevertheless, his paradigm of a generous person is a man of inherited wealth who is liberated enough from the base appeal of material things that he voluntarily gives some money away in a gentlemanly fashion (NE 1120b).

I will not claim that the ideal of generosity in the Buddhist canon is the same as in Aristotle, but the early Buddhist texts also appreciate people who give things away because they are free from a desire for goods:

Furthermore, a noble disciple recollects their own generosity: “I’m so fortunate, so very fortunate! Among people full of the stain of stinginess I live at home rid of stinginess, freely generous, open-handed, loving to let go, committed to charity, loving to give and to share.” When a noble disciple recollects their own generosity, their mind is not full of greed, hate, and delusion. This is called a noble disciple who lives in balance among people who are unbalanced, and lives untroubled among people who are troubled. They’ve entered the stream of the teaching and develop the recollection of generosity (Numbered Discourses 6.10.1, translated by Bhikkhu Sujato).

One difference is that Aristotle mainly thinks about generosity to people who are poor against their will, whereas the paradigm of generosity in early Buddhism is a wealthy layperson’s donation to monks, who have voluntarily renounced worldly goods. In fact, I am not sure that monks can be generous in the Pali Canon, because their role is to receive alms. Another difference–typical when comparing Aristotle to classical Buddhism–is that the Buddhist path leads toward complete liberation, whereas Aristotle expects us to navigate happiness and suffering until death.

In any case, for Aristotle, generosity is relational (one person is generous to another), and it usually accompanies an unequal relationship. As Ward writes, it “abstracts” from justice. When we are being generous, in Aristotle’s sense, we do not have justice on our minds, although we might also act justly.

If one accepts inequality and suffering as natural, then justice is simply a matter of paying one’s debts, honoring contracts, and otherwise following the current rules; and generosity easily accompanies justice. A true aristocrat exhibits justice by paying his bills and taxes. He may also make generous gifts, although never giving so much as to threaten his social standing. (Aristotle defines prodigality as giving so much as to ruin one’s own resources: NE 1119b–1120a.)

However, if we decide that the current distribution of rights and goods is unjust and should be changed, then we will not be impressed by a person who is generous yet not just. More than that, we may feel that justice is the only standard, and generosity is virtuous just to the degree that it approximates justice. Then a gentleman’s holiday gifts are virtuous insofar as they diminish an unjustifiable disparity between the lord and his tenants. The effect is probably quite small. It would be better if the gentleman were prodigal or if his lands were reallocated. Meanwhile, if he takes satisfaction in his own gift-making–as evidence that he is free from base material desires–then he looks worse, not better. If he makes gifts, he should demonstrate respect for the recipients by making the payments seem obligatory and insufficient.

By alluding to land reform, I am suggesting that a social system should be egalitarian, and some powerful force, such as a modern government, should make it so. This is not necessarily correct. Adam Smith makes a different argument for generosity. In his view, a market economy is best for everyone because it continuously increases prosperity. But rich people should be generous, not only for the sake of those with less but also because a reasonable person will not be overly attached to his own wealth and will know when he has more than enough.

When “a man of fortune spends his revenue chiefly in hospitality” (benefitting friends), he demonstrates a “liberal or generous spirit” and also puts his wealth into circulation, thus contributing to the “increase of the public capital.” On the other hand, by hoarding his money for himself, a person would manifest “a base and selfish disposition” (Wealth of Nations, ii:3). It is less clear whether Smith recommends generosity toward poor people who are not one’s friends (discussed in Birch 1998). But in general, virtues are good for the individual and contribute to a civil society. Generosity is just one example; “humanity, kindness, compassion, mutual friendship and esteem” are others (Theory of Moral Sentiments, IV).

Whether you endorse or reject Smith’s view of markets, at least his theory of generosity is connected to his theory of social justice. Ward argues that Aristotle also considers generosity in the context of his view of a good community. She discusses the sections in the Politics where Aristotle says that the best regime empowers the middle classes. They are neither arrogant, like the rich, nor craven, like the poor (Pol. 1295b5).

A democracy dominated by the middle classes enables deliberation among peers. Equal citizens can look one another in the eye, say what they think, and cast equal votes to set policy. To the extent that Aristotle appreciates this kind of political system, then his discussions of generosity (giving moderate amounts of money to individuals) and munificence (giving lots of money to the city) begin to seem ironic. These are virtues of oligarchy, and Aristotle prefers democracy (albeit with qualifications).

I appreciate Ward’s argument, but I suspect that for Aristotle, equal standing or isonomia can only work for an elite (even if it extends to the middling sort), and they should be generous to those who are naturally inferior. Members of the Assembly should treat the large majority of humans who are non-citizens generously, while treating one another with equal respect. However, once we embrace universal human rights, then everyone should be a citizen–somewhere–and the Aristotelian versions of generosity and munificence begin to look problematic.

As long as we are thinking primarily about the transfer of money or goods that money can buy, then I think that justice is the relevant virtue, and generosity is a poor substitute. This point does not depend on a radically egalitarian theory of social justice, because a libertarian should also put justice first and generosity well behind.

However, we naturally use the word “generous” for things other than money. For instance, “generous reading” is a common phrase for interpretive methods that seek to reconstruct persuasive positions from texts. Ann Ward reads Aristotle generously by combining his discussion of generosity in the Nicomachean Ethics with his analysis of democracy in the Politics.

Likewise, we can make “generous remarks” at a colleague’s retirement party, and our words will offer real insights about the colleague’s contributions. We can also give things or people our “generous attention.”

Our partner the Vuslat Foundation defines generous listening as “active, empathetic engagement with another person’s thoughts and feelings. At its core, generous listening is about creating a space for authentic dialogue.”

Think of a colleague who skillfully chooses holiday gifts, wraps them nicely, and adds thoughtful notes. The objects may have limited monetary value yet reflect generous attitudes toward their recipients because they match each person’s desires and needs. Finding the gifts requires time, and during that time, the donor focuses on the recipient. We would not object if the skillful donor takes pleasure and pride, just as we generally appreciate cases when people derive happiness from their own virtue.

Whereas money is fungible, the generosity in these examples is specific to the individuals involved. Aristotle (like the Buddhist sutra I quoted earlier) is most interested in generosity as a display of freedom on the part of the giver, but in the cases I am sketching, the donors focus on the recipients. And these forms of generosity are relatively independent of the social system. I presume that generous speeches at retirement parties are appreciated alike in state socialism, corporate capitalism, and the nonprofit sector.

We might, then, agree with Smith in the Theory of Moral Sentiments that generosity is one of the virtues that “appear in every respect agreeable to us.” Generosity is agreeable regardless of the social or economic system, and apart from justice. But it is a virtue that requires benevolent respect for the recipient, listening and speaking as well as giving. Contrary to Aristotle, it is least relevant to monetary transfers and does not reflect a gentlemanly insouciance about private wealth. Rather, it is best manifested in reciprocal relationships, when the parties devote time and attention to one another.


Sources: Christian B. Miller, “Generosity,” in Michel Croce and Maria Silvia Vaccarezza, eds., Connecting Virtues: Advances in Ethics, Epistemology, and Political Philosophy (Wiley, 2018): 23-50; Ann Ward, “Generosity and inequality in Aristotle’s ethics,” Polis: The Journal for Ancient Greek and Roman Political Thought 28.2 (2011): 267-278; Thomas D. Birch, “An analysis of Adam Smith’s theory of charity and the problems of the poor,” Eastern Economic Journal 24.1 (1998): 25-41. My translations of Aristotle use the text from Project Perseus.


how thinking about causality affects the inner life

For many centuries, hugely influential thinkers in each of the Abrahamic faiths combined their foundational belief in an omnipotent deity with Aristotle’s framework of four kinds of causes. Many believers found solace when they discerned a divine role in the four causes.

Aristotle’s framework ran afoul of the Scientific Revolution. Today, there are still ways to be an Abrahamic believer who accepts science, and classical Indian thought offers some alternatives. Nevertheless, the reduction of causes from Aristotle’s four to the two of modern science poses a spiritual and ethical challenge.

(This point is widely understood–and by no means my original contribution–but I thought the following summary might be useful for some readers.)

To illustrate Aristotle’s four causes, consider my hands, which are currently typing this blog post. Why are they doing that?

  • Efficient cause: Electric signals are passing along nerves and triggering muscles to contract or relax. In turn, prior electrical and mechanical events caused those signals to flow–and so on, back through time.
  • Material cause: My hand is made of muscles, nerves, skin, bones, and other materials, which, when so configured and stimulated, move. A statue’s hand that was made of marble would not move.
  • Formal cause: A hand is defined as “the terminal part of the vertebrate forelimb when modified (as in humans) as a grasping organ” (Webster’s dictionary). I do things like grasp, point, and touch with my hand because it is a hand. Some hands do not do these things–for instance, because of disabilities–but those are exceptions (caused by efficient causes) that interfere with the definitive form of a hand.
  • Final cause: I am typing in order to communicate certain points about Aristotle. I behave in this way because I see myself as a scholar and teacher whose words might educate others. In turn, educated people may live better. Therefore, I move my fingers for the end (telos, in Greek) of a good life.

Aristotle acknowledges that some events occur only because of efficient and material causes; these accidents lack ends. However, the four causes apply widely. For example, not only my hand but also the keyboard that I am using could be analyzed in terms of all four causes.

The Abrahamic thinkers who read Aristotle related the Creator to all the causes, but especially to the final cause (see Maimonides, Guide for the Perplexed, 2:1 or Aquinas, Summa Theologiae I, Q44). In a well-ordered, divinely created universe, everything important ultimately happens for a purpose that is good. Dante concludes his Divine Comedy by invoking the final cause of everything, “the love that moves the sun and other stars.”

These Jewish and Christian thinkers follow the Muslim philosopher Avicenna, who even considers cases–like scratching one’s beard–that seem to have only efficient causes and not to happen for any end. “Against this objection, Avicenna maintains that apparently trivial human actions are motivated by unconscious desire for pleasure, the good of the animal soul” (Richardson 2020), which, in turn, is due to the creator.

However, writing in the early 1600s, Francis Bacon criticizes this whole tradition. He assigns efficient and material causes to physics, and formal and final causes to metaphysics. He gestures at the value of metaphysics for religion and ethics, but he doubts that knowledge can advance in those domains. His mission is to improve our understanding and control of the natural world. And for that purpose, he recommends that we keep formal and final causes out of our analysis and practice only what he calls “physics.”

It is rightly laid down that true knowledge is that which is deduced from causes. The division of four causes also is not amiss: matter, form, the efficient, and end or final cause. Of these, however, the latter is so far from being beneficial, that it even corrupts the sciences, except in the intercourse of man with man (Bacon, Novum Organum, 1620, II:2; P. F. Collier edition).

In this passage and others related to it, Bacon proved prescient. Although plenty of scientists after Bacon have believed in final causes, including divine ends, they only investigate efficient and material causes. Perhaps love moves all the stars, but in Newtonian physics, we strive to explain physical motion in terms of prior events and materials. This is a methodological commitment that yields what Bacon foresaw, the advancement of science.

The last redoubt of final causes was the biological world. My hand moves because of electrical signals, but it seemed that an object as complicated as a hand must have come into existence to serve an end. As Kant writes, “it is quite certain that in terms of purely mechanical principles of nature we cannot even adequately become familiar with, much less explain, organized beings and how they are internally possible.” Kant says that no Isaac Newton could ever arise who would be able to explain “how even a mere blade of grass is produced” using only “natural laws unordered by intention” (Critique of Judgment 74, Pluhar trans.). But then along came just such a Newton in the form of Charles Darwin, who showed that efficient and material explanations suffice in biology, too. A combination of random mutation plus natural selection ultimately yields objects like blades of grass and human hands.

A world without final causes–without ends–seems cold and pointless if one begins where Avicenna, Maimonides, and Aquinas did. One option is to follow Bacon (and Kant) by separating physics from metaphysics, aesthetics, and ethics and assigning the final causes to the latter subjects. Indeed, we see this distinction in the modern university, where the STEM departments deal with efficient causes, and final causes are discussed in some of the humanities. Plenty of scientists continue to use final-cause explanations when they think about religion, ethics, or beauty–they just don’t do that as part of their jobs.

However, Bacon’s warning still resonates. He suspects that progress is only possible when we analyze efficient and material causes. We may already know the final causes relevant to human life, but we cannot learn more about them. This is fine if everyone is convinced about the purpose of life. However, if we find ourselves disagreeing about ethics, religion, and aesthetics, then an inability to make progress becomes an inability to know what is right, and the result can be deep skepticism.

Michael Rosen (2022) reads both Rousseau and Kant as “moral unanimists”–philosophers who believe that everyone already knows the right answer about moral issues. But today hardly anyone is a “moral unanimist,” because we are more aware of diversity. Nietzsche describes the outcome (here, in a discussion of history that has become a science):

Its noblest claim nowadays is that it is a mirror, it rejects all teleology, it does not want to ‘prove’ anything any more; it scorns playing the judge, and shows good taste there, – it affirms as little as it denies, it asserts and ‘describes’ . . . All this is ascetic to a high degree; but to an even higher degree it is nihilistic, make no mistake about it! You see a sad, hard but determined gaze, – an eye peers out, like a lone explorer at the North Pole (perhaps so as not to peer in? or peer back? . . .). Here there is snow, here life is silenced; the last crows heard here are called ‘what for?’, ‘in vain’, ‘nada’ (Genealogy of Morals, Kaufmann trans. 2:26)

Earlier in the same book, Nietzsche recounts how, as a young man, he was shaped by Schopenhauer’s argument that life has no purpose or design. But Nietzsche says he detected a harmful psychological consequence:

Precisely here I saw the great danger to mankind, its most sublime temptation and seduction – temptation to what? to nothingness? – precisely here I saw the beginning of the end, standstill, mankind looking back wearily, turning its will against life, and the onset of the final sickness becoming gently, sadly manifest: I understood the morality of compassion [Mitleid], casting around ever wider to catch even philosophers and make them ill, as the most uncanny symptom of our European culture which has itself become uncanny, as its detour to a new Buddhism? to a new Euro-Buddhism? to – nihilism? (Genealogy of Morals, Preface:6)

After mentioning Buddhism, Nietzsche critically explores the recent popularity of the great Buddhist virtue–compassion–in Europe.

Indeed, one of the oldest and most widely shared philosophical premises in Buddhism is “dependent origination,” which is the idea that everything happens because of efficient causes alone and not for teleological reasons. (I think that formal causes persist in Theravada texts but are rejected in Mahayana.)

Dependent origination is taken as good news. By realizing that everything we believe and wish for is the automatic result of previous accidental events, we free ourselves from these mental states. And by believing the same about everyone else’s beliefs and desires, we gain unlimited compassion for those creatures. Calm benevolence fills the mind and excludes the desires that brought suffering while we still believed in their intrinsic value. A very ancient verse which goes by the short title ye dharma hetu says (roughly): “Of all the things that have causes, the enlightened one has shown what causes them, and thereby the great renouncer has shown how they cease.”

I mention this argument not necessarily to endorse it. Much classical Buddhist thought presumes that a total release from the world of causation is possible, whether instantly or over aeons. If one doubts that possibility, as I do, then the news that there are no final causes is no longer consoling.


Secondary sources: Richardson, Kara, “Causation in Arabic and Islamic Thought”, The Stanford Encyclopedia of Philosophy (Winter 2020 Edition), Edward N. Zalta (ed.); Michael Rosen, The Shadow of God: Kant, Hegel, and the Passage from Heaven to History, Harvard University Press, 2022. See also how we use Kant today; does skepticism promote a tranquil mind?; does doubting the existence of the self tame the will?; spirituality and science; and the progress of science.


Foucault the engaged scholar

I admit that I had long understood Michel Foucault as a “universal intellectual” — a thinker who conveys an original and general stance to the public, the nation, or the masses, serving as their conscience. If this intellectual is radically critical of the status quo, and his audience is the whole public, then the implication is: Revolution! Examples of revolutionary universal intellectuals include Rousseau, Marx, and Sartre.

Placed in that tradition, Foucault can be frustrating. He held a distinctive and original (albeit evolving) stance, he participated in radical politics in Tunisia and France, and he reached a global audience, yet he eschewed recommendations and explicit moral judgments. He seemed to conceal his own views, to the extent that he held them.

My take on Foucault has been changed (and my appraisal has been much improved) by reading three interviews conducted between 1976 and 1981 that are included in Rabinow’s and Rose’s The Essential Foucault anthology. These conversations have also revised my understanding of his major works.

In the 1976 interview, Foucault describes “universal intellectuals” as I did at the start of this post, but he says that “some years have passed since the intellectual has been called upon to play this role” (1976, 312). A universal intellectual works alone and addresses everyone. In contrast, a “specific intellectual”–a type that emerges after World War II (1976, 313)–works within an institution where knowledge and power come together. Examples include nuclear physicists, psychiatrists, social workers, magistrates, administrators, planners, and educators. They possess genuine knowledge that gives them influence. Since the failed revolution of 1968, it has become clear that beneficial social change depends on them, not on revolutionaries who fight the state (1976, 305). Specific intellectuals are becoming politically conscious and connected across disciplines and national borders (1976, 313).

And Foucault works with them. He doesn’t go into much detail about his own activities in these interviews, but we know that psychiatrists have read his works about mental illness and sexuality, prison administrators have read his book on prisons, and people who train professionals have assigned his texts; and he acknowledges their influence on him. Thus his audience is not “the people,” and his contribution is not a philosophy. Instead, he is a professional historian who contributes information and insights to various conversations that are also informed by the behavioral and social sciences and law.

In a 1981 interview, Didier Eribon suggests that “criticism carried out by intellectuals doesn’t lead to anything” (1981, 171). This is meant as a challenge to Foucault, whom Eribon assumes to be an intellectual.

Foucault first notes that the previous twenty years have seen substantial changes–beneficial ones, I presume–in views of mental illness, imprisonment, and gender relations, issues on which he had worked intensively.

Next, he observes that progress does not result from political decisions alone; any policy requires implementation, and its impact depends on the people who implement it. At any rate, that is how I would gloss these words:

Furthermore, there are no reforms in themselves. Reforms do not come about in empty space, independently of those who make them. One cannot avoid considering those who will have to administer this transformation (1981, 171).

It follows that to influence the “assumptions” and “familiar notions” of practitioners is “utterly indispensable for any transformation” (172). (Compare my recent post on institutions).

Foucault concludes his response by criticizing the ways that universal intellectuals (whether famous or aspiring to fame) typically criticize society. He says, “A critique does not consist in saying that things aren’t good the way they are. …. Criticism consists in uncovering [everyday] thought and trying to change it” (1981, 172).

The key point, for me, is that “trying to change” something requires a strategy, and Foucault wants to abandon the strategy of changing everything all at once by telling The People that society is bad and should be different. His alternative strategy is to engage well-placed practitioners.

In the 1980 interview, Foucault elaborates his doubts about criticism that takes the form of denouncing existing things, ideas, or people:

It’s amazing how people like judging. Judgment is being passed everywhere, all the time. Perhaps it’s one of the simplest things mankind has been given to do. And you know very well that the last man, when radiation has finally reduced his last enemy to ashes, will sit down behind some rickety table and begin the trial of the individual responsible (1980, 176).

Foucault diagnoses Parisian intellectuals’ love of denouncing each other as a result of their “deep-seated anxiety that one will not be heard or read.” This anxiety motivates the “need to wage an ‘ideological struggle’ or to root out ‘dangerous thoughts'” (1980, 177).

The interviewer counters, “But don’t you think our period is really lacking in great writers and minds capable of dealing with its problems?” (1980, 177). Later, the same interviewer asks, “If everything is going badly, how do we make a start?” (1980, 178).

Foucault resists both pessimistic premises. “But everything isn’t going badly,” he exclaims (1980, 178). He describes a “plethora,” an “overabundance” of interesting ideas and people who have pent-up curiosity. The task, he proposes, is to “multiply the channels, the bridges, the means of information” so that more people with “thirst for knowledge” can learn from more other people (1980, 177).

In a passage that reminds me of Dewey’s The Public and its Problems (1927), Foucault describes his “dream of a new age of curiosity” (1980, 178). He says, “I like the word [curiosity]. It evokes ‘care’; it evokes the care one takes of what exists and what might exist.” (1980, 177). In the age of curiosity that he envisions, “people must be constantly able to plug into culture in as many ways as possible” (178-9).

Given Foucault’s understanding of his own role as a “specific intellectual,” he must have been at least somewhat concerned about his reputation. He was not only a historical specialist who helped fellow practitioners to become conscious of shared prejudices and to discover alternatives. He was also (and mainly) a world-famous French philosopher, a purported representative of movements like post-structuralism and postmodernism, whose public lectures on general subjects in venues like the Collège de France and UC-Berkeley were packed with aspiring philosophers, and whose interviews about the condition of the world were published in Le Monde and Libération.

I am not sure how he navigated this tension, not having read the biographies. But it’s clear that it worried him. In the 1980 interview, part of a series on major intellectuals in Le Monde, Foucault asks not to be named. The interview (still archived on Le Monde’s website) is headlined “The Masked Philosopher.” It begins:

Here is a French writer of some renown. Author of several books whose success has been affirmed well beyond our borders, he is an independent thinker: he is not linked to any fashion, to any party. However, he only agreed to grant us an interview about the status of the intellectual and the place of culture and philosophy in society on one explicit condition: to remain anonymous. Why this discretion? Out of modesty, calculation or fear? The question deserved to be asked–even if, by the end of this conversation, the mystery will undoubtedly have dissipated for the most perceptive of our readers…

Foucault explains that he would like to try being anonymous “out of nostalgia for a time when, since I was quite unknown, what I said had some chance of being heard” (my translation). In other words, we cannot hear Foucault well unless we shake the model of a famous thinker who offers big ideas. He wants us, instead, to ask whether the claims about specific phenomena that we find in his works ring true or false and whether they are useful or not for our purposes.


Sources: Michel Foucault, “Truth and Power” (1976), “The Masked Philosopher” (1980), and “So is it Important to Think?” (1981), all in Paul Rabinow and Nikolas Rose, The Essential Foucault (The New Press, 2003), but I retranslated the 1980 interview myself because of a misplaced modifier in the anthology. See also: Vincent Colapietro, “Foucault’s Pragmatism and Dewey’s Genealogies: Mapping Our Historical Situations and Locating Our Philosophical Maps,” Cognitio, 13/2 (2012), pp. 187-218; Foucault’s spiritual exercises; does skepticism promote a tranquil mind?; and Civically Engaged Research in Political Science.


phenomenology of nostalgia

The other day, I saw on social media that my 40th high-school reunion will happen next spring. I felt a pang. This sensation passed, and while it lasted, it offered some sweetness along with a sense of loss. I would not swallow a pill that prevented similar reactions in the future. Still, it was an interesting feeling that might tell me something about my personality or even about the nature of time and identity.

I suspect that my nostalgia reflected a mistake: a desire for something impossible (backward time-travel) or a failure to appreciate the living present sufficiently. Although I would refuse a cure, I might want to assess my response critically and direct my mind differently.

Marshawn Brewer offers a brilliant “Sketch for a Phenomenology of Nostalgia” (Brewer 2023) that has influenced the following thoughts, but I’ll concentrate on exploring my own experiences and won’t try to compare my first-person account to his broader (and better informed) study.

First, I notice that nostalgia focuses me on one period from my distant past, and the rest of my life seems to vanish, in a mildly distressing fashion. When we middle-aged people think, “High school seems like yesterday!”, it feels as if there weren’t many days between then and now. This is because we cannot think about many times at once. I could make myself nostalgic about virtually any intervening year–but only one at a time. I have the momentary sensation that I’ve thrown away the intervening years; they are somehow gone.

We might assume that experiences and settings are harder to recollect the longer ago they happened, much as objects tend to be smaller and fuzzier the further they are located from our eyes. But that analogy doesn’t hold. Distant memories often come back more forcefully than recent ones. Indeed, “middle-aged and elderly people [tend] to access more personal memories from approximately 10–30 years of age” than from other times in their past (Munawar, Kuhn, and Haque 2018). From my perspective, the decades of the 30s and 40s have a disproportionate tendency to fade away.

If I pause to focus on my teenage years, a certain scene comes into my mind. The first image happens to be a meeting of the Latin Club in the cafeteria after school. This is probably a composite or partial invention, but it is based on memories. I can move from that image to innumerable others from the same period in my life, but (as Brewer notes) nostalgia quickly adopts one setting or another–a physical location that is suffused with a certain atmosphere. It would be hard to feel nostalgic without this sense of place, which connects the word to its etymology (nostos plus algos = home-pain). Insofar as the feeling is bitter-sweet, the bitterness is a sense that one cannot go back to a place that one recalls. And if I were to return to the high school cafeteria, it would not seem to be the place that I remember.

By the way, I don’t see myself in the cafeteria; I see that room from my perspective, as if I inhabited my 17-year-old body again. I think my recollection is mostly visual, although I wonder if I am also summoning other senses. Certainly, a sound or smell can trigger nostalgia.

I was enrolled in high school for the standard four years, a brief period. However, Brewer notes that nostalgia has “an aeonic temporality” (from the word “aeon,” meaning an indefinite or very long period of time). Here Brewer cites J.G. Hart, who is worth quoting:

the time of my nostalgic past does not know a passing or fleeting character. Nostalgia is not about passing time but about eras, seasons or aeons. It is not about dates but about “times” (which in actual historical fact might have been quite long or quite short) which are enshrined in a kind of atemporal (i.e., non-fleeting) dimension: “the three days vacation with you in Wisconsin,” “our time at college,” etc. This “aeonic” character of the nostalgic world resembles the time of the mythic world. … In the nostalgic having of a non-fleeting aeon the themes of death, aging and illness are out of place (Hart 1973, pp. 406-7).

Childhood seems to have this aeonic character, perhaps in part because we gradually emerge into full awareness and the ability to use language. We cannot remember the beginning of our own childhoods. Regular events, such as birthdays, feel as if they recurred endlessly, even though we can actually celebrate no more than a dozen of our own birthdays between the onset of memory and adulthood. Parents and siblings take on an outsized and permanent or recurrent (once-upon-a-time) character. Like the gods in myths, these relatives have “back-stories” about how they came to be, but while our own story unfolds, they do not seem to change (cf. Hart, 414-15).

I write here of “we,” but I realize that experiences vary. I happen to have had a stable childhood, which would encourage my feelings of timelessness. Later, I had the opportunity to be a parent; and since then, we have watched the years when we raised our children recede–in turn–into memory. While I was a young parent, my own childhood seemed like the template or baseline reality, and I self-consciously inhabited my new role of fatherhood. At that time, my childhood seemed “aeonic,” while parenthood was a matter of specific events and changes that we adults planned or dealt with. But now that second wave of life increasingly has the same character as the first, echoing it. It is another once-upon-a-time.

Indeed, the things that I recall and miss are often my identities. At one time, I was a young guy, a novice at everything, a learner. Later, I was the dad of young kids, someone who played with Legos and read bedtime stories. I am not entitled to think of myself as either of those things anymore.

A Victorian house on a stately street,
Formal, ornate. The bell breaks the silence.
Would a gift have been wise--something to eat?
When to shift from pleasantries to science?
A ticking clock, long rows of serious books,
China, polished wood, a distant dog barks.
Pay attention, this might have some value.
It's rude to seek help without taking advice.
Now say what you've really come for, shall you?
Then: time to go? Did our talking suffice?
Not for years now have I been the visitor.
This is my parlor and I am the grey one,
The host, the ear, the kindly inquisitor.
How can it be that it's my turn to play one?
("The Student," 2021)

For me, nostalgia is not really a feeling that things were better in the past. My life has tended to improve. Rather, it’s the feeling that I used to have one set of identities in one context–for instance, as a graduate student–and those are now gone.

I agree with Brewer that nostalgia involves regret for a whole situation that feels harmonious or integrated, which suggests some alienation from the present. But I can remind myself that I was mildly alienated in the past–and frequently already nostalgic in those days–and I would guess that I am more comfortable in my current identities than in my previous ones. It’s just that I can’t inhabit the old roles as well. I cannot be both the deferential but ambitious graduate student and the avuncular advisor, and I should learn to accept that reality.

We can even be nostalgic about the present. I take that to be the meaning of Basho’s lines (as translated here by Jane Hirshfield):

In Kyoto,
hearing the cuckoo,
I long for Kyoto.

This is an example of mono no aware, that cultivated sense that the present is sublime and also transitory. It is a sad longing to experience what one is (in fact) experiencing.

Perhaps nostalgia-for-now is a desire to see the present in the simplified, comprehensive way that we recollect our own distant pasts. I feel that I know what it was like to be 17 years old: that identity comes to me in an instant. But what is it like to be me, now? I perceive a whole set of changing experiences, emotions, moods, and beliefs, and I’m not sure what they add up to. I want my “now” to resemble how I (falsely) imagine my past–as coherent. Hart writes:

Nostalgia is an instance of one of these unique moments of “gathering.” In it the dispersed projects of life find their unity …. We do not thematically have ourselves together; we are not perpetually in possession of ourselves. But there is a “synthesis in the making” and there are especial moments when I come to grasp my life more or less as a whole (Hart 1973, p. 405).

Nostalgia-for-the-present is a temptation for me, and I am not sure whether to accept (or even nurture) it or to learn to avoid it. Is it a way of appreciating the living moment, as Basho seems to? Or is it a neurotic distancing from the only thing that’s real–the now?

A final point of self-criticism: I believe that my pang of regret at the passing of 40 years is not only nostalgia for the past. At least as significant is my alarm that the future is shortening. Nostalgia looks backward, but one motivation (I believe–at least in my own case) is a desire to travel back to those times so that the end of my life would be further in the future. The ambitious graduate student has more years ahead than the kindly old mentor. To regret that difference is a kind of greediness, an unwise stinginess about time.

See: Marshawn Brewer, “Sketch for a Phenomenology of Nostalgia,” Human Studies 46.3 (2023): 547-563; K. Munawar, S.K. Kuhn, and S. Haque, “Understanding the reminiscence bump: A systematic review,” PLoS One 13.12 (2018); J.G. Hart, “Toward a phenomenology of nostalgia,” Man and World 6 (1973): 397-420. See also: “nostalgia for now,” “there are tears of things,” “the student,” “Midlife,” “when the lotus bloomed,” “to whom it may concern,” and “echoes.”


does skepticism promote a tranquil mind?

I think of myself as less sure about most important matters than are most people I know–more equivocal and conflicted. Maybe I shouldn’t be so confident about that comparison! Regardless, my self-perception often makes me wonder about skepticism as a stance. Is being skeptical a flaw, a virtue, or (most likely) a bit of both?

The virtues and drawbacks of skepticism have been an explicit topic of discussion for at least 23 centuries. In this mini-essay, I organize some of that conversation–from Greek, Indian, and modern sources–and conclude with a proposal, meant mostly for myself. As the great skeptic Montaigne wrote, “This is not my teaching, it is my studying; it is not a lesson for anyone else, but for myself. [But] what helps me just might help another” (ii.6).

On one hand, it seems that our duty is to glean what is right and then to act accordingly, to the best of our ability, always with an awareness that we could be wrong. Whenever we decide, act, and stand ready to reflect on the results, we are entitled to derive satisfaction.

For instance, if an election is coming up, we should decide whether voting is a worthwhile way to affect the world. If it is, we should determine whom to vote for. We should remain attentive to what happens, because we could have been wrong. Yet as long as we reason and participate–not only in politics but in innumerable other domains–we may and should feel content.

Socrates presents a particularly strong version of this view. Arguing with Protagoras, who espouses some form of relativism or skepticism, Socrates recommends a “science of measurement” (metrike techne) that gradually improves our objective understanding of right or wrong by detecting and overcoming various kinds of bias. This techne, “by showing the truth, would finally cause the soul to abide in peace with the truth, and so save its life” (Prot. 356d). Similarly, near the end of The Republic, Socrates advises that when misfortune comes, we should not “waste time wailing” but “deliberate” about what has happened and then “engage as quickly as possible in correcting” the problem (Rep. 10.604b-9). Here, Socrates likens this approach to a medical art for the soul.

In these passages, Socrates has not proven that knowledge is possible, but he has claimed that lacking knowledge is–and should be–a cause of discomfort, for which the only appropriate cure is to pursue the truth.

David Hume reaches a comparable conclusion in a somewhat different way. Hume reports that when he considers fundamental questions (“Where am I, or what? From what causes do I derive my existence, and to what condition shall I return?”), he is struck by the “manifold contradictions and imperfections in human reason” and feels “ready to reject all belief and reasoning, and [to] look upon no opinion even as more probable or likely than another.”

But this is not a happy conclusion. Instead, “I am confounded with all these questions, and begin to fancy myself in the most deplorable condition imaginable, invironed with the deepest darkness, and utterly deprived of the use of every member and faculty.”

For a time, Hume is able to cure himself of this “philosophical melancholy and delirium” by distracting himself with ordinary life. “I dine, I play a game of backgammon, I converse, and am merry with my friends; and when after three or four hours’ amusement, I would return to these speculations, they appear so cold, and strained, and ridiculous, that I cannot find in my heart to enter into them any farther.” But this mood must also pass, for it is “impossible for the mind of man to rest, like those of beasts, in that narrow circle of objects.” Ultimately, “we ought only to deliberate concerning the choice of our guide. … And in this respect I make bold to recommend philosophy” (Hume, 1739, 1.4.7).

Hume says that he cannot be “one of those sceptics, who hold that all is uncertain, and that our judgment is not in any thing possest of any measures of truth and falshood” because it is naturally impossible to remain in that posture. We make judgments for the same reason we breathe: because that is what we are designed to do. “Neither I, nor any other person was ever sincerely and constantly” a skeptic (1.4). The only way forward is to reason about what is true.

On the other hand, it seems that to hold any strong views about the world is a source of disquiet. Our thoughts rarely influence matters or convince others, and they may prove incorrect. Opinions are sources of frustration. Therefore, members of the ancient Skeptical School advised that we should attain mental peace by convincing ourselves that we do not know what is true.

The Greek verb epekho usually means to “present” or “offer,” but in the Skeptics’ jargon, it meant actively convincing yourself that it is impossible to know what is true, either in a specific case or generally. This “suspension of judgment” (as the related noun, epoche, is usually translated) is an accomplishment, not a passive state. Effort is required to counteract the tendency to make judgments, which Hume claimed was natural. Epoche is not a shrug-of-the-shoulders but a deep realization that knowledge is impossible.

The Skeptics provided lists of techniques or practices (generally translated “modes”) to induce such suspension. One mode that sometimes works for me is to reflect on how fundamentally different everything would seem to a different species. This form of relativism has impressed people as diverse as William Blake; Friedrich Nietzsche (“Man, a small, wild animal species …”: Will to Power, 121); the Zen master Mumon Yamada; and the narrator of Marilynne Robinson’s Gilead, who suggests that the way “this world embraces and exceeds [the family cat] Soapy’s understanding of it” opens the “possibility of an existence beyond this one” (pp. 162-3).

These people have reached divergent conclusions from their shared premise that each kind of sentient being perceives a fundamentally different world. Blake concludes that “everything is holy.” Nietzsche sees “unbelief” as a “precondition of greatness” and “strength of the will” (Will to Power, 615). In Zen, the moral is to be aware of one’s experience. For Robinson’s Rev. Ames (as for St. Augustine), an awareness of human limitations permits a faith in the things not seen that are disclosed in Scripture.

For the Greek Skeptics, the outcome of suspending all belief was calm or equanimity. Sextus Empiricus says, “The skeptics at first hoped [like Socrates] that untroubledness would arise by resolving irregularities of phenomena and of thought, but, not being able to do this, they held back, and when they suspended judgment, untroubledness [ataraxia] came as if by chance, like a shadow after a body. … For this reason, then, we say that untroubledness about opinions is the goal, but about things that we experience by force, moderation” (Outlines of Pyrrhonism, 1:29).

Nearly two millennia later, John Keats praised “Negative Capability, that is when a man is capable of being in uncertainties, Mysteries, doubts, without any irritable reaching after fact and reason” (Keats 1817). For a writer, the easier path is to adopt and promote one’s own views. Negative Capability is a difficult alternative with the potential to cure “irritability.” Keats’ main example is Shakespeare, who depicts myriad characters without divulging (or perhaps even holding) opinions of his own.

The Buddha and Nietzsche on Skepticism

I would like to draw attention to two (or two-and-a-half) thinkers who have discussed skepticism in a somewhat similar and interesting way. These people could be described as skeptics, and they use some of the techniques–those that the ancient Skeptics called “modes”–for undermining beliefs. Yet they explicitly disparage practitioners of skepticism on the basis of character. In other words, although they are skeptical, they think that the main proponents of skepticism lead bad lives.

I mainly refer to the Buddha as described in the Pali Canon (1st century BCE?) and Friedrich Nietzsche. (Ironically, in the relevant passages, Nietzsche disparages the Skeptics as “Greek Buddhists,” but he did not know the Pali texts). I also refer to Michel Foucault, who can be classified as a kind of skeptic and who writes in detail about the ancient Greek philosophical schools, but who interestingly omits any mention of Skepticism.

In the first Long Discourse in the Pali Canon, the Buddha canvasses 62 possible views about metaphysical matters, such as whether the cosmos and the soul are immortal. (These are much like the questions that troubled Hume). After summarizing each position, he repeats a close variant of this formula:

The Realized One understands this: ‘If you hold on to and attach to these grounds for views it leads to such and such a destiny in the next life.’ He understands this, and what goes beyond this. And since he does not misapprehend that understanding, he has realized quenching within himself. Having truly understood the origin, ending, gratification, drawback, and escape from feelings, the Realized One is freed through not grasping (Sīlakkhandhavagga, DN 1, translated by Bhikkhu Sujato).

Among the 62 views are four that sound like Skepticism. The Buddha calls proponents of these four views “endless flip-floppers.” (Maurice Walshe translates the same word as “eel-wrestlers.”) Presented with each major issue, some of them think, “‘I don’t truly understand … If I were to declare [a view], I might be wrong. That would be stressful for me, and that stress would be an obstacle.’ So … whenever they’re asked a question, they resort to verbal flip-flops and endless flip-flops: ‘I don’t say it’s like this. I don’t say it’s like that. I don’t say it’s otherwise. I don’t say it’s not so. And I don’t deny it’s not so.’”

The three other types of “flip-floppers” behave the same way but for different reasons. Some fear that they would feel “desire or greed or hate or repulsion” as a result of holding beliefs, some fear that they would be defeated in a debate, and some are simply “dull and stupid.”

Apart from the dull and stupid, the “flip-floppers” sound like Skeptics, and the Buddha rejects them with the same formula that he uses in response to all the dogmatists: “‘If you hold on to and attach to these grounds for views it leads to such and such a destiny in the next life.'” The Realized One “understands this, and what goes beyond this. …”

In short, the Buddha is a skeptic about Skepticism. Near the end of the Discourse, he elaborates his own view. All opinions, he says, “are conditioned by contact.” In other words, we believe everything we do as a direct and unavoidable result of previous events. Specifically, the events that lead people to hold philosophical opinions are feelings of craving. This is as much the case for the flip-floppers as for everyone else. They have chosen Skeptical opinions because of their feelings, which include an aversion to being moved or criticized. The real path to enlightenment is to see all opinions as fully conditioned, a realization that permits one to escape from the whole fisher’s net of beliefs.

Is the doctrine that everything is conditioned (“dependent origination”) just another metaphysical position that could be explained away on psychological grounds? Or is it self-refuting?

One answer might be that it is self-refuting, but in a good way, a ladder that we can push aside once we have climbed it. Meanwhile, it offers insights about specific positions. When you adopt any given idea and find yourself committed to it, you can ask what prior psychological experience must have generated it and thereby release yourself from it, as a form of therapy.

I think a similar view can be attributed to Nietzsche. His supposed doctrine of Will to Power alleges that all beliefs result from “will,” including the doctrine itself. Like dependent origination, the Will to Power may be self-refuting, but in a good way.

Deeply distrustful of epistemological dogmas, I loved to look now from this window, now from that, was careful not to get stuck in them, considered them harmful – and finally: is it likely that a tool can criticize its own suitability?? – What I was more careful to note was that no epistemological scepticism or dogmatism ever arose without ulterior motives – that it has a secondary value as soon as one considers what basically forced this position (Will to Power, 179).

Nietzsche’s treatment of the ancient Skeptics (whom he calls Pyrrhonists, after their founder, Pyrrho of Elis) parallels the Buddha’s analysis of “flip-floppers.” Nietzsche criticizes the Pyrrhonists psychologically. They manifest a “desire for disbelief” that has base motives. “What inspires the sceptics? Hatred of the dogmatists – or a need for rest, a weariness, as in Pyrrho” (193).

Nietzsche argues that the earliest Greek philosophers had been creative, noble (vornehme), and “fertile.” They weren’t right (nothing is right), but they left behind beautiful works. Pyrrho was “necessarily the last”: he killed this creative tradition. His philosophy was “wise weariness”:

Living among the lowly, lowly. No pride. Living in the common way; honoring and believing what everyone believes. On guard against science and the mind, even everything that puffs you up…. Simple: indescribably patient, carefree, mild, apatheia, or even prautes [a word for ‘mild’ in the New Testament]. A Buddhist for Greece, brought up amid the tumult of the schools; late comer; tired; the protest of the tired against the zeal of the dialecticians; the disbelief of the tired in the importance of all things. He has seen Alexander, he has seen the Indian penitents. To such late comers and refined people, everything low, everything poor, everything idiotic has a seductive effect. That narcotizes: that makes you stretch out (Pascal). On the other hand, in the midst of the crowd and confused with everyone else, they feel a little warmth: they need warmth, these tired people…. Overcoming contradiction; no competition, no desire for distinction: denying the Greek instincts. (Pyrrho lived with his sister, who was a midwife.) Disguising wisdom so that it no longer distinguishes itself; giving it a cloak of poverty and rags; doing the most menial tasks: going to the market and selling milk pigs…. Sweetness; brightness; indifference; no virtues that need gestures: equating oneself even in virtue: ultimate self-conquest, ultimate indifference.

Nietzsche’s epithet for Pyrrho, a “Buddhist for Greece,” may turn out to be true (if Pyrrho was a practicing Buddhist), yet a bit unfair if we attend to the Buddha’s reported criticism of eel-wrestling skeptics.

Michel Foucault can also be seen as a kind of skeptic. He describes his project as “making things more fragile” by showing that the combinations of concepts and practices that shape our world arose recently, have contingent origins, and therefore “can be politically destroyed” (Foucault 1981). He does not offer prescriptions but shakes his readers’ confidence in what they think they know. Foucault’s practices of genealogy and archaeology sound like Skeptical modes and strikingly resemble the Buddha’s technique of demonstrating that beliefs arise contingently.

Especially in his last four years, Foucault turned from critically investigating forms of power-and-knowledge to exploring ways that individuals have cared for themselves. In this period, he focused on the ancient Greek philosophical schools that offered various forms of therapy. Citing Carlos Lévy, Frédéric Gros comments:

Foucault, in fact, takes the Hellenistic and Roman period as the central framework for his historico-philosophical demonstration, describing it as the golden age of the culture of the self, the moment of maximum intensity of practices of subjectivation, completely ordered by reference to the requirement of a positive constitution of a sovereign and inalienable self, a constitution nourished by the appropriation of logoi as so many guarantees against external threats and means of intensification of the relation to the self. And Foucault successfully brings together for his thesis the texts of Epicurus, Seneca, Marcus Aurelius, Musonius Rufus, Philo of Alexandria, Plutarch. [However,] The Skeptics are not mentioned; there is nothing on Pyrrhon and nothing on Sextus Empiricus. Now the Skeptical school is actually as important for ancient culture as the Stoic or Epicurean schools, not to mention the Cynics. Study of the Skeptics would certainly have introduced some corrections to Foucault’s thesis in its generality. It is not, however, the exercises that are lacking in the Skeptics, nor reflection on the logoi, but these are entirely devoted to an undertaking of precisely de-subjectivation, of the dissolution of the subject. They go in a direction that is exactly the opposite of Foucault’s demonstration (concerning this culpable omission, Carlos Lévy does not hesitate to speak of “exclusion”). This silence is, it is true, rather striking. Without engaging in a too lengthy debate, we can merely recall that Foucault took himself for . . . a skeptical thinker. (Foucault 2001, p. 548).

    Foucault’s silence about Skepticism is impossible to explain conclusively but provocative. One possibility is that he did not know how to address thinkers like Sextus because they were too close to his own approach.

    Despite some similarities, it is worth distinguishing these thinkers’ goals. The Buddha of the Pali Canon promises “a complete and permanent end to desire, attachment, and aversion” (Segal 2020, 110)–let’s call that “enlightenment.” The Skeptics offer “untroubledness about opinions” and a moderate response to suffering in this life–a version of worldly happiness. Nietzsche admires greatness. The Will to Power bears the epigraph: “Great things demand that we either remain silent about them or speak with greatness: with greatness, that is, cynically and with innocence.” And Foucault seems to offer some kind of liberation, albeit always partial and provisional.

    For myself, I cannot endorse what I take to be the fundamental goal of the Theravada texts: permanent release from a cycle of literal rebirth into suffering. Although there is an enormous amount to learn from these works, their core purpose doesn’t work for me. My view is closer to Rev. Ames’: “this life has its own mortal loveliness” (Robinson, 184).

    Foucault was engaged in something deeply important, and it’s tragic that he wasn’t able to continue his exploration of how to cultivate the self. We are left with impressive critiques and hints of a positive program focused on the inner life. I’d like to think Foucault would have written explicitly and interestingly about Skepticism, but he did not have that opportunity. Foucault also recommends that “Montaigne should be reread … as an attempt to reconstitute an aesthetics and an ethics of the self” (Foucault 2011, p. 251), but he was not able to offer that re-reading.

    As for Nietzsche: I agree with him that most cultural and intellectual figures, and some political leaders, who leave a significant creative legacy are firmly committed to opinions of their own. They are often “hedgehogs” (those who know one thing, even if it’s cynicism or nihilism), not “foxes” (those who know many things and are prone to change their theories). I admit that I sometimes resent the disadvantage that arises from being an equivocal fox or an eel-wrestler–from being chronically unsure. However, at least for me, the point is not to be great (which is surely out of reach), but to live reasonably well. And if I can make progress on that, “what helps me might help another.”

    Thus the questions reduce to these: Should we want worldly happiness in the form of untroubledness, and if so, does suspension of belief help us get there?

    How circumstances have changed

Jonathan Barnes, an expert on Sextus (with whom I studied many decades ago), finds the Skeptics’ therapeutic promises implausible and even “reprehensible” (Barnes 2000, xxxviii). He acknowledges that “Skepticism is offered as a recipe for happiness…. Sextus thinks that we should read Sextus in order to become happy.” However, he writes, “I find it difficult to take this sort of thing seriously” (p. xxx).

    Barnes offers an example: “Suppose that I suspect that I have a fatal disease: unsure, I worry, I become depressed; and in order to restore my peace of mind I decide to investigate — I visit my doctor.” Unfortunately, the doctor is a Skeptic, so he persuades Jonathan that it is impossible to tell whether or not he has a fatal illness. He “lets me leave … in the very state of uncertainty which induced me to enter it.” This result shows that the purported therapist is “a quack” (xxi).

We might think that it’s actually better to suspend belief about whether one has an incurable, fatal disease than to learn that one is dying. This question seems debatable. But I would like to draw attention to the genuine value of–in this case–medical knowledge. Sextus is said to have practiced as a physician, but there was rarely much that an ancient doctor could do for you. Matters were not much better in 1811-14, when John Keats received his medical training. Nowadays, however, a doctor has a pretty good chance of determining what ails you and may be able to assist, if not with a cure then at least with effective palliatives.

    It’s not that a modern doctor, as an individual, has far more knowledge and better perception than Sextus had. Rather, medical science is deeply collaborative and cumulative. Your physician sends your blood samples to a lab, which uses protocols and instruments developed in other labs, based on previous findings from still others. Much of the physician’s individual knowledge is about how to navigate this human system. Socrates’ metrike techne has become a group effort.

    Trust is essential: not only trust in one’s own senses and reason (which Skepticism challenges), but also trust in other people and institutions. This is the case not only for medicine but also for engineering, academic research, government statistics, journalism, market data, and other forms of organized knowledge.

    By displaying appropriate amounts of trust in cumulative human knowledge, we can find partial solutions to human suffering. Even if we agree with the Buddha that suffering always remains, compassion compels us to do the best we can. Blanket skepticism interferes with our ability to help ourselves and others. That is what happens when people who doubt medical science or professional journalism or government statistics refuse to do things like take vaccines or use currency or participate in politics.

    Yet we must always remember that we and others can be wrong and should build the possibility of error and bias into our institutions and processes. Robert K. Merton saw “organized skepticism” as one of the defining features of science, and constitutional democracy offers mechanisms for identifying and challenging errors.

At the personal level, we might learn from both the Theravada texts and the Greek Skeptics about the drawbacks of identifying too strongly with our own ideas. A moderate kind of Skepticism encourages us not to cling to what we believe, because clinging is a cause not only of dogmatism but also of disquiet: it increases the odds that we will be frustrated when our ideas fail to persuade.

    One of the Skeptics’ techniques, “The Mode of Dispute,” attempts to attain peace by observing the unresolved disagreements among previous thinkers. The Buddha also practices this mode. At one point, he is asked, “The very same teaching that some say is ‘ultimate,’ others say is inferior. Which of these doctrines is true, for they all claim to be an expert?” The Buddha replies that sages “take no side among factions.”

    Peaceful among the peaceless, equanimous, they don’t grasp when others grasp. Having given up former defilements and not making new ones, not swayed by preference, nor a proponent of dogma, that wise one is released from views, not clinging to the world, nor reproaching themselves. They are remote from all things seen, heard, or thought. With burden put down, the sage is released: not formulating, not abstaining, not longing (“The Longer Discourse on ‘Arrayed for Battle,'” trans. by Bhikkhu Sujato.).

    Here I would emphasize the Buddha’s attitudinal stance. The takeaway is not to be skeptical about everything but rather to avoid clinging to one’s views, submitting to mere preferences, or reproaching oneself for one’s errors and failures to change the world. The text recommends a mild detachment, which is compatible with trying to determine the best thing to do and acting accordingly. This, I think, is the form of skepticism that encourages a tranquil mind.

    [I revised and expanded this post on 8/27/24.]

Sources: I translate Montaigne from the 1598 Middle French edition (“Ce n’est pas icy ma doctrine, c’est mon estude : & n’est pas la leçon d’autruy, c’est la mienne. … Ce qui me sert, peut aussi par accident servir à un autre”); the Greek texts from Project Perseus; and Nietzsche from the Max Braun 1917 edition (which seems to omit some valuable material). Also quoting: David Hume, A Treatise of Human Nature, 1739; Marilynne Robinson, Gilead: A Novel, Farrar, Straus and Giroux; John Keats, letter to his brothers (Dec. 21, 1817); Jonathan Barnes, introduction to Sextus Empiricus’ Outlines of Scepticism, translated by Julia Annas and Jonathan Barnes (Cambridge 2000); Michel Foucault, “What Our Present Is” (1981), from The Politics of Truth; Gros’ note to Foucault’s The Hermeneutics of the Subject: Lectures at the Collège de France (1981-1982), Palgrave, 2001; and Seth Zuiho Segall, Buddhism and Human Flourishing (Palgrave 2020). The Pali translations are by Bhikkhu Sujato via the amazing SuttaCentral.net.

    See also: Cuttings version 2.0: a book about happiness; does doubting the existence of the self tame the will?


    Montaigne and Buddhism

    Michel de Montaigne (1533-1592) was deeply influenced by the ancient philosophical school called Skepticism, which he first studied directly in the form of a 1562 translation of Sextus Empiricus’ Outlines. Sextus had called himself a follower of the first Skeptic, Pyrrho of Elis (ca. 360-270 BCE).

Ancient authors report that Pyrrho went to India with Alexander the Great and studied there with Indian philosophers. Christopher Beckwith makes the boldest case that Pyrrho was in fact a Buddhist, and thus that Greco-Roman Skepticism was an offshoot of Buddhism. For his part, Montaigne called Skepticism “the wisest school of philosophy” (see below).

    I cannot assess Beckwith’s thesis that Pyrrho was a Buddhist. However, I have found parallels between Montaigne’s writing and a specific text from the Buddhist Pali Canon, The Atthaka Vagga or “Octet Chapter.” Because material from this work has also been traced to the Greco-Buddhist kingdoms of what is now Afghanistan and Pakistan, it may be especially old and close to what Pyrrho might have learned when he went to India.

    It is possible to doubt that this early text–when read on its own–really captures what we should call “Buddhism.” The Octet Chapter emphasizes the value of renouncing all kinds of beliefs and presents the model of a sage as one who avoids concepts and arguments. I don’t see anything in this text about nirvana or perfect enlightenment, but rather an argument for a certain way of living as a sage. It sounds a bit like the doctrine of the (non-Buddhist) teacher Sañjaya Belatthiputta, who is presented as misguided in an influential long discourse from the same Pali Canon (DN 2).*

    Still, any category as abstract as Buddhism can be defined in many ways, and arguably this text belongs to it. In fact, some have seen the Octet Chapter as presciently Buddhist, foreshadowing the Mahayana School. And whether or not this text is Buddhist, it is also consistent with Skepticism. In fact, there could have been some reciprocal influence from Greco-Roman Skepticism back to Mahayana.

    The most interesting questions, for me, are not about who influenced whom or where various ideas began, but rather how we should live now. To that end, I present some characteristically quotable sentences from Montaigne in parallel with verses from the Octet Chapter of the Pali Canon.

The Montaigne passages below are quoted from The Complete Essays, translated by M.A. Screech (Penguin); the parallel verses are from An Anthology of Discourses: A Refreshing Translation of the Suttanipāta, translated by Bhikkhu Sujato.

Montaigne: Most pleasures, they say, tickle and embrace us only to throttle us… If a hangover came before we got drunk we would see that we never drank to excess: but pleasure, to deceive us, walks in front and hides her train (p. 275).
Octet Chapter: If a mortal desires sensual pleasure and their desire succeeds, [most people] definitely become elated, having got what they want. But for that person in the throes of pleasure, aroused by desire, if those pleasures fade, it hurts like an arrow’s strike (Snp 4.1).

Montaigne: My business, my art, is to live my life (p. 425). Now since we are undertaking to live, without companions, by ourselves, let us make our happiness depend on ourselves (p. 269).
Octet Chapter: The chains of desire, the bonds of life’s pleasures are hard to escape, for one cannot free another (Snp 4.2). Understanding the teaching, [the wise] are independent (Snp 4.15).

Montaigne: The learned do arrange their ideas into species and name them in detail. I, who can see no further than practice informs me, have no such rule, presenting my ideas in no categories and feeling my way – as I am doing here now (pp. 1221-1222).
Octet Chapter: “Purity is spoken of not in terms of view,” said the Buddha to Māgaṇḍiya, “learning, knowledge, or precepts and vows; nor in terms of that without view, learning, knowledge, or precepts and vows. Having relinquished these, not adopting them, peaceful, independent, one would not pray for another life” (Snp 4.9).

Montaigne: We are never ‘at home’: we are always outside ourselves. Fear, desire, hope, impel us towards the future; they rob us of feelings and concern for what now is, in order to spend time over what will be – even when we ourselves shall be no more. ‘Calamitosus est animus futuri anxius’ [Wretched is a mind anxious about the future – Seneca] (p. 11).
Octet Chapter: Greedy, fixated, infatuated by sensual pleasures, [many people] are incorrigible, habitually immoral. When led to suffering they lament, “What will become of us when we pass away from here?” That’s why a person should train in this life (Snp 4.2).

Montaigne: That man will be happy and master of himself who every day declares, ‘I have lived. Tomorrow let Father Jove fill the heavens with dark clouds or with purest light’… [Horace] Let your mind rejoice in the present: let it loathe to trouble about what lies in the future (p. 43).
Octet Chapter: Rid of attachment to the future, [wise people] don’t grieve for the past. A seer of seclusion in the midst of contacts is not led astray among views (Snp 4.10).

Montaigne: So too for our souls: we must therefore educate and train them for their encounter with that adversary, death; for the soul can find no rest while she remains afraid of him. But once she does find assurance she can boast that it is impossible for anxiety, anguish, fear or even the slightest dissatisfaction to dwell within her. And that almost surpasses our human condition (p. 101).
Octet Chapter: Rid of desire for both ends, having completely understood contact, free of greed, doing nothing for which they’d blame themselves, the wise don’t cling to the seen and the heard. Having completely understood perception and having crossed the flood, the sage, not clinging to possessions, with dart plucked out, living diligently, does not long for this world or the next (Snp 4.2).

Montaigne: That is why it is equally mad to weep because we shall not be alive a hundred years from now and to weep because we were not alive a hundred years ago (p. 102).
Octet Chapter: Short, alas, is this life; you die before a hundred years. Even if you live a little longer, you still die of old age. People grieve over belongings, yet there is no such thing as permanent possessions. Separation is a fact of life; when you see this, you wouldn’t stay living at home (Snp 4.6).

Montaigne: When my convictions make me devoted to one faction, it is not with so violent a bond that my understanding becomes infected by it (p. 1144). I am firmly attached to the sanest of the parties, but I do not desire to be particularly known as an enemy of the others beyond what is generally reasonable (p. 1145).
Octet Chapter: Desiring debate, [many people] plunge into an assembly, where each takes the other as a fool. Relying on others they state their contention, desiring praise while claiming to be skilled. Addicted to debating in the midst of the assembly, their need for praise makes them nervous. But when they’re repudiated they get embarrassed; upset at criticism, they find fault in others (Snp 4.8). If, maintaining that theirs is the “ultimate” view, a person makes it out to be highest in the world, then they declare all others are “lesser”; that’s why they’re not over disputes. … [Instead, the wise person] does not grasp any view—how could anyone in this world judge them? They don’t make things up or promote them, and don’t subscribe to any of the doctrines. The brahmin has no need to be led by precept or vow; gone to the far shore, one such does not return (Snp 4.5).

Montaigne: Now the Pyrrhonians make their faculty of judgement so unbending and upright that it registers everything but bestows its assent on nothing. This leads to their well-known ataraxia: that is a calm, stable rule of life, free from all the disturbances (caused by the impress of opinions, or of such knowledge of reality as we think we have) which give birth to fear, acquisitiveness, envy, immoderate desires, ambition, pride, superstition, love of novelty, rebellion, disobedience, obstinacy and the greater part of our bodily ills. In this way, they even free themselves from passionate sectarianism, for their disputes are mild affairs and they are never afraid of the other side having its say (pp. 560-561).
Octet Chapter: A mendicant, peaceful, quenched, never boasts “thus am I” of their precepts. They have a noble nature, say those who are skilled, who have no pretensions regarding anything in the world. For one who formulates and creates teachings, and promotes them despite their defects, if they see an advantage for themselves, they become dependent on that, relying on unstable peace. It’s not easy to get over dogmatic views adopted after judging among the teachings. That’s why, among all these dogmas, a person rejects one teaching and takes up another. The cleansed one has no formulated view at all in the world about the different realms. Having given up illusion and conceit, by what path would they go? They are not involved (Snp 4.3). A person who has given up all judgments creates no conflict in the world (Atthakavagga, shorter discourse on ‘arrayed for battle’).

Montaigne: Pyrrhonist philosophers, I see, cannot express their general concepts in any known kind of speech; they would need a new language: ours is made up of affirmative propositions totally inimical to them – so much so that when they say ‘I doubt’, you can jump down their throats and make them admit that they at least know one thing for certain, namely that they doubt. … (Scepticism can best be conceived through the form of a question: ‘What do I know?’ – Que sçay-je, words inscribed on my emblem of a Balance.) (pp. 590-1).
Octet Chapter: [Question:] How do happiness and suffering disappear? [Buddha’s answer:] Without normal perception or distorted perception; not lacking perception, nor perceiving what has disappeared. Form disappears for one proceeding thus; for judgments due to proliferation spring from perception. … Knowing that these states are dependent, and knowing what they depend on, the inquiring sage, having understood, is freed, and enters no dispute.

Montaigne: ‘No reason but has its contrary,’ says the wisest of the Schools of Philosophy (p. 694, quoting Sextus; Screech notes that Montaigne had this epigram inscribed in his library).
Octet Chapter: One who knows, having comprehended the truth through the knowledges, does not visit various teachers, being of vast wisdom. … The brahmin has stepped over the boundary; knowing and seeing, they adopt nothing. Neither in love with passion nor besotted by dispassion, there is nothing here they adopt as the ultimate (Snp 4.4). That’s why they’ve gotten over disputes, for they see no other doctrine as best (Snp 4.13).

Montaigne: I reckon that it is as injudicious to set our minds against natural pleasures as to allow them to dwell on them (p. 1256). When I dance, I dance. When I sleep, I sleep; and when I am strolling alone through a beautiful orchard, although part of the time my thoughts are occupied by other things, for part of the time too I bring them back to the walk, to the orchard, to the delight in being alone there, and to me (p. 1258).
Octet Chapter: Guarded in these things, walking restrained in the village, they wouldn’t speak harshly even when provoked. Eyes downcast, not footloose, devoted to absorption, they’d be very wakeful (Snp, Atthakavagga, “With Sāriputta”).

To me, the most likely difference is in the last pairing. In his final essay, “Of Experience,” the elderly Montaigne expresses genuine enthusiasm for the experiences of his present life, whereas the Pali text recommends guarded “wakefulness.” When Montaigne writes about bringing his thoughts “back to the walk, to the orchard, to the delight in being alone there, and to me,” he sounds like a practitioner of mindfulness, but not very much like the author(s) of the Octet Chapter. A little facetiously, we could say that Montaigne was more of a Buddhist than the author(s) of this early Buddhist text. Or we could just acknowledge that he was also an Epicurean.

*Thanissaro Bhikkhu considers the evidence that this text is very early and philosophically distinct from the rest of the Pali Canon, but he largely disagrees. See also: Montaigne the bodhisattva?; some basics; the fetter; Cuttings version 2.0: a book about happiness; what should we pay attention to?


    using a model to explain a single case

    Charles Sanders Peirce introduced the logic of what he called “abduction” — a complement to both deduction and induction — with this example:

    The surprising fact, C, is observed;
    But if A were true, C would be a matter of course,
    Hence, there is reason to suspect that A is true.

    At least since Harry Frankfurt in 1958, many readers have been skeptical. Can’t we make up an infinite number of premises that could explain any surprising fact?

    For instance, Kamala Harris has gained in the polls compared to Joe Biden. If it were true that voters generally prefer female presidential candidates, then her rise would be a “matter of course.” But it is a mistake to infer that Harris has gained because she is a woman. Other explanations are possible and, indeed, more plausible.

    Note that “voters prefer women candidates” is an empirical generalization. Generalizations cannot be derived from any single case. If that is what abduction means, then it seems shaky. Its only role might be to suggest hypotheses that should then be tested with representative samples or controlled experiments.

    But what if A (the premise) is not an empirical generalization but rather a model? For instance, a model might posit that Harris’ current position in the polls is the combined result of eight different factors, some of them general (voters usually follow partisan cues) and some of them quite unrepeatable (the incumbent president has suddenly bowed out).

    Positing a model to explain a single case has risks of its own. Perhaps we add no insight by contriving an elaborate model just to fit the observed reality. And we might be tempted to treat the various components of the model as general patterns and apply them elsewhere, even though one case should give us no basis for generalizing.

    But let’s look at this example from a different perspective–a pragmatic one, as Peirce would recommend. After all, Peirce calls his topic “Abductive Judgment” (Peirce 1903), suggesting a connection to practical reason or phronesis.

    The question is what should (someone) do? For instance, a month ago, should Joe Biden have dropped out and endorsed Harris? Right now, should Harris accentuate her gender or try to balance it with a male vice-presidential candidate?

    Inductive logic might offer some insights. Research suggests that the choice of vice-president has never affected the outcome of a presidential election, and this general inference would suggest that Harris needn’t pay attention to the gender of her VP. But induction cannot answer other key questions, such as what to do when you replace the nominee 100 days before the election. (There is no data on this matter because it hasn’t happened before.)

    Besides, various factors can interrelate. The general pattern that vice-presidents do not matter might be reversed in a situation where the nominee had herself been the second person on the ticket until last week.

    And the important questions are inescapably normative. For Harris, one good goal is to win the election, but she must attend to other values as well. For instance, I think she should adopt positions that would benefit working-class voters of all races. Possibly this would help her win by restoring some of Biden’s working-class coalition from 2020. Polling data would help us assess that claim. But I favor a worker-oriented strategy for reasons of justice, and I think the important question is how (not whether) to campaign that way.

    Models of social phenomena typically incorporate descriptive elements (Harris is down by two points today), causal claims (Trump is still benefitting from a minor convention bump), and normative premises (Harris must win)–all combined for the purpose of guiding action.

    Arguably, we cannot do better than abduction when we are trying to decide what to do next. Beginning with a surprising fact, C (and almost anything can be seen as “surprising”), we must come up with something, A, that we can rely on to guide our next steps. A should not be a single sentence, but rather a model composed of various elements.

    It is worthwhile to consider evidence from other cases that may validate or challenge components of A. But it is not possible to prove or disprove A. As the pioneering statistician Georg Rasch said, “Models should not be true, but it is important that they are applicable, and whether they are applicable for any given purpose must of course be investigated. This also means that a model is never accepted finally, only on trial.”

If a model cannot be true, why should we make it explicit? It lays out what we are assuming so that we can test the assumptions as we act. It promotes learning from error. And it can help us to hold decision-makers accountable. When evaluating leaders, we should not assess the outcomes, which are beyond anyone’s control, but rather the quality of their models and their ability to adjust in the light of new experience.
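To make this concrete, here is a minimal, purely illustrative sketch (mine, not Peirce’s or Rasch’s) of what an explicit model of a single case might look like if written out in code. Every name and number in it is a hypothetical placeholder; the point is only that descriptive facts, causal claims, and normative premises sit side by side, that a recommendation depends on all of them, and that any component can be revised “on trial” as new evidence arrives.

```python
from dataclasses import dataclass, field


@dataclass
class CampaignModel:
    """A toy model of a single case: descriptive facts, causal claims,
    and normative premises combined to guide a decision. All values
    are illustrative placeholders, not real data."""

    # Descriptive elements: current observations about the case.
    facts: dict = field(default_factory=lambda: {
        "polling_margin": -2.0,       # hypothetical: candidate trails by two points
        "days_until_election": 100,
    })
    # Causal claims: assumptions about what is driving the observations.
    causal_claims: list = field(default_factory=lambda: [
        "opponent still benefits from a minor convention bump",
        "voters usually follow partisan cues",
    ])
    # Normative premises: goals the decision must serve.
    goals: list = field(default_factory=lambda: [
        "win the election",
        "adopt positions that benefit working-class voters",
    ])

    def recommend(self) -> str:
        """Derive a next step from the model as a whole. The rule is
        deliberately crude; the point is that the recommendation depends
        on every component and should be revisited if any component fails."""
        if self.facts["polling_margin"] < 0 and "win the election" in self.goals:
            return "campaign on worker-oriented economic policy"
        return "maintain current strategy"

    def revise(self, key: str, value: float) -> None:
        """Update a descriptive element when new evidence arrives.
        The model is never accepted finally, only on trial."""
        self.facts[key] = value


if __name__ == "__main__":
    model = CampaignModel()
    print(model.recommend())              # -> campaign on worker-oriented economic policy
    model.revise("polling_margin", 3.0)   # hypothetical new poll: the deficit has reversed
    print(model.recommend())              # -> maintain current strategy
```

Writing the model down this explicitly does not make it true, but it exposes each assumption to challenge and makes it clear what would have to change if events surprise us again.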

    Sources: Peirce, C.S. 1903. Lectures on Pragmatism, Lecture 1: Pragmatism: The Normative Sciences; Frankfurt, Harry G. “Peirce’s notion of abduction.” The Journal of Philosophy 55.14 (1958): 593-597. See also: choosing models that illuminate issues–on the logic of abduction in the social sciences and policy; modeling social reality; different kinds of social models


    what should we pay attention to?

In “Your Mind is Being Fracked” (May 31, 2024), Ezra Klein interviews Princeton professor D. Graham Burnett. Their main topic is how companies manipulate our attention for profit–to our severe detriment.

    Klein and Burnett also contrast two senses of “attention.” One is a focus on a practical task, leading to action. The other is an openness to experience or to another person that feels more like quiet waiting. These two forms of attention can conflict. The latter is especially at risk in a world of busy work-schedules and portable electronic devices.

    At one point, Klein refers to the “debate that we’re having right now about smartphones and kids.” He acknowledges that there is an unresolved debate about the critique of smartphones that Jonathan Haidt and others are making; “the research is very complicated and you can fairly come to a view on either end of it.” But for Klein, the effects of heavy smartphone use are not really the point. He says,

    If you convinced me that my kids scroll on their phones for four hours a day, had no outcome on their mental health at all — it did not make them more anxious — it did not make them more depressed — it would change my view on this not at all. I just think, as a way of living a good life, you shouldn’t be staring at your phone for four hours a day.

    And yet, I also realize the language of society right now and parenting doesn’t have that much room for that. And I think we have a lot of trouble talking about just what we think a good life would be. Not a life that leads to a good job, not a life that leads to a high income, but just the idea, which I think we were more comfortable talking in terms of at other points in history, that it is better to read books than to not read books ….

    As someone who spends about 3.5 hours a day on my smartphone and who reads somewhat fewer books than I once did, I agree that it is better to read books. Either my attention is being “fracked” (forcibly extracted for profit) or I am making unwise choices, or both.

    I would define the benefits of reading much as Klein does later in the interview. A carefully constructed, lengthy written work affords us access to someone else’s thinking, thus allowing us to escape from our own limited selves. As my former colleague Maryanne Wolf said in a previous Klein podcast, “deep readers” display signs of absorption, empathy, and creativity. This mental state may have positive outcomes later, but that’s not really the point. Our life consists of time. What matters is the quality of it. Being absorbed, empathetic, and creative is good. Spending our time in a state of distraction and anxiety is not.

    But here are some complications …

    Klein is rightly concerned about a simplistic ideal of free choice that blocks us from asking whether some choices are better than others, either for ourselves or for our children. On the other hand, as Klein might acknowledge, choice is important. People differ, and we know things about our own needs and interests that others do not know. Also, we have the right to be the authors of our own lives. If someone forcibly took away my iPhone and ordered me into the library, I would have a good reason to be angry.

John Stuart Mill famously argued that individuals should have the liberty to allocate their own time, and that if they are exposed to the higher things, they will freely choose them. If Mill was right, then excellence does not conflict with freedom. Liberal education liberates us by giving us the opportunity to choose higher things.

    Mill’s predecessor, Jeremy Bentham, had said that poetry was just as valuable as the folk game of “push-pin” (illustrated above by James Gillray). But Mill responded that people who have the opportunity to learn poetry will not want to waste their time on such trivial table games.

    Mill may not be right. I was given an expensive and extensive education, yet I am addicted–noticeably, although not overwhelmingly or irretrievably–to my phone. Sure, I sometimes use it for worthy purposes, including episodes of deep reading on its small screen, but I also play Stormbound enough to compete in the Platinum League. Actually, Stormbound has the same basic logic as push-pin–I try to get my tokens over the other player’s baseline, much like the Duke of Queensberry in Gillray’s cartoon.

    In short, offering everyone experiences with higher things may not work. Look at me, with my Oxford doctorate in literae humaniores–I spend my day playing Stormbound.

    But we should be open-minded and thoughtful when we make value-judgments. The game of push-pin actually doesn’t sound so bad. It was a safe contest of skill between human competitors–maybe a way to sustain relationships.

    Meanwhile, Bentham was suspicious of poetry. He saw poets as prone to lies and exaggeration. If we think that Bentham was wrong–poetry is better than push-pin–we owe an account of its value. What is so good about poetry and so bad about games? And is all poetry really worth our time?

    I think I can address these questions. Poetry is language that is especially carefully constructed, with particular attention to its formal qualities. As such, it is particularly well suited to promote absorption, assuming that you really attend to it and learn how to analyze it. Reading poetry requires experience, particularly because poems tend to refer to previous poems, and it’s only by reading many of them that you can really begin to see how they operate. Therefore, it is advanced reading that is worthy, not just any reading. As Wallace Stevens says, “Poetry is one of the enlargements of life.”

    Games are also worthwhile, particularly when they involve people who know each other and are in physical proximity, so that the players can learn and care about one another and exercise their bodies as well as their minds. I’m for push-pin! In contrast, my smartphone games pit me against the AI or against completely anonymous human opponents, and as such, they offer no human interaction. Besides, they are carefully designed to pull me back in for another round. In these respects, they are worse than poetry. (Yet I sometimes find my mind wandering into worthy topics while I play, so maybe that isn’t so bad.)

The main point here is that our evaluation of various activities should be nuanced and critical, not prejudiced by assumptions about what counts as the higher pursuits.

    For me at least, the epitome of an absorbing experience that takes me out of my own mind is a classic novel. Because of its length and careful construction, it retains attention. Because it is fictional, it is truly the product of someone else’s thought. Because it is mere text on paper, it requires and promotes imagination. And because I am not a literary critic, I don’t get anything concrete from reading a novel; its value is intrinsic.

    Thus we might want to pursue activities that are as much as possible like reading classic novels. However, from his unorthodox Marxist perspective in the 1930s, the great critic Walter Benjamin disparaged novels in favor of “stories.” By the latter word, he meant folktales and other oral narratives that emerge from the masses. Benjamin preferred stories because they are communal and they elicit responses from their listeners, including impromptu additions. In contrast, novels are constructed by solo authors who control the whole narrative, including its end. The relationship between the novelist and the reader is private and consumeristic: I buy the experience that James Joyce manufactured.

    If we applied Benjamin’s argument to the present day, it would offer no justification for playing Stormbound. But it might justify spending time interacting with other people on a social network (ignoring, for a moment, the problem of corporate ownership, which Benjamin would decry). Benjamin would see the attention demanded by a novel as individualistic and consumerist.

    Here is a different take on somewhat similar issues. In one of the oldest of all Buddhist texts, “The Fruit of Contemplative Life” from the Pali Canon, the Buddha tries to teach a very bad king, Ajatasattu–who is troubled by guilt for having murdered his own father and usurped the throne–to follow a monk’s contemplative path. One recommendation is “sense restraint”:

    And how does a mendicant guard the sense doors? When [monks see] a sight with their eyes, they don’t get caught up in the features and details. If the faculty of sight were left unrestrained, bad unskillful qualities of covetousness and displeasure would become overwhelming. For this reason, they practice restraint, protecting the faculty of sight, and achieving its restraint. When they hear a sound with their ears … When they smell an odor with their nose … When they taste a flavor with their tongue … When they feel a touch with their body … When they know an idea with their mind, they don’t get caught up in the features and details. If the faculty of mind were left unrestrained, bad unskillful qualities of covetousness and displeasure would become overwhelming. For this reason, they practice restraint, protecting the faculty of mind, and achieving its restraint. When they have this noble sense restraint, they experience an unsullied bliss inside themselves. That’s how a mendicant guards the sense doors.

    DN 2, translated by Bhikkhu Sujato, on suttacentral.net

    This passage surprises me a little because I would have thought that “getting caught up in … features and details” is how we achieve attention. Our task, when we read a poem by Wallace Stevens, is precisely to analyze its features and details. I suppose there’s a difference between “getting caught up” in something–so that you drift into “covetousness and displeasure”–versus attending to it with openness and equanimity. But the question remains whether complicated things like poems and novels are appropriate objects of attention or whether we would be better off with bare walls and our breath.

Speaking of the Pali Canon: I struggle to attend to it because the narration is very repetitive. Before King Ajatasattu finds his way to the Buddha, he first meets six misguided sages, and each of those episodes is narrated with precisely the same text, except that each guru’s name and a sentence about his mistaken doctrine are substituted at a key point.

    These discourses emerged as stories, not as novels. The medium was oral, meant for memorization and communal experience, not literature constructed for an individual reader. However, I happen to be an individual reader who sometimes opens translations of the Pali Canon–as well as many other kinds of texts–on my smartphone. “Unskillful qualities of covetousness and displeasure” arise rather quickly in my mind, not because I dislike the text but because I am unable to concentrate on it.

    We are not going back to oral recitations or baskets of palm leaves with handwritten text, nor should we want to. However, the technologies of the present have costs as well as benefits, and we are just beginning to learn how to deal with them.

    See also: Kieran Setiya on midlife: reviving philosophy as a way of life; are we forgetting how to read?; some basics


    democracy’s sovereignty

    Human beings have invented a vast and diverse set of institutions that coordinate behavior and allocate resources.

These forms include disciplined organizations headed by leaders, voluntary groups that strive to operate by consensus, procedures for voting directly on policies, elected bodies that deliberate and vote, courts that decide cases and controversies (with or without juries, which may or may not be randomly selected), bureaucracies characterized by hierarchies of defined positions, markets with or without firms (which may themselves be mini-dictatorships, bureaucracies, or co-ops), markets for capital, informal norms defined by a widespread assumption that everyone else will behave in certain ways, scientific disciplines organized by peer-review and replication, and networks that newcomers can join by agreeing to relay messages to other members.

    This list is not meant to be comprehensive and is not closed. Several important forms are no more than 300 years old, and the last one originated within the past half century. In the future, new forms will be invented.

    We should not view current institutions complacently, since many originated in injustice and still perpetuate bad outcomes. To name just one example, the Dutch East India Company, founded in 1602, pioneered essential features of capital markets, including shares that could be resold on the world’s first stock market and a board accountable to shareholders. Its major activities included conquest, ethnic cleansing, and slavery. And I do not mean to cite a corporation alone, since governmental forms are also rooted in cruelty.

    But I do start with the assumption that each of these forms has been invented and has survived because it serves significant functions and offers distinctive advantages. Also, each one can be improved. Progress is by no means inevitable, but we can identify and enhance changes that are beneficial. A certain kind of arrogance is required to assume that any one of these forms is simply bad and should be dispensed with.

    In that case, we must decide which institutional forms should be used for each social purpose. And a second-order question: which institution(s) should decide this matter?

    A decision about which institution should play any given role typically looks like a law (although it might technically be a constitutional provision, a decree, or a regulation). For example, to have an independent, private press or else a governmental media system requires a law. Likewise for health insurance.

    Which institutions can yield such laws? Not a market, which simply doesn’t offer products that look like laws. Nor are people invited to reason about the role of various other institutions when they are participating in market exchanges.

A king, dictator, high priest, or junta can decide which institution will do what. Instead of grabbing all power for himself, a ruler may favor courts or markets (think of Frederick the Great or Augusto Pinochet). Regardless, we do not want rulers to make these decisions, for two major reasons. First, they cannot be trusted to decide which institutions work best for all when they stand to profit from making them work mainly for themselves. Second, even in the rare case of a benign despot, he cannot know enough about how each institution affects all the people of the society to be able to decide wisely.

In the US system, courts sometimes decide which kinds of institutions may do what. In the 1905 Lochner decision, the Supreme Court notoriously gave control over wages and working conditions to companies rather than the state. When judges seem to be deciding such cases on the basis of their own views (a charge against the current Supreme Court), then they appear no different from juntas. The special advantage of a court is not allocating responsibilities among institutions but interpreting and applying laws created by other institutions to adjudicate specific cases.

    Science might be able to decide which institution works best for each purpose–if this turns out to be a tractable research question. Coase’s Theorem is supposed to be a result of research that proves the superiority of competitive markets for many purposes; some versions of Marxism are supposed to prove the deep flaws of capitalism.

    I view these claims as useful inputs to reasoning about which institutions are best for various purposes. Research should be taken seriously and should develop further. I doubt it will ever resolve the discussion, because the choice of institutions involves conflicting values and interests, not merely empirical claims, and also because the world keeps changing as a result of people’s uncontrollable behavior. Any institution that is neatly designed according to a theory will soon be subverted by people who understand it and “hack” its design.

    If the decision about which institution should do what looks like a law, and we don’t want rulers, judges, or specialized experts to make such laws, then the best candidate is a democracy. As Knight and Johnson (2014) argue, a democracy elicits views about the role of various other institutions, it gives everyone an equal opportunity to affect the decision, and it permits continued reflection once a decision is made.

    One does not need optimistic assumptions about individuals’ wisdom or their tendency to learn from other people to believe that our best available way to decide the role of other institutions is to have an ongoing debate in civil society, then to empower elected, accountable representatives to vote, and then to debate the results and reconsider the decisions.

    In fact, a reasonably healthy democracy seems to be one in which political competition is about the role of other institutions. For example, things would be going better if US voters were thinking about whether the government of the United States should channel resources into “green” technologies or else leave the allocation of capital to markets. I mention this example because the 2024 election is not about the pros or cons of Biden’s channeling more than a trillion dollars into green industries, but about who counts as a real American.

    The above argument is deeply inspired by Knight and Johnson. Paul Aligica (2014) dissents in part. He sees all the different kinds of institutions as more or less on par within a polycentric order. He argues that institutions should and do grow and change as a result of decentralized decisions made by many actors across the society as a whole. In that sense, the people rule through all the institutions. Aligica defends an elected, democratic government but emphasizes that it cannot assess and influence the other institutions wisely unless they develop robustly and independently and demonstrate successes and failures.

    I think Aligica makes valid points, but the gap between him and Knight and Johnson is not very wide, and I’m inclined to endorse the priority of democracy as long as we remember (as Knight and Johnson do) that a plurality of institutions is an asset for democratic government.

Sources: Jack Knight and James Johnson, The Priority of Democracy: Political Consequences of Pragmatism (Princeton University Press, 2014); Paul Aligica, Institutional Diversity and Political Economy: The Ostroms and Beyond (New York: Oxford University Press, 2014). See also: polycentricity: the case for a (very) mixed economy; modus vivendi theory; what if people’s political opinions are very heterogeneous?; China teaches the value of political pluralism, etc.


    listeners, not speakers, are the main reasoners

    Robert Brandom offers an influential and respected account of reasoning, which I find intuitive (see Brandom 2000 and other works). At the same time, a large body of psychological research suggests that reasoning–as he defines it–is rare.

    That could be a valid conclusion. Starting with Socrates, philosophers who have proposed various accounts of reason have drawn the conclusion that most people don’t reason. Just for example, the great American pragmatist Charles Sanders Peirce defines reason as fearless experimentation and doubts that most people are open to it (Peirce 1877).

    Brandom’s theory could support a similarly pessimistic conclusion. But that doesn’t sit well with me, because I believe that I observe many people reasoning. Instead, I suggest a modest tweak in his theory that would allow us to predict that reasoning is fairly common.

Brandom argues that any claim (any thought that can be expressed in a sentence) has both antecedents and consequences: “upstream” and “downstream” links “in a network of inferences.” To use my example, if you say, “It is morning,” you must have reasons for that claim (e.g., the alarm bell rang or the sun is low in the eastern sky) and you can draw inferences from it, such as, “It is time for breakfast.” In this respect, you are different from an app that notifies you when it’s morning or a parrot that has been reliably trained to say “It is morning” at sunrise. You can answer the questions, “Why do you believe that?” and “What does that imply?” by offering additional sentences.
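As a very rough illustration of this structure (a sketch of my own, not Brandom’s formalism), one could picture each claim as a node with upstream and downstream links, where “reasoning” means being able to traverse those links on demand–which is exactly what the notification app and the parrot cannot do:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Claim:
    """A toy rendering (not Brandom's own account) of a claim as a node
    in a network of inferences, with upstream reasons and downstream
    consequences."""
    sentence: str
    reasons: List["Claim"] = field(default_factory=list)       # upstream links
    implications: List["Claim"] = field(default_factory=list)  # downstream links

    def why(self) -> List[str]:
        """Answer 'Why do you believe that?' with further sentences."""
        return [r.sentence for r in self.reasons]

    def what_follows(self) -> List[str]:
        """Answer 'What does that imply?' with further sentences."""
        return [c.sentence for c in self.implications]


def parrot(sentence: str) -> str:
    """The parrot or notification app, by contrast, can only emit the sentence."""
    return sentence  # no upstream or downstream links to traverse


if __name__ == "__main__":
    morning = Claim(
        "It is morning",
        reasons=[Claim("The alarm bell rang"),
                 Claim("The sun is low in the eastern sky")],
        implications=[Claim("It is time for breakfast")],
    )
    print(morning.why())            # ['The alarm bell rang', 'The sun is low in the eastern sky']
    print(morning.what_follows())   # ['It is time for breakfast']
    print(parrot("It is morning"))  # just the sentence, with nothing behind or ahead of it
```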

(By the way, an alarm clock app cannot reason, but an artificial neural network might. As of 2019, Brandom considered it an open question whether computers will “participate as full-fledged members of our discursive communities or … form their own communities which would confer content” [Frápolli & Wischin 2019].)

    Whenever we make a claim, we propose that others can also use it “as a premise in their reasoning.” That means that we implicitly promise to divulge our own reasons and implications. “Thus one essential aspect of this model of discursive practice is communication: the interpersonal, intra-content inheritance of entitlement to commitments.” In sum, “The game of giving and asking for reasons is an essentially social practice.” Reasoning in your own head is a special case, in which you basically simulate a discussion with real other people.

    The challenge comes from a lot of psychological research that finds that beliefs are intuitive, in the specific sense that we don’t know why we think them. They just come to us. One seminal work is Nisbett and Wilson (1977), which has been cited nearly 18,000 times, often in studies that add empirical support to their view.

    According to this theory, when you are asked why you believe what you just said, you make up a reason–better called a “rationalization”–for your intuition. Regardless of what you intuit, you can always come up with upstream and downstream connections that make it sound good. In that sense, you are not really reasoning, in Brandom’s sense. You are justifying yourself.

    Indeed, the kinds of discussions that tend to be watched by spectators or recorded for posterity often reflect sequences of self-justifications rather than reasoning. I recently wrote about the scarcity of examples of real reasoning in transcripts and recordings of official meetings. As Martin Buber wrote in The Knowledge of Man (as pointed out to me by my friend Eric Gordon):

By far the greater part of what is called conversation among men would be more properly and precisely described as speechifying. In general, people do not really speak to one another, but each, although turned to the other, really speaks to a fictitious court of appeal whose life consists of nothing but listening to him.

    Some grounds for optimism come from Mercier and Sperber (2017). They argue that people are pretty good at assessing the inferences that other people make in discussions. Although we may invent rationalizations for what we have intuited, we can test other people’s rationalizations and decide whether they are persuasive.

    Furthermore, our intuitions are not random or rooted only in fixed characteristics, such as demographic identities and personality. Our intuitions have been influenced by the previous conversations that we have heard and assessed. For instance, if we hold an invidious prejudice, it did not spring up automatically but resulted from our endorsing lots of prejudiced thoughts that other people linked together into webs of belief. And it is possible–although difficult and not common–for us to change our intuitions when we decide that some inferences are invalid. Forming and revising opinions requires attentive listening, critical but also generous.

    The modest tweak I suggest in Brandom’s view involves how we understand the “game of giving and asking for reasons.” We might assume that the main player is the person who gives a reason: the speaker. The other parties are waiting for their turns to play. But I would reverse that model. Giving reasons is somewhat arbitrary and problematic. The main player is the one who listens and judges reasons. A speaker is basically waiting for a turn to do the most important task, which is listening.

    This view also suggests some tolerance for events dominated by “speechifying.” To be sure, we should prize genuine conversations in which people jointly try to decide what is right, and in which one person’s reasons cause other people to change their minds. This kind of relationship is the heart of Buber’s thought, and I concur. But it is unreasonable to put accountable leaders on a public stage and expect them to have a genuine conversation. None of the incentives push them in that direction. They are pretty much bound to justify positions they already held. Although theirs is not a conversation that would satisfy Buber, it does have two important functions: it allows us to judge people with authority, and it gives us arguments that we can evaluate as we form our own views.

    Again, if we focus on the listener rather than the speaker, we may see more value in an event that is mostly a series of speeches.


Sources: Robert B. Brandom, Articulating Reasons: An Introduction to Inferentialism (Harvard 2000); Charles S. Peirce, “The Fixation of Belief,” Popular Science Monthly 12 (November 1877), 1-15; María José Frápolli and Kurt Wischin, “From Conceptual Content in Big Apes and AI, to the Classical Principle of Explosion: An Interview with Robert B. Brandom” (2019); Richard E. Nisbett and Timothy D. Wilson, “Telling more than we can know: Verbal reports on mental processes,” Psychological Review 84.3 (1977); and Hugo Mercier and Dan Sperber, The Enigma of Reason (Harvard University Press, 2017). See also: looking for deliberative moments; Generous Listening Symposium; how intuitions relate to reasons: a social approach; and how the structure of ideas affects a conversation
