important findings about the persuasive power of facts

There is a huge body of research that suggests that people are not very susceptible to good arguments. Apparently, we believe things for unexamined reasons, cherry-pick evidence to support our intuitive beliefs, and minimize the significance of inconvenient evidence.

These findings contribute to a general skepticism about people’s capacity for democracy, and I fear that this skepticism is self-reinforcing. If we presume that humans cannot reason well, why would we try to build institutions that promote reasoning? Only half jokingly, I sometimes say that the theme of current social science is: people are stupid and they hate each other.

But I also argue that at least some of this research employs methods that are biased against discovering rational thought. In particular, if you ask random samples of people disconnected survey questions that interest you (not them) and then use techniques such as factor analysis to find latent patterns, you will, indeed, often discover that people are stupid and hate each other. More prosaically, you will develop scales for latent variables like knowledge or tolerance that yield poor scores. But such methods may overlook the idiosyncratic ways that reasons influence individuals on the topics that matter to them.

Of all people, those who believe in false conspiracy theories are generally seen as the least susceptible to good reasons, and previous efforts to convince them have often failed. However, in a 2024 Science article, Thomas H. Costello, Gordon Pennycook, and David G. Rand report results of an intervention that substantially reduced people’s commitment to conspiracy theories, not only in the short term, but also two months later.

In this study, holders of conspiracy theories wrote about why they held their beliefs, and then an AI bot held a conversation with them in which it supplied reliable information directly relevant to the specific factual premises of each respondent. For instance, if a person believed that 9/11 was an “inside job” because Building 7 collapsed even though no plane hit it (see Wood and Douglas 2013), the AI might provide engineering information about Building 7. Many people were persuaded.

These results are consistent with a study of conversations with canvassers who succeeded in persuading many voters “by listening for individual voters’ … moral values and then tailoring their appeals to those moral values” (Kalla, Levine, and Broockman 2022). The two studies differ in that one used people and the other, an AI bot; and one emphasized facts while the other focused on values. But both results point to a model in which each person holds various beliefs that are more-or-less connected to other beliefs as reasons, forming a network. Beliefs may be normative or empirical–they function very similarly. Discourse involves stating one’s beliefs and their connections to other beliefs that serve as premises or implications.

People actually do a lot of this and are relatively good at assessing the rigor of such conversations when they observe them (Mercier and Sperber 2017). However, many of our methods are biased against discovering such reasoning (Levine 2024a and Levine 2024b), leaving us with the mistaken impression that we are a bunch of idiots incapable of self-governance.


Sources: Costello, T. H., Pennycook, G., & Rand, D. G. (2024). Durably reducing conspiracy beliefs through dialogues with AI. Science, 385(6714); Wood, M. J., & Douglas, K. M. (2013). “What about building 7?” A social psychological study of online discussion of 9/11 conspiracy theories. Frontiers in Psychology, 4, 409; Kalla, J. L., Levine, A. S., & Broockman, D. E. (2022). Personalizing moral reframing in interpersonal conversation: A field experiment. The Journal of Politics, 84(2), 1239-1243; Mercier, H., & Sperber, D. (2017). The Enigma of Reason (Harvard University Press); Levine, P. (2024a). People are not Points in Space: Network Models of Beliefs and Discussions. Critical Review, 1–27; Levine, P. (2024b). Mapping ideologies as networks of ideas. Journal of Political Ideologies, 29(3), 464-491.

Cuttings: Ninety-Nine Essays About Happiness

Cuttings is a book in progress that consists of 99 essays about the inner life: about suffering, happiness, compassion, and related themes. I first posted each of the essays on this blog, which is 22 years old today and has accumulated more than 2,400 posts. I’ve selected the contents of Cuttings carefully from this archive, revised most of the essays substantially, and arranged them so that there is a small and meaningful step between each one. In the last three years, I have written some new posts to fill gaps that I perceive in the overall structure. I believe that the architecture is now pretty solid.

Michel de Montaigne is the hero; I seek to emulate his skeptical, curious, humane mind. Like Montaigne, I talk about books, but my library is different from his. Cuttings includes short essays about Montaigne himself, early Buddhist texts, Greek philosophers, Keats and Blake, Hopkins and Stevens, phenomenologists from Husserl to Merleau-Ponty, Arendt and Benjamin, and Hilary Mantel and Anne Carson, among others.

I am releasing the third edition today–a substantial revision from last year, but not yet the final one. You can find the book here as a Google doc. I have also posted it as an .epub file, which will open directly in many e-readers. Alternatively, you could download the .epub to a computer or phone and then use this Amazon page to send it with one click to your own Kindle.

As always, comments are welcome and really the best reward for me.

on defining movements and categorizing people: the case of 68ers

In 1968: Radical Protest and its Enemies (HarperCollins, 2018), Richard Vinen describes the ideals and mores of people he calls “68ers.” (He discusses the USA, France, Germany, and Britain and acknowledges that he omits Mexico, Czechoslovakia, and other parts of the world where the events of 1968 were probably more consequential.) For him, the 68ers include Black Panthers in Oakland, Maoist professors in Parisian grandes écoles, striking French industrial workers in shrinking factories, Berlin squatters, and more.

How should we define such a meaningful but heterogeneous category? A similar challenge may emerge when we try to define any religious or aesthetic movement or historical period. This is not only a scholarly but also a practical issue, because words like “68er”–or “expressionist,” or “fundamentalist”–can be used to motivate or to criticize. We should be able to assess whether such words apply.

One option is to apply a general scheme. For instance, 68ers were on the left. That statement invokes the ideological spectrum that originated in the French Revolution. But 68ers often differentiated themselves from the Old Left, and both sides in that debate claimed to be further left than the other.

One could define the spectrum independently and then use the definition to settle the question of how far left the 68ers stood–but surely they did not agree with each other. Nor would they all endorse anyone else’s definition of the ideological spectrum. They devoted considerable attention to debating issues (with their opponents and among themselves) such as race, sexuality, violence, Israel, and voting. Where specific views of these matters fall on the left-right spectrum seems hard to establish without taking a substantive political position.

Another option is to use an exogenous characteristic that is directly observable to define the category. For example, surely 68ers were college students during the year 1968–hence, early Baby Boomers. But most college students were not 68ers (by any definition of that term), and some classic 68ers were considerably older or had never gone to college. Even the founders of Students for a Democratic Society were as old as 32 (Vinen, p. 30), and many important 68ers were industrial workers.

A third option is to use concrete behavior to define the category. Maybe 68ers are those who participated in mass protests during the year 1968. But the largest protest in Paris was in support of de Gaulle and the regime. Some classic 68ers never literally protested. Probably few thought that the act of protesting defined their movement. And “1968” was not constrained by the calendar year. Vinen thinks that most of Britain’s ’68 took place during the 1970s. The “hard hat riot”–in favor of the Vietnam War–took place a bit late (May 1970) but is still part of Vinen’s narrative.

A common approach in the social sciences would be to treat “68er” as a latent construct that can be detected statistically. Imagine a survey with numerous items: “Do you have a poster of Che on your wall?” “Would you abolish prisons?” “Do you live in a commune?” “Do you like the main characters in Bonnie and Clyde?” After many putative 68ers had completed the survey, researchers would use techniques like factor analysis to detect patterns. The data might show that an individual’s aggregate score on a small set of the questions defines the category of interest. Then we would have a reliable “68er scale.”
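Here is a minimal sketch of that scale-building step in Python. The survey items echo the hypothetical questions above, the responses are simulated rather than real, and the one-factor structure is an assumption built into the simulation:

```python
# A sketch of the latent-construct approach: fit a one-factor model to
# simulated survey responses and inspect which items load on the factor.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

items = ["che_poster", "abolish_prisons", "lives_in_commune",
         "likes_bonnie_and_clyde", "ice_cream_flavor"]

# Simulate 500 respondents: four items driven by one latent trait,
# plus one unrelated item (the ice-cream question).
latent = rng.normal(size=500)
responses = np.column_stack(
    [latent + rng.normal(scale=0.6, size=500) for _ in range(4)]
    + [rng.normal(size=500)]
)

fa = FactorAnalysis(n_components=1, random_state=0).fit(responses)

# High-loading items would make up the "68er scale"; the unrelated
# item should load near zero.
for item, loading in zip(items, fa.components_[0]):
    print(f"{item:24s} loading = {loading:+.2f}")

# A respondent's scale score could then be the mean of the high-loading items.
```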

I think that kind of method is helpful, but it cannot be presented as innocent of concepts. We might ask about communes and Che Guevara because we already have a loose mental model of a 68er. We wouldn’t ask people their favorite flavors of ice cream. If we did, and the answer happened to correlate with the whole scale, we would treat that result as a curiosity, not part of the definition of a 68er. But if we asked about food and found out that 68ers ate lentils, that would be meaningful. Evidently, we must already know something about what a 68er is as we draft the survey. What is already in our minds?

My own view would build on Wittgenstein’s notion of a family resemblance. In Philosophical Investigations (§67), he writes, “the various resemblances between members of a family: build, features, colour of eyes, gait, temperament, etc. etc. overlap and criss-cross in the same way.” He’s arguing that many useful words point to groups of objects that need not all share any single feature but that tend to share features from a list, much as a surname can point to a cluster of people who tend to display some of the same physical characteristics. (“Lots of the Joneses have curly red hair.”) Statistical procedures like cluster analysis can point to these resemblances.
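As a minimal sketch of that statistical idea (with invented people and traits), hierarchical clustering on binary trait vectors can group individuals who overlap pairwise even though no single trait is shared by everyone in the cluster:

```python
# A sketch of cluster analysis surfacing a family resemblance: persons A, B,
# and C overlap pairwise in traits but share no single trait among all three.
# People and traits are invented for illustration.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

people = ["A", "B", "C", "D", "E", "F"]
# Columns: anti_war, communal_living, folk_music, suburban_home, golf
X = np.array([
    [1, 1, 0, 0, 0],
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
])

# Jaccard distance counts shared traits as evidence of resemblance.
dist = pdist(X.astype(bool), metric="jaccard")
labels = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(dict(zip(people, labels)))  # A, B, C form one cluster; D, E, F the other
```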

But we know why physical features recur in families: DNA. Why would certain musical choices, political opinions, recreational drugs, hairstyles, and career choices cluster to form the group that we identify as 68ers? Is there an underlying cause?

I think of it this way: Each person holds many beliefs and values. Ideas come and go, and individuals hold them with various degrees of confidence. But ideas are not independent of each other. People think one thing and conclude something else as a result, thus linking two of their beliefs with a reason. For example, they might start by liking Joan Baez and come to oppose the Vietnam War, or vice versa. But there are many ways to put ideas together, and few do it in just the same way. You could hold a strongly anti-authoritarian premise that takes you to anarchism or to capitalism. You could begin by opposing the Vietnam War and find yourself against capitalism or against the state. (I’ve known some Boomer libertarians for whom Vietnam was the formative experience.)

Thus a group like the 68ers (and many others) consists of a cluster of people with a family resemblance, but the reasons that connect their individual beliefs and values tend to recur, and they recur in explicable ways. In that sense, a satisfactory account of the group is a list of many of their common specific beliefs and values, plus a discussion of the ways that they tend to fit together. The resulting map will not describe everyone, but it will capture some of the common patterns and explain on what basis members of the group disagree with each other.

See also: Levine, P. (2024). People are not Points in Space: Network Models of Beliefs and Discussions. Critical Review, 1–27. https://doi.org/10.1080/08913811.2024.2344994

we treat facts and values alike when we reason

Years ago, Justin McBrayer found this sign hanging in his son’s second-grade classroom:

Opinion: What someone thinks, feels, or believes.

Fact: Something that is true about a subject and can be tested or proven.

This distinction is embedded in significant aspects of our culture and society. For example, science aspires to be about facts, not opinions. And values are often assigned to the category of opinions. But this distinction doesn’t describe the way people actually reason.

After you utter any standard sentence, another person can ask two questions: “Why did you say that?” And, “What does it imply?” Any standard sentence has premises that entail it and consequences that it, in turn, implies. Any sentence is in the middle of a network of related thoughts, and you can be asked to make those relationships explicit (Brandom 2000).

Imagine a rooster who wakes you up by crowing at dawn, and a parent who wakes her child in time for school. Both have brains, perceptions, and desires. But only the parent shares a language with another party. As a result, the child can ask, “Why are we waking up now?” or “What do I have to do next?” These questions probe what lies upstream and downstream of the sentence: “Wake up!”

Upon receiving an answer, the child can ask further questions. “Why do I have to go to school?” “Why is learning good?” The parent’s patience for this kind of discussion is bound to be finite, but the very structure of language implies that it could go on virtually forever.

The same process works for sentences that are about facts and for those that are more about values. A child asks, “Why do I have to go to school?” The answer, “Because it is 8 am,” is factual. The answer, “Because it’s important to learn” involves values. Either response can, in turn, prompt further “why” questions that can be answered.

The positivist assumption that values are opinions rather than facts suggests that values are conversationally inert, connected to the speaker but not to any other sentences. When you say that you value something, a positivist understands this as a fact about yourself, not as a claim that you could justify. However, we do justify value-claims. We state additional sentences about what implies our values or what our values imply.

In real life, people sooner or later choose to halt the exchange of reasons. “Why do you think that?” “I saw it with my own eyes.” “Why do you believe your eyes?” At this point, most people will opt out of the conversation, and I don’t blame them.

Note, however, that the respondent probably could give reasons other than “I saw it with my eyes.” Statements typically have multiple premises, not just one. Further, a person could explain why we typically believe what we see. There is much to be said about eyes, mental processes connected to vision, and so on. I realize that discussing such matters is for specialists, and most people should not bother going into them. But the point is that the network of reasons could almost always be extended further, if one chose.

And the same is true for value-claims. “Why do you support that?” “Because it’s fair.” “What makes it fair?” “It treats everyone equally.” “Why do you favor equality?” At this point, many people may say, “I just do,” which is rather like saying, “I saw it with my own eyes.” But again, the conversation could continue. There is a great deal to be said about premises that imply the value of equality and consequences that equality entails if it’s defined in various specific ways. By spelling out more of this network, we make ourselves accountable for our positions.

Drawing a hard distinction between opinions/values and facts would artificially prevent us from connecting our value-laden claims to other sentences, which we naturally–and rightly–do.

Source: Robert B. Brandom, Articulating Reasons: An Introduction to Inferentialism (Harvard, 2000). See also: listeners, not speakers, are the main reasoners; how intuitions relate to reasons: a social approach; we are for social justice, but what is it?; making our models explicit; introducing Habermas; and “Just teach the facts.”

[Additional note, Oct 18: David Hume originated the fact-value distinction. For him, reasoning was essentially about perceiving things. The mind formed representations, especially visualizations. As Hilary Putnam writes (p. 15), Hume had a “pictorial semantics.” But you can’t see values. Nor can you see the self or causation. If we use visual metaphors–lenses, paintings, or images–for the mind, then it can’t seem to reason about values.

Nowadays, we think of reasoning mainly in terms of symbols that are combined and manipulated. The reigning metaphor is not a lens but a computer. We absolutely can compute sentences that include values. It’s true that a mind that manipulates and combines symbols must ultimately touch the world beyond itself, and there remains a role for sensation. Computers have input devices. But the connection between a mind and the world cannot be a matter of separate and distinct representations, since many things that we reason about–not only values, but also neutrinos, diseases, and economies–do not appear to our eyes. Source: Hilary Putnam, The Collapse of the Fact/Value Dichotomy and Other Essays (Harvard, 2002).]


a collective model of the ethics of AI in higher education

Hannah Cox, James Fisher, and I have published a short piece in an outlet called eCampus News. The whole text is here, and I’ll paste the beginning below:

AI is difficult to understand, and its future is even harder to predict. Whenever we face complex and uncertain change, we need mental models to make preliminary sense of what is happening.

So far, many of the models that people are using for AI are metaphors, referring to things that we understand better, such as talking birds, the printing press, a monster, conventional corporations, or the Industrial Revolution. Such metaphors are really shorthand for elaborate models that incorporate factual assumptions, predictions, and value-judgments. No one can be sure which model is wisest, but we should be forming explicit models so that we can share them with other people, test them against new information, and revise them accordingly.

“Forming models” may not be exactly how a group of Tufts undergraduates understood their task when they chose to hold discussions of AI in education, but they certainly believed that they should form and exchange ideas about this topic. For an hour, these students considered the implications of using AI as a research and educational tool, academic dishonesty, big tech companies, attempts to regulate AI, and related issues. They allowed us to observe and record their discussion, and we derived a visual model from what they said.

We present this model [see above] as a starting point for anyone else’s reflections on AI in education. The Tufts students are not necessarily representative of college students in general, nor are they exceptionally expert on AI. But they are thoughtful people active in higher education who can help others to enter a critical conversation.

Our method for deriving a diagram from their discussion is unusual and requires an explanation. In almost every comment that a student made, at least two ideas were linked together. For instance, one student said: “If not regulated correctly, AI tools might lead students to abuse the technology in dishonest ways.” We interpret that comment as a link between two ideas: lack of regulation and academic dishonesty. When the three of us analyzed their whole conversation, we found 32 such ideas and 175 connections among them.

The graphic shows the 12 ideas that were most commonly mentioned and linked to others. The size of each dot reflects the number of times each idea was linked to another. The direction of each arrow indicates which factor caused or explained another.
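As a rough sketch of this coding step (with invented idea labels, not the actual codes from the students’ discussion), the linked pairs can be assembled into a directed graph, and the most frequently linked ideas can then be selected and sized by their number of links:

```python
# A sketch of turning coded comments into a directed idea network.
# The idea labels and links are invented examples, not the study's data.
import networkx as nx

# Each tuple is one coded comment: (explaining idea, explained idea).
coded_links = [
    ("lack of regulation", "academic dishonesty"),
    ("lack of regulation", "big tech power"),
    ("AI as research tool", "faster learning"),
    ("academic dishonesty", "distrust between students and faculty"),
    ("lack of regulation", "academic dishonesty"),  # repeated links add weight
]

G = nx.DiGraph()
for source, target in coded_links:
    if G.has_edge(source, target):
        G[source][target]["weight"] += 1
    else:
        G.add_edge(source, target, weight=1)

# Node "size" = the number of times each idea was linked to another.
link_counts = dict(G.degree(weight="weight"))

# Keep the most frequently linked ideas (the article shows the top 12).
top_ideas = sorted(link_counts, key=link_counts.get, reverse=True)[:12]
print({idea: link_counts[idea] for idea in top_ideas})
```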

The rest of the published article explores the content and meaning of the diagram a bit.

I am interested in the methodology that we employed here, for two reasons.

First, it’s a form of qualitative research–drawing on Epistemic Network Analysis (ENA) and related methods. As such, it yields a representation of a body of text and a description of what the participants said.

Second, it’s a way for a group to co-create a shared framework for understanding any issue. The graphic doesn’t represent their agreement but rather a common space for disagreement and dialogue. As such, it resembles forms of participatory modeling (Voinov et al., 2018). These techniques can be practically useful for groups that discuss what to do.

Our method was not dramatically innovative, but we did something a bit novel by coding ideas as nodes and the relationships between pairs of ideas as links.

Source: Alexey Voinov et al., “Tools and methods in participatory modeling: Selecting the right tool for the job,” Environmental Modelling & Software, vol. 109 (2018), pp. 232-255. See also: what I would advise students about ChatGPT; People are not Points in Space; different kinds of social models; social education as learning to improve models


People are not Points in Space

Newly published: Levine, P. (2024). People are not Points in Space: Network Models of Beliefs and Discussions. Critical Review, 1–27. https://doi.org/10.1080/08913811.2024.2344994 (Or a free pre-print version)

Abstract:

Metaphors of positions, spectrums, perspectives, viewpoints, and polarization reflect the same model, which treats beliefs—and the people who hold them—as points in space. This model is deeply rooted in quantitative research methods and influential traditions of Continental philosophy, and it is evident in some qualitative research. It can suggest that deliberation is difficult and rare because many people are located far apart ideologically, and their respective positions can be explained as dependent variables of factors like personality, partisanship, and demographics. An alternative model treats a given person’s beliefs as connected by reasons to form networks. People disclose the connections among their respective beliefs when they discuss issues. This model offers insights about specific cases, such as discussions conducted on two US college campuses, which are represented here as belief-networks. The model also supports a more optimistic view of the public’s capacity to deliberate.


An Association as a Belief Network and Social Network

This is a paper that I presented at the Midwest Political Science Association on April 6, 2024. I hope to reproduce this study with another organization before publishing the results as a comparison. I am open to investigating groups that you may be involved with–a Rotary Club like the one in this study, a religious congregation, or something else. Please contact me if you are interested in exploring such a study.

Abstract

A social network is composed of individuals who may have various relationships with one another. Each member of such a network may hold relevant beliefs and may connect each belief to other beliefs. A connection between two beliefs is a reason. Each member’s beliefs and reasons form a more-or-less connected network. As members of a group interact, they share some of their respective beliefs and reasons with peers and form a belief-network that represents their common view. However, either the social network or the belief network can be disconnected if the group is divided.

This study mapped both the social network and the belief-network of a Rotary Club in the US Midwest. The Club’s leadership found the results useful for diagnostic and planning purposes. This study also piloted a methodology that may be useful for social scientists who analyze organizations and associations of various kinds.


An Association as a Belief Network and Social Network

I will present a paper entitled “An Association as a Belief Network and Social Network” at next week’s Midwest Political Science Association meeting (remotely). This is the paper.

Abstract:

A social network is composed of individuals who may have various relationships with one another. Each member of such a network may hold relevant beliefs and may connect each belief to other beliefs. A connection between two beliefs is a reason. Each member’s beliefs and reasons form a more-or-less connected network. As members of a group interact, they share some of their respective beliefs and reasons with peers and form a belief-network that represents their common view. However, either the social network or the belief network can be disconnected if the group is divided.

This study mapped both the social network and the belief-network of a Rotary Club in the US Midwest. The Club’s leadership found the results useful for diagnostic and planning purposes. This study also piloted a methodology that may be useful for social scientists who analyze organizations and associations of various kinds.

Two illustrative graphs …

Below is the social network of the organization. A link indicates that someone named another person as a significant influence. The size of each dot reflects the number of people who named that individual. The network is connected, not balkanized. However, there are definitely some insiders, who have lots of connections, and a periphery.

The belief-network is shown above this post. The nodes are beliefs held by members of the group. A link indicates that some members connect one belief to another as a reason, e.g., “I appreciate friendships in the club” and therefore, “I enjoy the meetings” (or vice-versa). Nodes with more connections are larger and placed nearer the center.

One takeaway is that members disagree about certain matters, such as the state of the local economy, but those contested beliefs do not serve as reasons for other beliefs, which prevents the group from fragmenting.
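Here is a rough sketch of how such an influence network can be assembled and summarized, using made-up member names rather than the club’s data:

```python
# A sketch of the social-network side: who named whom as an influence,
# whether the network is connected, and who the "insiders" are.
# The names are invented for illustration.
import networkx as nx

# Each tuple means: the first person named the second as a significant influence.
nominations = [
    ("Ana", "Raj"), ("Bea", "Raj"), ("Carl", "Raj"),
    ("Raj", "Bea"), ("Dee", "Bea"), ("Eli", "Carl"),
]

G = nx.DiGraph()
G.add_edges_from(nominations)

# Connected (ignoring direction), or balkanized into separate camps?
print("connected:", nx.is_weakly_connected(G))

# Insiders are named by many others; the periphery is named rarely or not at all.
times_named = dict(G.in_degree())
print(sorted(times_named.items(), key=lambda kv: -kv[1]))
```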

I would be interested in replicating this method with other organizations. I can share practical takeaways with a group while learning more from the additional case.

See also: a method for analyzing organizations


people are not points in space

This is the video of a lecture that I gave at the Institute H21 symposium in Prague last September. The symposium was entitled Democracy in the 21st Century: Challenges for an Open Society, and my talk was: “People Are Not Points in Space: Opinions and Discussions as Networks of Ideas.” I’m grateful for the opportunity to present and for the ideas of other participants and organizers.

My main point was that academic research currently disparages the reasoning potential of ordinary people, and this skepticism discourages efforts to protect and enhance democratic institutions. I think the low estimate of people’s capacity is a bias that is reinforced by prevalent statistical methods, and I endorse an alternative methodology.

See also: individuals in cultures: the concept of an idiodictuon; Analyzing Political Opinions and Discussions as Networks of Ideas; a method for analyzing organizations

can AI help governments and corporations identify political opponents?

In “Large Language Model Soft Ideologization via AI-Self-Consciousness,” Xiaotian Zhou, Qian Wang, Xiaofeng Wang, Haixu Tang, and Xiaozhong Liu use ChatGPT to identify the signature of “three distinct and influential ideologies”: ‘Trumplism’ (entwined with US politics), ‘BLM (Black Lives Matter)’ (a prominent social movement), and ‘China-US harmonious co-existence is of great significance’ (propaganda from the Chinese Communist Party). They unpack each of these ideologies as a connected network of thousands of specific topics, each one having a positive or negative valence. For instance, someone who endorses the Chinese government’s line may mention US-China relationships and the Nixon-Mao summit as a pair of linked positive ideas.

The authors raise the concern that this method would be a cheap way to predict the ideological leanings of millions of individuals, whether or not they choose to express their core ideas. A government or company that wanted to keep an eye on potential opponents wouldn’t have to search social media for explicit references to their issues of concern. It could infer an oppositional stance from the pattern of topics that the individuals choose to mention.
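As a toy illustration of the kind of inference at stake (not the authors’ actual pipeline; the topics, valences, and scoring rule here are invented), an ideological signature could be stored as topics with valences and compared against the pattern of topics a person mentions:

```python
# A toy illustration only: inferring a stance from topic-mention patterns.
# The signature, topics, and scoring rule are invented for this sketch and
# do not reproduce the method in the paper discussed above.

# An ideology "signature": topics mapped to the valence (+1 or -1) with which
# adherents typically mention them.
signature = {
    "US-China relationship": +1,
    "Nixon-Mao summit": +1,
    "trade sanctions": -1,
}

def stance_score(mentions: dict[str, int]) -> float:
    """Average agreement between a person's topic valences and the signature:
    +1 means consistent alignment, -1 means consistent opposition."""
    shared = [t for t in mentions if t in signature]
    if not shared:
        return 0.0
    return sum(1 if mentions[t] == signature[t] else -1 for t in shared) / len(shared)

print(stance_score({"Nixon-Mao summit": +1, "trade sanctions": -1}))  # 1.0
print(stance_score({"Nixon-Mao summit": -1, "trade sanctions": +1}))  # -1.0
```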

I saw this article because the authors cite my piece, “Mapping ideologies as networks of ideas,” Journal of Political Ideologies (2022): 1-28. (Google Scholar notified me of the reference.) Along with many others, I am developing methods for analyzing people’s political views as belief-networks.

I have a benign motivation: I take seriously how people explicitly articulate and connect their own ideas and seek to reveal the highly heterogeneous ways that we reason. I am critical of methods that reduce people’s views to widely shared, unconscious psychological factors.

However, I can see that a similar method could be exploited to identify individuals as targets for surveillance and discrimination. Whereas I am interested in the whole of an individual’s stated belief-network, a powerful government or company might use the same data to infer whether a person would endorse an idea that it finds threatening, such as support for unions or affinity for a foreign country. If the individual chose to keep that particular idea private, the company or government could still infer it and take punitive action.

I’m pretty confident that my technical acumen is so limited that I will never contribute to effective monitoring. If I have anything to contribute, it’s in the domain of political theory. But this is something–yet another thing–to worry about.

See also: Mapping Ideologies as Networks of Ideas (talk); Mapping Ideologies as Networks of Ideas (paper); what if people’s political opinions are very heterogeneous?; how intuitions relate to reasons: a social approach; the difference between human and artificial intelligence: relationships