When we finally start talking about gun control, what should we say?


I love policy discussions, but the demands for a policy discussion on gun control after today’s shootings in Newtown are terribly wrong-headed.

The problem is that demanding a policy discussion is not the same thing as having a policy discussion. At this point, we’re just talking about talking about gun control. It’s all “mention” and no “use.” It’d be nice if folks would actually start proposing laws. Like: limits on magazine size. Ammo taxes. Closing the gun show loophole. Or even…

Prohibition.

I’d love to talk about gun prohibition. (Notice, this isn’t even the same policy debate as “gun control.”) Unfortunately, if we start talking about gun prohibition, then we will be forced to confront how badly prohibition is working in other markets. There are three hundred and ten million guns in the US. (Yes! 310,000,000!) What does prohibition look like under those circumstances?

Reflecting on that question, ask yourself this: how many people will be killed in no-knock police raids aimed at rooting out the black market in guns? Will they be mostly white or mostly black? (Notice that gun control laws have tended to be stricter in majority-black areas than in majority-white areas. Both DC and Chicago, the battlegrounds for the major 2nd Amendment cases, are disproportionately black.) How many of those killed by police will be kids? How many kids’ deaths will be prevented?

On reflection, I suspect that many gun control regimes and all possible paths to gun prohibition are more likely to increase the number of people hurt and killed by guns. So when we do finally start talking about gun control and gun prohibition, let’s be very, very careful.

  • Something must be done.
  • Prohibition is something. 
  • ∴ ????

Also, let’s remember that the violent crime rate, including gun crimes, is the lowest it’s been in 20 years. That doesn’t make what happened today any easier to handle, but perhaps it will allow us to focus on what happened, and the people it happened to, instead of replaying Jon Stewart’s Monday night monologue. Something terrible has happened. It didn’t happen to you or me, so we have the ability to ask whether it could have been prevented. We should ask whether it could have been prevented. But we should also ask: at what cost? Then we should follow that calculation of lives lost and lives saved wherever it leads.

Craig Whitney’s July New York Times Op-Ed on the Aurora shooting is still apropos here:

Liberals should accept that the only realistic way to control gun violence is not by keeping guns out of the hands of as many Americans as possible, but by keeping guns out of the hands of people we all agree should not have them.

Read the whole thing.

Cultural Cognition is Not a Bias

Some recent posts by Dan Kahan on the subject of “cultural cognition” deserve attention:

(Cultural cognition refers to the tendency of individuals to conform their beliefs about disputed matters of fact (e.g., whether global warming is a serious threat; whether the death penalty deters murder; whether gun control makes society more safe or less) to values that define their cultural identities.)

There’s no remotely plausible account of human rationality—of our ability to accumulate genuine knowledge about how the world works—that doesn’t treat as central individuals’ amazing capacity to reliably identify and put themselves in intimate contact with others who can transmit to them what is known collectively as a result of science.

Indeed, as I said at the outset, it is not correct even to describe cultural cognition as a heuristic. A heuristic is a mental “shortcut”—an alternative to the use of a more effortful, and more intricate mental operation that might well exceed the time and capacity of most people to exercise in most circumstances.

But there is no substitute for relying on the authority of those who know what they are talking about as a means of building and transmitting collective knowledge. Cultural cognition is no shortcut; it is an integral component in the machinery of human rationality.

Unsurprisingly, the faculties that we use in exercising this feature of our rationality can be compromised by influences that undermine its reliability. One of those influences is the binding of antagonistic cultural meanings to risk and other policy-relevant facts. But it makes about as much sense to treat the disorienting impact of antagonistic meanings as evidence that cultural cognition is a bias as it does to describe the toxicity of lead paint as evidence that human intelligence is a “bias.”

Look: people aren’t stupid. They know they can’t resolve difficult empirical issues (on climate change, on HPV-vaccine risks, on nuclear power, on gun control, etc.) on their own, so they do the smart thing: they seek out the views of experts whom they trust to help them figure out what the evidence is. But the experts they are most likely to trust, not surprisingly, are the ones who share their values.

What makes me feel bleak about the prospects of reason isn’t anything we find in our studies; it is how often risk communicators fail to recruit culturally diverse messengers when they are trying to communicate sound science.

The number of scientific insights that make our lives better and that don’t culturally polarize us is orders of magnitude greater than the ones that do. There’s not a “culture war” over going to doctors when we are sick and following their advice to take antibiotics when they figure out we have infections. Individualists aren’t throttling egalitarians over whether it makes sense to pasteurize milk or whether high-voltage power lines are causing children to die of leukemia.

People (the vast majority of them) form the right beliefs on these and countless issues, moreover, not because they “understand the science” involved but because they are enmeshed in networks of trust and authority that certify whom to believe about what.

For sure, people with different cultural identities don’t rely on the same certification networks. But in the vast run of cases, those distinct cultural certifiers do converge on the best available information. Cultural communities that didn’t possess mechanisms for enabling their members to recognize the best information—ones that consistently made them distrust those who do know something about how the world works and trust those who don’t—just wouldn’t last very long: their adherents would end up dead.

Rational democratic deliberation about policy-relevant science, then, doesn’t require that people become experts on risk. It requires only that our society take the steps necessary to protect its science communication environment from a distinctive pathology that prevents ordinary citizens from using their (ordinarily) reliable ability to discern what it is that experts know.