What kind of right is the right to film police?
It is pretty clear to me that there ought to be some kind of right to photograph and film police, especially arrests. And yet, at least one US District Judge Finds no First Amendment Right to Film or Photograph Police:
We find there is no First Amendment right under our governing law to observe and record police officers absent some other expressive conduct. (Fields and Geraci v. City of Philadelphia et al)
Here’s the problem: the First Amendment protects expressive conduct. We often think of the main role of the photographer as quietly observing and recording; their expressive conduct comes later, when they publish that record. Of course, there’s some reason to think that the prerequisite act is equally well protected: I can’t publish a video of police if I’m not allowed to film one in the first place.
But we don’t really think this is a generic right. We usually assume that ordinary folks have some right to their likeness and some expectations of privacy. Police are special, and we need enhanced rights to record their activities. Yet the First Amendment might not be designed to cover that special instance. I suspect that the right to film police would best be understood as one of those old penumbral rights no longer in fashion: a living update of the implications of the First, Fourth, and Fourteenth.
I think of filming the police sort of like I think of election monitors: the right to free and fair elections occasionally requires an ancillary right (to monitor elections and note violations) to preserve that primary right to vote. This is always a strategic or practical question, though: you wouldn’t need election monitors of the ordinary sort in Oregon, where all voting is done by mail. Under those circumstances, it would be odd for an election monitor to shove his way into your living room to make sure your postal ballot was properly prepared. But we do need some form of accountability in these matters, and under the current circumstances, photography is a good check on police abuses.
You can’t guarantee due process, freedom from unreasonable search and seizure, or free expression of dissent without the ability to record interactions with police. And yet, this would fail any originalist’s test, for how can there be an implied right in an 18th century document that can only be exercised with 21st century technologies? It’s not like firearms or the printing press, where some version of the technology existed and it has merely become more effective.
Alternatively, the courts should recognize a First Amendment right to observe and record police as a variety of assembly. This, though, would subject it to much more exacting restrictions on the time, place, and manner of the recording. Legislatures might even be able to curtail filming police arrests entirely under this understanding of the right! Consider that even with a constitutional right to assemble, a city may appropriately require permits for rallies and even restrict the spaces where protests can occur. Would we accept restrictions on observing and recording police such that only credentialed journalists could do it? I think not: the power of the camera phone is that anyone can act as a citizen journalist when they see police engaged in potential misconduct.
Of course, my real problem with original meaning arguments is that they assume the framers were godlike or genius-like in their pronouncements. They certainly weren’t. We should have a lot less respect for them, a lot less of a tendency to call them Founding Fathers with capital letters. They were men, and venal ones. Most of them had slaves, and large parts of the Constitution and the Bill of Rights were designed to help them keep their slaves. When we help people draft their own constitutions–like in Iraq or East Timor–we always make sure they don’t repeat the model in the US Constitution, because it’s antiquated and usually leads to massive constitutional crises in short order. Most of US politics is basically an elaborate work-around for that; a patch on a patch on a patch of broken code.
That’s why I hope that the Supreme Court will eventually recognize filming the police as an act of expressive conduct worthy of protection under the First Amendment: not because that’s the best analysis of such cases, but because our system increasingly needs such “cheats” just to function.
on the original meaning of democracy
We call ourselves a democracy and a republic. There’s a current right-wing talking point that we are only the latter, but I’ve argued that this claim deviates from a long bipartisan consensus that the US aspires to be a democratic republic. But what do these two terms mean?
This definitional question is challenging because the words come, respectively, from Greek and Latin, and they were coined to name specific regimes that had lots of eccentric features (huge juries in Athens; a host of executive officials in Rome) that no one considers definitive. The words have subsequently been used by many writers in many languages to name a wide variety of regimes–and sometimes as terms of abuse.
For instance, a “republic” presumably must name a regime that has something in common with the original, the ancient Roman res publica. One defining feature of the Roman republic was simply that it wasn’t a monarchy. Thus people who want to remove Queen Elizabeth II as the titular monarch of Australia (or Britain) call themselves “republicans.” Their proposal would change virtually nothing about the power structure; it would be almost entirely symbolic. But they have precedent for calling a regime without a monarch a “republic.”
In a very different vein, Jefferson defined a republic “purely and simply” as “government by its citizens in mass, acting directly and personally, according to rules established by the majority; and … every other government is more or less republican, in proportion as it has in its composition more or less of this ingredient of the direct action of the citizens.” For Jefferson, a “republic” is what others would call a direct and participatory democracy. Yet the original Roman republic was composed of legislative bodies and officers who represented various classes and interests. Some were elected and others were appointed. All were limited by various laws (albeit unstably so). Thus, for some, a republic is a government that avoids direct and participatory democratic elements.
Still other writers have noticed the ancient Roman penchant for civic duty and public service and have used the word “republic” for a regime that demands a great deal from its citizens and that encourages public engagement as a positive good. It is an alternative to the kind of liberalism that favors individual rights. Meanwhile, another tradition takes seriously the etymology–“res publica” means “public thing [or good]”–and translates the phrase as “commonwealth.” A “commonwealth,” in turn, could mean all the things that are commonly owned by the people. And if the people’s wealth extends to the land, then a certain kind of agrarian socialism emerges as the definition of republicanism.
That’s all about “republic,” but I’d like to address the term “democracy,” relying on a fascinating article by Josiah Ober.* Ober notes that if the Greeks had wanted a word that meant rule of the many (or the common people), they would have used pollo- as the prefix. To name a regime in which all rule, they could have used “panocracy.” If they had wanted to emphasize the equality of all, they would have used iso-. For instance, isegoria meant an equal right to participate in deliberations in the agora. But they chose demo-, which refers to the whole people as one, without sociological distinctions.
Meanwhile, if they had wanted to specify who governed, in the sense of casting votes or holding offices, they would have used the suffix -archy. A monarchy has one ruler, an oligarchy has a few, and anarchies have none. The suffix -kratia is different. It does not imply an office or action but rather power, in the sense of capacity or an ability to make things happen.
Thus, in its original form, a democracy is a regime in which the whole population has the power to make things together. By the way, this definition comes close to uses of the word “republic” that emphasize the public’s role in making the res publica. So perhaps “democracy” means “republic” after all.
*Josiah Ober, “The Original Meaning of ‘Democracy’: Capacity to Do Things, Not Majority Rule,” Constellations, vol. 15, no. 1 (2008)
Self-Esteem and the Death of the Subject
I have written here repeatedly about the problems with person-oriented reactive attitudes and character skepticism. But recently I came across the work of the psychologist Albert Ellis, whose work is at the intersection of therapeutic psychology and philosophy. His work on self-esteem and person-oriented assessment suggests an interesting new direction for the general insight that we are in error when we attribute actions, habits, and tendencies to a self or a subject.
Ellis calls this “unconditional self-acceptance.” Where the psychology of self-esteem encourages us to continually affirm (perhaps daily) propositions about how lovable and capable we and others are, Ellis’s unconditional self-acceptance instead suggests that we forgo these exercises and the global evaluations they require for more careful assessments of acts and behaviors. The same applies to our assessments of others, and thus he offers a good case study of the attempt to operationalize a rejection of person-oriented reactive attitudes through “unconditional other-acceptance.”
Ellis’s student David Mills summarizes the argument like this:
- Most people unfortunately believe that self-esteem must, in some way, be earned through accomplishments.
- When self-esteem is based on accomplishments, it must be earned repeatedly. It is never permanent.
- The concept of self-esteem leads intermittently to self-damnation.
- The concept of self-esteem usually promotes social and behavioral inhibition.
- A compulsive drive for self-esteem leads to frequent anxiety. And self-esteem-related anxiety is an obstacle to achieving those goals essential to our self-esteem!
Now, I think there’s a lot of truth in Ellis’s diagnosis. We have good reasons to believe that our acceptance within the community is predicated on the judgments of our peers. So we are right to self-monitor the likely assessments of others, to avoid transgressing crucial communal norms, free-riding on the efforts of our collaborators, or running afoul of the unwritten standards of behavior and comportment. There’s some reason to believe that this monitoring is the basis for person-oriented status judgments: we assess others and ourselves in order to determine the standards for preserving our group membership, and the continued existence of social exclusion and individual choice proves that we’re not living under conditions of unconditional acceptance.
Yet at the same time, we also know that our assessments and attributions suffer from serious errors and biases. Psychology has begun to catalog these biases and give them catchy names like the spotlight effect and fundamental attribution bias, but the basic insight is just that we’re often very deeply wrong about these assessments.
As a result, Mills (following Ellis) recommends an elegant solution:
- To overcome self-esteem-related anxiety and inhibition, recognize that your choice is not between self-esteem and self-condemnation. Your choice, rather, is between establishing an overall self-image and establishing no self-image. That is, you can choose to view your external actions and traits as desirable or undesirable, but abstain from esteeming or damning yourself as a whole.
This is a philosophically dense proposal, one that assumes that by changing our metaphysical orientation to persons, we can overcome the pernicious (and importantly false!) habits of anxiety, self-blame, and self-destruction. In so doing, we can also develop a more sensitive and sophisticated attitude towards our neighbors and fellow citizens.
Of course, the practical efficacy of these attitudes is difficult to measure; apparently there’s been little empirical work on the topic, but to assess the model it helps to think through the best-case scenario. Let’s assume that forgoing global evaluations of self and other has the effects promised: less anxiety, fewer fundamental attribution errors, improved mental health outcomes, etc.
Yet as we think about these themes, and especially about the prescriptive metaphysics required for this to function, I wonder whether we can preserve the sense of accuracy. Is this merely an exercise, or is it meant to supply more accurate claims about the world? Is it convenient or true?
Academics of a certain stripe have been rehashing the “death of the subject” for a while now. The best reasons for rejecting person-oriented reactive attitudes seem to follow in this mold: one cannot judge a person without judging her acts, yet single acts are insufficient for a whole judgment of her person. Her acts are multifarious and varied, yet domain-specific judgments are subject to contextual factors. She is the agent of her acts, yet agency is empirically undermined by context.
Ellis himself claims the mantle of truth for this rejection of global judgments, but since his primary work is with patients who aren’t all willing to accept the full set of metaphysical presumptions here, he also suggests a “pragmatic” and “inelegant” alternative:
“If, however, you have difficulty refusing to rate your self, your being, you can arbitrarily convince yourself, ‘I am “good” or “okay” because I exist, because I am alive, because I am human.’ This is not an elegant solution to a problem of self-worth, because I (or anyone else) could reply, ‘But I think you are “bad” or “worthless” because you are human and alive.’ Which of us is correct? Neither of us: because we are both arbitrarily defining you as ‘good’ or ‘bad,’ and our definitions are not really provable nor falsifiable. They are just that: definitions.
Defining yourself as ‘good,’ however, will give you much better results than believing that you are ‘bad’ or ‘rotten.’ Therefore, this inelegant conclusion works and is a fairly good practical or pragmatic solution to the problem of human ‘worth.’ So if you want to rate your self or your being, you can definitionally, tautologically, or axiomatically use this ‘solution’ to self-rating.”
This was always the real problem with the self-esteem movement and with the two kinds of respect Stephen Darwall identified: it’s very difficult to preserve recognition respect, a sense of respect-for-persons that rates them higher than chairs, concepts, or other animals, while simultaneously pretending that there are no further forms of appraisal, such as appraisals of their skills, competences, and morally salient decisions.
We sometimes pretend that maximal attention to the norms of recognition respect eliminates the room for appraisal respect. Thus, because all humans have this recognition respect in the form of what Kant called “dignity,” there’s no room for social status differentiation. But we play favorites. I have favorite people (friends), favorite scholars (idols?), favorite religious groups (Quakers!), and even favorite politicians (Elizabeth Warren, who was once a favorite scholar!). What’s more, I have good days and bad days, days where I’m proud of my teaching and writing, and other days where I feel like I failed to live up to my own expectations.
Ellis claims that we should actively resist any effort to assemble all these appraisals into a complete picture of the person: that we can assess the actions without making all the troublesome metaphysical assumptions required to attribute those actions to a person. Indeed, perhaps I shouldn’t give Elizabeth Warren the Senator so much credit for the work of Warren the Law Professor.
But I’m still giving Warren credit. And that’s the problem. I’m starting to think we can’t duck person-oriented reactive attitudes by merely reducing them to action-oriented reactive attitudes. Going back to the original Strawson paper, we don’t get angry at the painful blow, or fall in love with the witty reply. We get angry at the person who lands the painful blow; we fall in love with the person who offers the witty reply.
So how can we avoid the Nietzschean invention of a doer for every deed? Can we stop ourselves from filling in the back story of the driver who cuts us off in traffic to show that he is a terrible human being? And if we can, should we? Or should we continue to pretend?
We still might want to say that global judgments are a mistake. The person who offers you witty replies on a first date may also be kind of boring sometimes. The person who assaults you may also be a loving father or an honor roll student. It may well be that we learn remarkably little about most people from what we see of them, and that we fill in this ignorance with heuristics and biases that are more rough than ready.
It’s hard not to equate Nietzsche and Ellis here with Buddhist reflections on the illusory nature of selfhood. And it’s hard, too, not to think that this demand that we amend our syntax and our ethics begs the question.
Are we merely doing this to get off the treadmill of anxiety, to overcome maladaptive perfectionism? Is all this elaborate metaethical reflection really just therapeutic? Is it the philosopher’s obsessive #actually that demands we reassess the common sense for no other reason than to avoid imprecision? Is there a pragmatic upshot? What’s the cost of self-esteem? And what are its benefits?
Engaging Ideas – 2/26
The (Re)Emergence of American Hate
A certain presidential candidate, known for his racist, sexist, and otherwise outlandish rhetoric, has recently won his third primary.
And if it wasn’t disturbing enough that people in KKK robes showed up to support him at the Nevada primary – an action which may or may not have been a poorly executed protest – one of the country’s most notorious white supremacist leaders unofficially endorsed this candidate today, saying that anything other than voting for him was ‘treason to your heritage’.
Now, I have a general policy of not giving space to hate groups – which thrive on the attention generated by their shocking acts, but this is getting too serious to ignore.
But, here’s the thing – it’s not the idea that a particularly distasteful candidate might actually become president that I find so alarming. It’s the fact that he genuinely has so much popular support.
Donald Trump is making it acceptable to be a racist again.
Of course, racism has long been alive and well in this country. It never really died the quiet death we hoped it would. Through the activism of the 60s and the “colorblindness” of the 90s, we just shoved it into the closet, hoping it would never spill out again.
In 1925, the KKK had “as many as 4 million members,” a number which shrank dramatically following the civil rights movement. The Southern Poverty Law Center estimates the group at 4,000-5,000 members today.
Of course, I still think the number of members is about 4-5 thousand more than I’d hope to see in my country – but that membership becomes even more disturbing when you consider that there are normative social pressures likely to prevent people from expressing their beliefs.
That is, our country is full of closeted racists.
Racists who aren’t closeted any more.
Earlier this week, the New York Times reported that 74% of South Carolina Republican primary voters favor “temporarily barring Muslims who are not citizens from entering the United States.”
Furthermore, a recent poll by Public Policy Polling found that in addition to barring Muslims, “31% [of Trump supporters] would support a ban on homosexuals entering the United States as well, something no more than 17% of anyone else’s voters think is a good idea.”
Again, 0% would be a better figure here.
The New York Times also reports that, “Nearly 20 percent of Mr. Trump’s voters disagreed with Abraham Lincoln’s Emancipation Proclamation, which freed slaves in the Southern states during the Civil War.”
This is profoundly disturbing.
I’d almost prefer to blame this all on Donald Trump. If we can only stop him from winning the Presidency, then all our racial problems will be solved.
But here’s the thing: Trump is the symptom, not the disease.
A significant number – a significant number – of white Americans seem ready to re-don their white robes. Americans who otherwise are not entirely unlike myself.
I find that terrifying, and I’m hardly the most at risk.
It is not enough to wave our hands, to hope that the Republican establishment comes through with blocking a Trump nomination. We have to recognize that there is a growing racist sentiment – or, perhaps, a growing willingness to express that sentiment.
My greatest concern is not that Trump will be elected – it’s that even after he is eventually defeated, this profoundly, openly racist faction of Americans will continue to grow.
Andreas Weber’s “Biology of Wonder”: Aliveness as a Force of Evolution and the Commons
When I met biologist and ecophilosopher Andreas Weber several years ago, I was amazed at his audacity in challenging the orthodoxies of Darwinism. He proposes that science study a very radical yet unexplained phenomenon -- aliveness! He rejects the neo-Darwinian account of life as a collection of sophisticated, evolving machines, each relentlessly competing with maximum efficiency for supremacy in the laissez-faire market of nature. (See Weber's fantastic essay on “Enlivenment” for more on this theme.)
Drawing upon a rich body of scientific research, Weber outlines a different story of evolution, one in which living organisms are inherently expressive and creative in a struggle to both compete and cooperate. The heart of the evolutionary drama, Weber insists, is the quest of all living systems to express what they feel and experience, and adapt to the world -- and change it! -- as they develop their identities.
Except for a few essays and public talks, most of Weber’s writings are available only in his native German. So it is a thrill that some of his core ideas have now been published in English. Check out his lyrical yet scientifically rigorous book, Biology of Wonder: Aliveness, Consciousness and the Metamorphosis of Science, just published by New Society Publishers. (Full disclosure requires me to mention my modest role in helping Andreas improve the “natural English” of his translation of his original German writings.)
Future historians will look back on this book as a landmark that consolidates and explains paradigm-shifting theories and research in the biological sciences. Biology of Wonder explains how political thinkers like Locke, Hobbes and Adam Smith have provided a cultural framework that has affected biological inquiry, and how the standard Darwinian biological narrative, for its part, has projected its ideas about natural selection and organisms-as-machines on to our understanding of human societies. Darwinism and "free markets" have grown up together.
This is now changing, as Weber explains:
Biology, which has made so many efforts to chase emotions from nature since the 19th century, is rediscovering feeling as the foundation of life. Until now researchers, eager to discover the structure and behavior of organisms, had glossed over the problem of an organism’s interior reality. Today, however, biologists are learning innumerable new details about how an organism brings forth itself and its experiences, and are trying not only to dissect but to reimagine developmental pathways. They realize that the more technology allows us to study life on a micro-level, the stronger the evidence of life’s complexity and intelligence becomes. Organisms are not clocks assembled from discrete, mechanical pieces; rather, they are unities held together by a mighty force: feeling what is good or bad for them.
In the grand narrative of evolution, the idea that feeling, emotions, morality and even spirituality might be consequential has long been dismissed. Such experiences are generally regarded as trivial sideshows to the main act of the cosmos: nasty, brutish competition as the inexorable vehicle of evolutionary progress. Indeed, modern times have virtually combined the idea of "survival of the fittest" with our cultural ideas about the "free market economy."
humanities work related to incarceration
All are welcome to 2016’s second Tisch Talk in the Humanities, “Stages of Detention,” on March 4 at 2:00 pm in the Rabb Room, Lincoln Filene Hall, Tufts University’s Medford campus.
Increasingly, scholars in the arts and humanities are working in and around prisons. On March 4, we will hear from two distinguished practitioners and will have the opportunity to discuss their work.
Noe Montez is Assistant Professor of Drama and Dance at Tufts. Professor Montez’s project examines guided tours of Southern Cone detention sites that have recently been converted into spaces of memory, in order to explore how trauma and commemoration are performed as part of an ongoing process of transitional justice. His work includes research on sites in Argentina, Chile, and Uruguay. He has also completed a monograph that explores a Buenos Aires theatre’s collaboration with human rights activists in Argentina’s post-dictatorship.
Amy Remensnyder is Professor of History and a Public Humanities Fellow at Brown. Since 2010, Professor Remensnyder has been teaching history to men incarcerated in Rhode Island’s medium security prison. She is the founder and director of the Brown History Education Prison Project. Her increasing interest in issues of incarceration spurred her to design a course on the global history of prison and captivity, which she has taught both at Brown and at the prison. She is beginning work on a book about the global history of captivity.
The moderator and organizer is the Tisch Senior Fellow for the Humanities, Diane O’Donoghue.
The Two Endings of Brison’s Aftermath
Susan Brison’s Aftermath ends twice: the final chapter discusses her various efforts to retell the story of her brutal rape and attempted murder (she calls it “attempted sexual murder”), and ends with her final, planned retelling to her son when he is older:
“Tragedy,” Wittgenstein wrote, “is when the tree, instead of bending, breaks.” What I wish most for my son is not the superhuman ability to avoid life-threatening disasters, but, rather, resilience, the capacity to carry on, alive in the present, unbound by dread or regret. Not the hard, flinty brittleness of rock, but the supple tenacity of the wind-rocked bough that bends, the bursting desire of a new-mown field that can’t wait to grow back, the will to say, whatever comes, Let’s see what happens next.
The second ending comes in an afterword where she discusses four murders. The first pair is the murder of her friends Susanne and Half Zantop, which occurred soon after she submitted the manuscript. The second pair is the murder of Trhas Berhe and Selamawit Tsehaye, two of five black women PhD candidates in physics at Dartmouth a decade before. Because they were black international students from Ethiopia–killed by a third black Ethiopian–the campus treated these murders as non-events, and failed to mourn or respond with what we sometimes think of as the characteristic security theater.
In both cases she struggles with survivor’s guilt, the sense that their deaths and her survival were random and undeserved. So she finishes the story again:
None of us is supposed to be alive. We’re all here by chance and only for a little while. The wonder is that we’ve managed, once again, to winter through and that our hearts, in spite of everything, survive.
Video: “Poverty, Culture, and Justice,” @ Purdue U
I’ve posted a number of recordings of interviews and talks I’ve given on Uniting Mississippi. This talk is on my next project, a book in progress titled A Culture of Justice. One of its chapters is the subject of the talk I gave at Purdue University. Here’s the video, about 1 hr 28 mins:
If you’re looking for a speaker, visit my Speaking and Contact pages.
