Exit, Voice, and Presidential Elections

In spring 2003, I was living in Japan.

That’s where I was when the United States invaded Iraq in “Operation Iraqi Freedom,” as my government colorfully named it.

Throughout the months I lived abroad, I tried to keep up on the news from home, daily scouring reports from the U.S., the U.K., and Japan. The flavor of news coming out of each country was markedly different – the U.S. blindly patriotic, the U.K. supportively reserved, Japan politely disapproving.

The details and word choices varied between articles, telling remarkably different stories, and I hoped, I suppose, that by reading multiple accounts I could somehow triangulate the truth.

The news coming out of the U.S. was particularly disturbing.

It was as though the whole nation had gone mad.

Other countries reported stories of schools being bombed by U.S. troops; my country was on some tear about Freedom Fries.

This was in the infancy of the blogosphere, so apart from the few people I kept in touch with over AOL Instant Messenger, my only sense of public opinion back home came from the sycophantic mainstream media – a media which has, in fact, somewhat reformed in recent years in response to its catastrophic failure of that time.

And perhaps this is why I’m inclined to sigh whenever someone declares that they will move to Canada, or, perhaps, the moon, should someone they strongly dislike be elected President.

I heard that a lot when President George W. Bush won reelection, and I’m hearing it a lot now.

It’s hardly a solution.

I hardly mean to imply that the Iraq War would have played out differently had I not been abroad, but it seems fairly certain that such warmongering tendencies would only be worse should all progressives decide to leave.

At the very least – I have to say – let’s not leave the nuclear launch codes behind.

In Exit, Voice, and Loyalty, Albert O. Hirschman outlines the three ways in which a person might interact with an organization, community, or state. As you may have guessed, the options are: exit, voice, and loyalty.

A person might stay loyal to an organization and support its views and actions; a person might exit an organization, leaving its undesirable policies in search of greener pastures; or a person might exercise voice: speaking up and fighting to make the organization the way they’d like it to be.

There are, of course, many instances throughout human history where people have been forced to exit for fear of their lives and wellbeing. One report estimates that there are nearly 60 million refugees in the world today. Theirs was not an exit taken lightly.

But the situation in the United States – while disheartening – is hardly so harsh.

I know most people are joking when they speak of plans to move away, and yet – it is a troubling sign of resignation.

We may not be the unparalleled superpower we might fancy ourselves to be, but we are still a nation which wields the potential for great harm or good.

If elections don’t go the way we like, it shouldn’t be cause to flee, but rather a call to action: our voices would be needed more than ever.


NCDD Members Get 30% Off New Transpartisan Book

We are pleased to share the announcement below from the Bridge Alliance – one of our NCDD organizational members – about the release of its co-founder and NCDD member Mark Gerzon’s new book, The Reunited States of America, on March 1st. Mark’s book is highly relevant for D&D practitioners, so he and Bridge Alliance are offering a 30% discount for NCDD members with the code “NCDD”. Read more about the book in the announcement below, or find Mark’s book here.


Reunited States of America

We have an exciting offer for you that you won’t want to miss!

Mark Gerzon is releasing his new book March 1st, and we want you to be some of the first people with this timely and important tool in hand! That is why we are offering 30% off of the book when you pre-order as a member of NCDD.

Here are 5 reasons why you want to take advantage of this offer and order your copy of The Reunited States of America TODAY:

  1. This book puts the spotlight on dozens of individuals and organizations that comprise a new narrative for Democracy 2.0. It is a manifesto for a movement that includes the NCDD field – and highlights the importance of what those of us in this field are all trying to do.
  2. It describes problem solving on some of the most difficult and divisive issues, such as: abortion, gun control, sex education, defense spending, criminal justice reform and more!
  3. Gerzon invites us into a movement to reunite our nation and put country before party! This book doesn’t ask party members to forfeit their values, but celebrates the beauty of all diverse voices that grow out of love for your country.
  4. The Reunited States of America explains what you can do starting now to strengthen our nation’s sense of unity while honoring the vital role of conflicting points of view.
  5. We all know that no one book, one person, nor one organization has the power to change the course of American politics. But this new narrative, with your help, can rise above the barrage of attack ads and spread a message about dialogue, deliberation, and real democracy!

Although we come from opposite ends of the political spectrum, we believe our country needs to come together. “The Reunited States of America” will help us do that. It reconnects us to our country’s motto – “out of many, one” – and helps us meet the challenge of reuniting the country that we all love.

– Grover Norquist, president, Americans for Tax Reform & Joan Blades, co-founder MoveOn.org and LivingRoomConversations.org

For 30% off of your copy of this book click here and use the discount code: NCDD

You bought the book, now what? Here are some easy ways to get the word out and unite America starting today!

Tamsin Shaw’s critique of moral psychology

I think that Tamsin Shaw’s article “The Psychologists Take Power” (New York Review of Books, February 25, 2016) is very important. I enjoyed an informal seminar discussion of it on Friday, but that conversation made me realize that the article is rather compressed and allusive, and its argument may not come across to readers who are unfamiliar with the research under review or with important currents in moral philosophy.

This is how I would reconstruct Shaw’s argument:

First, the psychological study of morality presents itself as a science; it claims to be value-neutral and strictly empirical. The phenomena under study are called “moral,” but the researchers purport or at least strive to be value-free.

Given that self-understanding, psychologists are attracted to three research programs: evolutionary biology, neuroscience, and game theory. Each presents itself as value-neutral. The three programs can be made highly consistent if one focuses on rapid human reactions to very basic stimuli, such as sexual desire or perceived threat. These reactions presumably arose well before cultural differentiation, they have Darwinian explanations, they would serve individuals or groups in competitive situations (e.g., while struggling for food or mates), and they light up specific parts of the brain. Findings that seem consistent with all three streams of research have special prestige because they seem particularly hard-headed and empirical. (A perfect example is the Times’ article yesterday: “What’s the Point of Moral Outrage? It may seem noble and selfless, but it’s also about improving your reputation.”)

People who think this way about morality are basically amoral. They have no independent moral compass. Yet they learn techniques that are useful for manipulating subjects, particularly in extreme situations where instinctive human impulses are most pertinent. Therefore, it is no surprise (Shaw writes) that some of them became professional advisers on torture during the first years of the Iraq occupation. Any argument against torture will seem to them arbitrary and subjective.

The last point may be a bit of an ad hominem, although it is certainly worth taking seriously as a warning. But even if all psychologists use good professional ethics, the agenda of making moral psychology strictly empirical needs to be challenged.

For one thing, you can’t study phenomena categorized as “moral” without independently deciding what constitutes morality. We have many deep, instinctive impulses. For instance, we are capable of altruism and even self-sacrificing love, but also of violence and greed. It’s plausible that many of these impulses have evolutionary roots and can be explained in game-theoretic terms. But only some of them are moral. Imagine, for instance, that I said, “Greed is a moral virtue that we developed early in our evolution as a species to motivate individuals to maximize resources.” This would not be a scientifically false statement. It would be morally false. The mistake is to call greed a “virtue.”

Jonathan Haidt likes to provoke liberals by describing “authority” and “sanctity” as moral values. They may be, but that requires a moral argument against the position that only care, fairness, liberty, and loyalty count as moral. The fact that some people see authority and sanctity as virtues does not make that opinion right. Hitler thought that racial purity was moral, and he was wrong. So moral reasoning is indispensable.

Further, when we reason morally, we are usually thinking about very complex, socially constructed phenomena that we don’t directly perceive. We certainly don’t experience them as immediate sense-data. I wrestle with my feelings about democracy, the United States, academia, capitalism, modernity, etc. These things don’t appear in my visual field like violent threats or piles of yummy food. I experience such institutions through speech and text, through vicarious reports, and by accumulating experience and arguments over decades. Possibly the impulses that homo sapiens developed early in our evolution influence my judgments. For instance, I may have a deep, unconscious tendency to separate people into in-groups and out-groups, and that may affect my tendency to see the USA as my group. But I could treat another unit as my main group, I could be uninterested in (or even unaware of) the USA as an entity, or the country might not even exist. A nation is a social construction, built by people for complex reasons, that we understand in a mediated way. It would be a contentious assumption, not a hard-nosed scientific premise, that our most primitive impulses have much to say about institutions or our attitudes toward them.

See also: Jonathan Haidt’s six foundations of morality; neuroscience and morality; morality in psychotherapy; on philosophy as a way of life; is all truth scientific truth?; and right and true are deeply connected.

Citizen Budget

Problems and Purpose: "Citizen Budget" is an innovative online tool to improve your public budget consultations. Our customizable budget simulator is at the cutting edge of online engagement and educates your residents about government services, budget tradeoffs, and limits to government spending in a fun, user-friendly, and dynamic way...

What kind of right is the right to film police?

It is pretty clear to me that there ought to be some kind of right to photograph and film police, especially arrests. And yet, at least one US District Judge Finds no First Amendment Right to Film or Photograph Police:

We find there is no First Amendment right under our governing law to observe and record police officers absent some other expressive conduct. (Fields and Geraci v. City of Philadelphia et al)

Here’s the problem: the First Amendment protects expressive conduct. We often think of the main role of the photographer as quietly observing and recording; their expressive conduct comes later, when they publish that record. Of course, there’s some reason to think that the prior act of recording is thus equally well protected: I can’t publish a video of police if I’m not allowed to film a video of police.

But we don’t really think this is a generic right. We usually assume that ordinary folks have some right to their likeness and some expectation of privacy. Police are special, and we need enhanced rights to record their activities. Yet the First Amendment might not be designed to cover that special instance. I suspect that the right to film police would best be understood as one of those old penumbral rights no longer in fashion: a living update of the implications of the First, Fourth, and Fourteenth Amendments.

I think of filming the police sort of like I think of election monitors: the right to free and fair elections occasionally requires an ancillary right (to monitor elections and note violations) to preserve that primary right to vote. This is always a strategic or practical question, though: you wouldn’t need election monitors of the ordinary sort in Oregon, where all voting is done by mail. Under those circumstances, it would be odd for an election monitor to shove his way into your living room to make sure your postal ballot was properly prepared. But we do need some form of accountability in these matters, and under the current circumstances, photography is a good check on police abuses.

You can’t guarantee due process, reasonable search and seizure, or free expression of dissent without the ability to record interactions with police. And yet, this would fail any originalist’s test, for how can there be an implied right in an 18th century document that can only be exercised with 21st century technologies? It’s not like firearms or the printing press, where some version of the technology existed and it has merely become more effective.

Alternatively, the courts should recognize a First Amendment right to observe and record police as a variety of assembly. This, though, would subject it to much more exacting restrictions on the time, place, and manner of the recording. Legislatures might even be able to curtail filming police arrests entirely under this understanding of the right! Consider that even with a constitutional right to assemble, a city may appropriately require permits for rallies and even restrict the spaces where protests can occur. Would we accept restrictions on observing and recording police such that only credentialed journalists could do it? I think not: the power of the camera phone is that anyone can act as a citizen journalist when they see police engaged in potential misconduct.

Of course, my real problem with original meaning arguments is that they assume the framers were godlike or genius-like in their pronouncements. They certainly weren’t. We should have a lot less respect for them, a lot less of a tendency to call them Founding Fathers with capital letters. They were men, and venal ones. Most of them had slaves, and large parts of the Constitution and the Bill of Rights were designed to help them keep their slaves. When we help people draft their own constitutions – as in Iraq or East Timor – we always make sure they don’t repeat the model in the US Constitution, because it’s antiquated and usually leads to massive constitutional crises in short order. Most of US politics is basically an elaborate work-around for that: a patch on a patch on a patch of broken code.

That’s why I hope that the Supreme Court will eventually recognize filming the police as an act of expressive conduct worthy of protection under the First Amendment: not because that’s the best analysis of such cases, but because our system increasingly needs such “cheats” just to function.

on the original meaning of democracy

We call ourselves a democracy and a republic. There’s a current right-wing talking point that we are only the latter, but I’ve argued that this claim deviates from a long bipartisan consensus that the US aspires to be a democratic republic. But what do these two terms mean?

This definitional question is challenging because the words come, respectively, from Greek and Latin, and they were coined to name specific regimes that had lots of eccentric features (huge juries in Athens; a host of executive officials in Rome) that no one considers definitive. The words have subsequently been used by many writers in many languages to name a wide variety of regimes–and sometimes as terms of abuse.

For instance, a “republic” presumably must name a regime that has something in common with the original, the ancient Roman res publica. One defining feature of the Roman republic was simply that it wasn’t a monarchy. Thus people who want to remove Queen Elizabeth II as the titular monarch of Australia (or Britain) call themselves “republicans.” Their proposal would change virtually nothing about the power structure; it would be almost entirely symbolic. But they have precedent for calling a regime without a monarch a “republic.”

In a very different vein, Jefferson defined a republic “purely and simply” as “government by its citizens in mass, acting directly and personally, according to rules established by the majority; and … every other government is more or less republican, in proportion as it has in its composition more or less of this ingredient of the direct action of the citizens.” For Jefferson, a “republic” is what others would call a direct and participatory democracy. Yet the original Roman republic was composed of legislative bodies and officers who represented various classes and interests. Some were elected and others were appointed. All were limited by various laws (albeit unstably so). Thus, for some, a republic is a government that avoids direct and participatory democratic elements.

Still other writers have noticed the ancient Roman penchant for civic duty and public service and have used the word “republic” for a regime that demands a great deal from its citizens and that encourages public engagement as a positive good. It is an alternative to the kind of liberalism that favors individual rights. Meanwhile, another tradition takes seriously the etymology–“res publica” means “public thing [or good]”–and translates the phrase as “commonwealth.” A “commonwealth,” in turn, could mean all the things that are commonly owned by the people. And if the people’s wealth extends to the land, then a certain kind of agrarian socialism emerges as the definition of republicanism.

That’s all about “republic,” but I’d like to address the term “democracy,” relying on a fascinating article by Josiah Ober.* Ober notes that if the Greeks had wanted a word that meant rule of the many (or the common people), they would have used pollo- as the prefix. To name a regime in which all rule, they could have used “panocracy.” If they had wanted to emphasize the equality of all, they would have used iso-. For instance, isegoria meant an equal right to participate in deliberations in the agora. But they chose demo-, which refers to the whole people as one, without sociological distinctions.

Meanwhile, if they had wanted to specify who governed, in the sense of casting votes or holding offices, they would have used the suffix -archy. A monarchy has one ruler, an oligarchy has a few, and anarchies have none. The suffix -kratia is different. It does not imply an office or action but rather power, in the sense of capacity or an ability to make things happen.

Thus, in its original form, a democracy is a regime in which the whole population has the power to make things together. By the way, this definition comes close to uses of the word “republic” that emphasize the public’s role in making the res publica. So perhaps “democracy” means “republic” after all.

*Josiah Ober, “The Original Meaning of ‘Democracy’: Capacity to Do Things, Not Majority Rule,” Constellations, vol. 15, no. 1 (2008)

Self-Esteem and the Death of the Subject

I have written here repeatedly about the problems with person-oriented reactive attitudes and character skepticism. But recently I came across the work of the psychologist Albert Ellis, which sits at the intersection of therapeutic psychology and philosophy. His writing on self-esteem and person-oriented assessment suggests an interesting new direction for the general insight that we err when we attribute actions, habits, and tendencies to a self or a subject.

Ellis calls this “unconditional self-acceptance.” Where the psychology of self-esteem encourages us to continually affirm (perhaps daily) propositions about how lovable and capable we and others are, Ellis’s unconditional self-acceptance instead suggests that we forgo these exercises and the global evaluations they require for more careful assessments of acts and behaviors. The same applies to our assessments of others, and thus he offers a good case study of the attempt to operationalize a rejection of person-oriented reactive attitudes through “unconditional other-acceptance.”

Ellis’s student David Mills summarizes the argument like this:

  1. Most people unfortunately believe that self-esteem must, in some way, be earned through accomplishments.
  2. When self-esteem is based on accomplishments, it must be earned repeatedly. It is never permanent.
  3. The concept of self-esteem leads intermittently to self-damnation.
  4. The concept of self-esteem usually promotes social and behavioral inhibition.
  5. A compulsive drive for self-esteem leads to frequent anxiety. And self-esteem-related anxiety is an obstacle to achieving those goals essential to our self-esteem!

Now, I think there’s a lot of truth in Ellis’s diagnosis. We have good reasons to believe that our acceptance within the community is predicated on the judgments of our peers. So we are right to self-monitor the likely assessments of others, to avoid transgressing crucial communal norms, free-riding on the efforts of our collaborators, or running afoul of the unwritten standards of behavior and comportment. There’s some reason to believe that this monitoring is the basis for person-oriented status judgments: we assess others and ourselves in order to determine the standards for preserving our group membership, and the continued existence of social exclusion and individual choice proves that we’re not living under conditions of unconditional acceptance.

Yet at the same time, we also know that our assessments and attributions suffer from serious errors and biases. Psychology has begun to catalog these biases and give them catchy names like the spotlight effect and fundamental attribution bias, but the basic insight is just that we’re often very deeply wrong about these assessments.

As a result, Mills (following Ellis) recommends an elegant solution:

  • To overcome self-esteem-related anxiety and inhibition, recognize that your choice is not between self-esteem and self-condemnation. Your choice, rather, is between establishing an overall self-image and establishing no self-image. That is, you can choose to view your external actions and traits as desirable or undesirable, but abstain from esteeming or damning yourself as a whole.

This is a philosophically dense proposal, one that assumes that by changing our metaphysical orientation to persons, we can overcome the pernicious (and importantly false!) habits of anxiety, self-blame, and self-destruction. In so doing, we can also develop a more sensitive and sophisticated attitude towards our neighbors and fellow citizens.

Of course, the practical efficacy of these attitudes is difficult to measure; apparently there’s been little empirical work on the topic, but to assess the model it helps to think through the best-case scenario. Let’s assume that forgoing global evaluations of self and other has the effects promised: less anxiety, fewer fundamental attribution errors, improved mental health outcomes, etc.

Yet as we think about these themes, and especially about the prescriptive metaphysics required for this to function, I wonder if we can preserve the sense of accuracy. Is this merely an exercise, or is it meant to actually supply more accurate claims about the world? Is it convenient or true?

Academics of a certain stripe have been rehashing the “death of the subject” for a while now. The best reasons for rejecting person-oriented reactive attitudes seem to follow in this mold: one cannot judge a person without judging her acts, yet single acts are insufficient for a whole judgment of her person. Her acts are multifarious and varied, yet domain-specific judgments are subject to contextual factors. She is the agent of her acts, yet agency is empirically undermined by context.

Ellis himself claims the mantle of truth for this rejection of global judgments, but since his primary work is with patients who aren’t all willing to accept the full set of metaphysical presumptions here, he also suggests a “pragmatic” and “inelegant” alternative:

“If, however, you have difficulty refusing to rate your self, your being, you can arbitrarily convince yourself, ‘I am “good” or “okay” because I exist, because I am alive, because I am human.’ This is not an elegant solution to a problem of self-worth, because I (or anyone else) could reply, ‘But I think you are “bad” or “worthless” because you are human and alive.’ Which of us is correct? Neither of us: because we are both arbitrarily defining you as ‘good’ or ‘bad,’ and our definitions are not really provable nor falsifiable. They are just that: definitions.

“Defining yourself as ‘good,’ however, will give you much better results than believing that you are ‘bad’ or ‘rotten.’ Therefore, this inelegant conclusion works and is a fairly good practical or pragmatic solution to the problem of human ‘worth.’ So if you want to rate your self or your being, you can definitionally, tautologically, or axiomatically use this ‘solution’ to self-rating.”

This was always the real problem with the self-esteem movement and with the two kinds of respect Stephen Darwall identified: it’s very difficult to preserve recognition respect – a sense of respect-for-persons that rates them higher than chairs, concepts, or other animals – while simultaneously pretending that there are no further forms of appraisal, like their skills, competences, and morally salient decisions.

We sometimes pretend that maximal attention to the norms of recognition respect eliminates the room for appraisal respect. Thus, because humans all have this recognition respect in the form of what Kant called “dignity,” there’s no room for social status differentiation. But we play favorites. I have favorite people (friends), favorite scholars (idols?), favorite religious groups (Quakers!), and even favorite politicians (Elizabeth Warren, who was once a favorite scholar!). What’s more, I have good days and bad days, days when I’m proud of my teaching and writing, and other days when I feel like I failed to live up to my own expectations.

Ellis claims that we should actively resist any effort to assemble all these appraisals into a complete picture of the person. That we can assess the actions without making all the troublesome metaphysical assumptions required to attribute those actions to a person. Indeed, perhaps I shouldn’t give Elizabeth Warren the Senator so much credit for the work of Warren the Law Professor.

But I’m still giving Warren credit. And that’s the problem. I’m starting to think we can’t duck person-oriented reactive attitudes by merely reducing them to action-oriented reactive attitudes. Going back to the original Strawson paper, we don’t get angry at the painful blow, or fall in love with the witty reply. We get angry at the person who lands the painful blow; we fall in love with the person who offers the witty reply.

So how can we avoid the Nietzschean invention of a doer for every deed? Can we stop ourselves from filling in the back story of the driver who cuts us off in traffic to show that he is a terrible human being? And if we can, should we? Or should we continue to pretend?

We still might want to say that global judgments are a mistake. The person who offers you witty replies on a first date may also be kind of boring sometimes. The person who assaults you may also be a loving father or an honor roll student. It may well be that we learn remarkably little about most people from what we see of them, and that we fill in this ignorance with heuristics and biases that are more rough than ready.

It’s hard not to equate Nietzsche and Ellis here with Buddhist reflections on the illusory nature of selfhood. And it’s hard, too, not to think that this demand that we amend our syntax and our ethics begs the question.

Are we merely doing this to get off the treadmill of anxiety, to overcome maladaptive perfectionism? Is all this elaborate metaethical reflection really just therapeutic? Is it the philosopher’s obsessive #actually that demands we reassess the common sense for no other reason than to avoid imprecision? Is there a pragmatic upshot? What’s the cost of self-esteem? And what are its benefits?

The (Re)Emergence of American Hate

A certain presidential candidate, known for his racist, sexist, and otherwise outlandish rhetoric, has recently won his third primary.

And if it wasn’t disturbing enough that people in KKK robes showed up to support him at the Nevada primary – an action which may or may not have been a poorly executed protest – one of the country’s most notorious white supremacist leaders unofficially endorsed this candidate today, saying that anything other than voting for him was ‘treason to your heritage’.

Now, I have a general policy of not giving space to hate groups – which thrive on the attention generated by their shocking acts – but this is getting too serious to ignore.

But here’s the thing – it’s not the idea that a particularly distasteful candidate might actually become president that I find so alarming. It’s the fact that he genuinely has so much popular support.

Donald Trump is making it acceptable to be a racist again.

Of course, racism has long been alive and well in this country. It never really died the quiet death we hoped it would. Through the activism of the 60s and the “colorblindness” of the 90s, we just shoved it into the closet, hoping it would never spill out again.

In 1925, the KKK had “as many as 4 million members,” a number which shrank dramatically following the civil rights movement. The Southern Poverty Law Center estimates the group at 4,000-5,000 members today.

Of course, I still think the number of members is about 4-5 thousand more than I’d hope to see in my country – but that membership becomes even more disturbing when you consider that there are normative social pressures likely to prevent people from expressing their beliefs.

That is, our country is full of closeted racists.

Racists who aren’t closeted any more.

Earlier this week, the New York Times reported that 74% of South Carolina Republican primary voters favor “temporarily barring Muslims who are not citizens from entering the United States.”

Furthermore, a recent poll by Public Policy Polling found that in addition to barring Muslims, “31% [of Trump supporters] would support a ban on homosexuals entering the United States as well, something no more than 17% of anyone else’s voters think is a good idea.”

Again, 0% would be a better figure here.

The New York Times also reports that, “Nearly 20 percent of Mr. Trump’s voters disagreed with Abraham Lincoln’s Emancipation Proclamation, which freed slaves in the Southern states during the Civil War.”

This is profoundly disturbing.

I’d almost prefer to blame this all on Donald Trump. If we can only stop him from winning the Presidency, then all our racial problems will be solved.

But here’s the thing: Trump is the symptom, not the disease.

A significant number – a significant number – of white Americans seem ready to re-don their white robes. Americans who otherwise are not entirely unlike myself.

I find that terrifying, and I’m hardly the most at risk.

It is not enough to wave our hands, to hope that the Republican establishment comes through with blocking a Trump nomination. We have to recognize that there is a growing racist sentiment – or, perhaps, a growing willingness to express that sentiment.

My greatest concern is not that Trump will be elected – it’s that even after he is eventually defeated, this profoundly, openly racist faction of Americans will continue to grow.
