Normalizing the Non-Standard

I recently read Eisenstein’s excellent paper, “What to do about bad language on the internet,” which explores the challenge of applying Natural Language Processing to “bad” – i.e., non-standard – text.

I take Eisenstein’s use of the normative word “bad” here somewhat ironically. He argues that researchers dislike non-standard text because it complicates NLP analysis, but it is only “bad” in this narrow sense. Furthermore, while the effort required to analyze such text may be frustrating, efforts to normalize these texts are potentially worse.

It has been well documented that NLP approaches trained on formal texts, such as the Wall Street Journal, perform poorly when applied to less formal texts, such as Twitter data. Intuitively this makes sense: most people don’t write like the Wall Street Journal on Twitter.

Importantly, Eisenstein quickly does away with common explanations for the prevalence of non-standard language on Twitter. Citing Drouin and Davis (2009), he notes that there are no significant differences in the literacy rates of users who do or do not use non-standard language. Further studies also dispel the notions that users are too lazy to type correctly, that Twitter’s character limit forces unnatural contractions, and that phone auto-correct runs out of control.

In short, most users employ non-standard language because they want to. Their grammar and word choice intentionally convey meaning.
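To make concrete what a typical normalization pass actually does, here is a minimal, purely illustrative sketch (a hypothetical lookup table, not any specific system from Eisenstein’s paper) of dictionary-based token normalization, the kind of pre-processing step commonly applied before training a classifier:

```python
# Toy sketch: dictionary-based normalization of non-standard tokens.
# The table entries are hypothetical examples, chosen for illustration.
NORMALIZATION_TABLE = {
    "yasss": "yes",
    "u": "you",
    "gonna": "going to",
}

def normalize(tokens):
    """Map each token to a 'standard' form if one is listed, else keep it."""
    return [NORMALIZATION_TABLE.get(t.lower(), t) for t in tokens]

# An enthusiastic "yasss" and a flat "yes" carry different social meaning,
# but after normalization the two are indistinguishable:
print(normalize("yasss u gonna love this".split()))
# → ['yes', 'you', 'going to', 'love', 'this']
```

Even in this toy version, the loss is visible: the lengthened spelling that signaled enthusiasm, and the register cues of “u” and “gonna,” are gone from the data before analysis even begins.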

In normalizing this text, then – in moving it towards the unified standards on which NLP classifiers are trained – researchers explicitly discard important linguistic information. Importantly, this approach has implications not only for research, but for language itself. As Eisenstein argues:

By developing software that works best for standard linguistic forms, we throw the weight of language technology behind those forms, and against variants that are preferred by disempowered groups. …It strips individuals of any agency in using language as a resource to create and shape their identity.

This concern is reminiscent of James C. Scott’s Seeing Like a State, which raises deep concerns about the power of a centralized, administrative state. In order to function effectively and efficiently, an administrative state needs to be able to standardize certain things – weights and measures, property norms, names, and language all have implications for taxation and distribution of resources. As Scott argues, this tendency towards standardization isn’t inherently bad, but it is deeply dangerous – especially when combined with things like a weak civil society and a powerful authoritarian state.

Scott argues that state imposition of a single, official language is “one of the most powerful state simplifications,” which lays the groundwork for additional normalization. The state process of normalizing language, Scott writes, “should probably be viewed, as Eugen Weber suggests in the case of France, as one of domestic colonization in which various foreign provinces (such as Brittany and Occitanie) are linguistically subdued and culturally incorporated. …The implicit logic of the move was to define a hierarchy of cultures, relegating local languages and their regional cultures to, at best, a quaint provincialism.”

This is a bold claim, yet not entirely unfounded.

While there is further work to be done in this area, there is good reason to think that the “normalization” of language disproportionately affects people who are outside the norm along other social dimensions. These marginalized communities – marginalized, incidentally, because they fall outside whatever is defined as the norm – develop their own linguistic styles. Those linguistic styles are then in turn disparaged and even erased for falling outside the norm.

Perhaps one of the most well-documented examples of this is Su Lin Blodgett and Brendan O’Connor’s study on Racial Disparity in Natural Language Processing. As Eisenstein points out, it is trivially impossible for Twitter to represent a coherent linguistic domain – users around the globe use Twitter in numerous languages.

The implicit pre-processing step, then, before even normalizing “bad” text to be in line with dominant norms, is to restrict analysis to English-language text. Blodgett and O’Connor find that tweets from African-American users are over-represented among the tweets thrown out for being non-English.
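A crude way to see how such a filter can misfire is a toy “English-likeness” check (a hypothetical wordlist and threshold, not Blodgett and O’Connor’s actual classifier) that keeps a tweet only if enough of its tokens match a standard vocabulary:

```python
# Toy sketch: naive language filtering against a "standard English" wordlist.
# Vocabulary and threshold are illustrative assumptions, not a real system.
STANDARD_VOCAB = {"the", "a", "is", "this", "i", "you", "it",
                  "so", "good", "was", "love", "going", "to"}

def looks_english(text, threshold=0.5):
    """Keep a text only if at least `threshold` of its tokens are in-vocabulary."""
    tokens = text.lower().split()
    hits = sum(t in STANDARD_VOCAB for t in tokens)
    return hits / len(tokens) >= threshold

print(looks_english("this is so good"))     # standard spelling: kept
print(looks_english("dis iz sooo gooood"))  # same meaning, non-standard: dropped
```

The second tweet expresses the same sentiment as the first, but because its spellings fall outside the wordlist, the filter silently discards it as “non-English” – which is exactly the kind of systematic exclusion the study documents at scale.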

Dealing with non-standard text is not easy. Dealing with a living language that can morph in a matter of days or even hours (#covfefe) is not easy. There’s no getting around the fact that researchers will have to make difficult calls in how to process this information and how to appropriately manage dimensionality reduction.

But the worst thing we can do is to pretend that it is not a matter of concern; to begin our work by thoughtlessly filtering and normalizing without giving significant thought to what we’re discarding and what that discarded data represents.


Don’t Miss the Sept. 20th Nevins Fellowship Confab Call

As we announced last month, NCDD is hosting a special Confab Call with the McCourtney Institute for Democracy and Healthy Democracy next Wednesday, September 20th from 1-2pm Eastern / 10-11am Pacific. The call is the best place to learn more about this incredible opportunity to have a D&D trained student come work with your organization at no cost, so we strongly encourage the NCDD network to register today!


During the call, NCDD Member and McCourtney’s Managing Director Christopher Beem will provide an overview of the Nevins Democracy Leaders Program and its aims, discuss the training that the future fellows are going through, and share more about how your organization can take advantage of this great chance to help cultivate the next generation of D&D leaders while getting more support for your work – all for FREE! We’ll also be joined by NCDD Member Robin Teater of Healthy Democracy, who will share her experiences hosting a fellow this summer.

NCDD is proud to have partnered with the McCourtney Institute over the last couple of years to help identify organizations in the field that can host Nevins fellows, and we’re continuing the exciting partnership this year. You can get a better sense of what the program experience is like by checking out this blog post from a 2017 Nevins Fellow about their summer fellowship with NCDD Sponsoring Member The Jefferson Center.

This is a rare and competitive opportunity for leading organizations in our field, and this Confab Call will be one of the best ways to find out more about how your group can take advantage of this program, so make sure to register today to save your spot on the call! We look forward to talking with you more then!

Tunnel-dialogue: Application of methods and processes for participatory citizen engagement on ecologically relevant investment decisions in the special case of road tunnel filters

Author: 
What effects does the exhaust air of the "Einhorn-Tunnel" in Schwäbisch Gmünd have on humans and the environment, and what benefits would the installation of a tunnel filter bring? A citizen dialogue in Schwäbisch Gmünd was established to gather results on this topic.

Data Technologies Colonize the Ontological Frontier

Writing recently in Medium, Salvatore Iaconesi -- a designer, engineer and founder of Art is Open Source and Human Ecosystems -- offers an extremely important critique of the blockchain and other data-driven network technologies.

While recognizing that these systems have enormous potential for “radical innovation and transformation,” he astutely warns against their dangerous psychological and cultural effects. They transfer entire dimensions of perception, feeling, relationships, trust-building, and more -- both individually and collectively experienced -- into algorithmic calculations. The System becomes the new repository of such relational epiphenomena. And in the process, our very sense of our shared agency and intentionality, relationships, and a common fate begins to dissolve.

In their current incarnations, the blockchain and related network-based technologies start with the ontological presumption that everything can be broken apart into individual units of feeling and action. Our character, viewpoints, emotions, behaviors, and more are all translated into data-artifacts. This is the essential role of data, after all – to distill the world into manipulable, calculable units of presumably significant information.

Think that you are a whole human being?  Forget it.  Data systems are abstracting, fragmenting and filleting our identities into profiles that we don’t even control. A simulacrum of our "real identities" is being constructed to suit new business models, laying the foundation for what Iaconesi calls the "transactionalization of life." As he writes:

Everything is turning into a transaction: our relationships, emotions and expressions; our ways of producing, acquiring and transferring knowledge; communication; everything.

As soon as each of these things become the subject of a service, they become transactions: they become an atomic part of a procedure.

Because this is what a transaction is: an atom in a procedure, in an algorithm. This includes the fact that transactions are designed, according to a certain business, operational, strategic, marketing model.

This means that when our relationships, emotions, expressions, knowledge, communication and everything become transactions, they also become atoms of those business models whose forms, allowances, degrees of freedoms and liberty are established by those models.

"Everything, including our relations and emotions, progressively becomes transactionalized/financialized, and the blockchain represent an apex of this tendency. This is already becoming a problem for informality, for the possibility of transgression, for the normation and normalization of conflicts and, thus, in prospect, for our liberties and fundamental rights, and for our possibility to perceive them (because we are talking about psychological effects)," according to Iaconesi.

How does this process work?

By moving "attention onto the algorithm, on the system, on the framework. Instead of supporting and maintaining the necessity and culture of establishing co-responsibility between human beings, these systems include “trust” in procedural ways. In ways which are technical. Thus, the necessity for trust (and, thus, on the responsibility to attribute trust, based on human relations) progressively disappears," he writes.

Therefore, together with it, society disappears. Society as actively and consciously built by people who freely decide if and when to trust each other, and who collectively agree to the modalities of this attribution.

What remains is only consumption of services and products. Safe, transparent and all. But mere transactionalized consumption. Society ends, and so does citizenship: we become citizen of nothing, of the network, of the algorithm.

These are not technical issues, but psychological ones, perceptive ones. And, thus, even more serious.

As soon as I start using them [blockchains], as soon as I start imagining the world through them, everything starts looking as a transaction, as something which is “tokenizable”….Technology creates us just as much as we create technology.

In short, the radical atomization, objectification and financialization of human relationships begins to dissolve the very idea of a shared society.

Institutions and other people disappear, replaced by an algorithm. Who knows where trust is at/in! It is everywhere, diffused, in the peer-to-peer network. Which means that it’s nowhere, and in nobody.

In a weird way it is like in call centers: they are not really useful for the client, and they completely serve the purpose minimizing bother for the companies, letting clients slipping into the “procedure” (which is synonym with algorithm), and avoiding them from obtaining real answers and effects, in their own terms outside of procedures.

These are all processes which separate people from each other, from institutions, organizations, companies, through the Procedure.

Citizens of everywhere. Citizens of nowhere and nothing.

So what might be done?  

Iaconesi talks about the Third Infoscape, which derives from the concept of the Third Landscape.  He writes that in the Third Landscape, “where ‘technicians’ see ‘weeds,’ the Third Landscape sees opportunity, biodiversity, an open source media which is a reservoir for the future of the planet, which does not require energy to maintain, but produces energy, food, knowledge, relations.”

Citing Marco Casagrande, Iaconesi argues that data and information should not be “laid out geometrically, formally, as in gardens, but more like the woods and wild nature, in which multiple forms of dimensions, boundaries, layers and interpretations co-exist by complex desire, relation and interaction, not by design.”

This, of course, implies “a different kind of technology, a different kind of science, with a different imagination to support it.” It also implies that we begin to speak not just of technology design, but of “sensibility, imagination and aesthetics.” 

Iaconesi's critique reminded me of Montreal-based communications professor Brian Massumi's important 2015 book, Ontopower: War, Powers, and the State of Perception.  His basic thesis is that the national security state, in its perpetual fight against terrorism, has telescoped its political priorities into a new ontological paradigm. It seeks to validate a new reality through what he calls “ontopower.” This is “the mode of power embodying the logic of preemption across the full spectrum of force, from the ‘hard’ (military intervention) to the ‘soft’ (surveillance).”

The point is that perception of reality itself is the new battleground. Power is not just carried out in overt state or policy settings – legislatures, courts, the media. State power wants to go beyond messaging and framing the terms of debate. It has deliberately moved into the realm of ontology to define the terms of reality itself. In the national security context, that means that nefarious terrorist threats exist potentially everywhere, and thus the logic of preemptive military action (drone killings, extra-legal violence, etc.) is fully justified. (Cf. the film Minority Report.)

Massumi writes:

“Security threats, regardless of the existence of credible intelligence, are now felt into reality. Whereas nations once waited for a clear and present danger to emerge before using force, a threat's felt reality now demands launching a preemptive strike. Power refocuses on what may emerge, as that potential presents itself to feeling.”

So if ontopower is arising as a new strategy in national security agencies, it should not be surprising that a related mode of ontopower is being developed by Silicon Valley, which is a frequent partner with the national security state. 

The new frontier in Big Tech is to leverage Big Data to obtain unassailable market dominance and consumer control.  Naturally, the surveillance state envies this capacity and wants to be dealt into the game. Hence the tight alliances between the US Government and Silicon Valley, as revealed by Snowden. Now that the likes of Google, Amazon, Facebook and others have secured political and economic supremacy in so many markets and cultural vectors, is it any wonder that such power is itching to define social reality itself?

Democrats as technocrats

This web search takes you to a whole stack of good recent writing about the Democratic Party as the technocratic party, with headlines ranging from Twilight of the Technocrats? to The Triumph of the Technocrats. In lieu of a critical review, I’d pose these questions:

  1. What would a technocrat support and do in our context? It’s possible to be a socialist technocrat or a technocrat who works for a huge, for-profit company. I presume that a technocratic Democrat today is someone who believes in optimizing GDP growth, environmental sustainability, and reductions in tangible human distress (e.g., disease, homicide) through efficient governmental policies. These desired outcomes often conflict, and then technocrats are fine with compromise. To qualify as a technocrat, you can’t be too enthusiastic about working with ordinary citizens on public issues, and you can’t base your agenda on controversial, challenging moral ideals.
  2. Do Democrats present themselves as technocrats, in this sense? Some do and some don’t. It seems fair to read the positive agenda of Hillary Clinton’s 2016 campaign as largely technocratic (she promised to govern competently and continue the balanced progress of her predecessor), although her critique of Donald Trump was ethical rather than technical. I also think that Clinton was in a tough spot because she didn’t believe that she could accomplish transformative change with a Republican Congress; thus managerial competence seemed a workable alternative. The 2016 campaign does not demonstrate that she – let alone all Democrats – was fully technocratic. However, consider a different case that is pretty revealing: the Josiah Bartlet Administration. This is an informative example just because it is idealized and fictional, free of any necessary constraints. The Bartlet White House is staffed with hard-working, highly-educated, unrealistically competent, smartest-guy-in-the-room, ethical people who strive to balance the budget while making incremental progress on social issues. Hollywood’s idealized Democrats are technocrats in full.
  3. Do Democrats choose technocratic policies? Again, I’d say “sometimes.” Both the Clinton and Obama Administrations definitely showed some predilection for measurable, testable outcomes; for behavioral economics; and for models that were consistent with academic research about the economy and the climate. They weren’t particularly good at empowering citizens to govern themselves or collaborating with social movements. On the other hand, the Affordable Care Act has a moral core (aiming to cover people without health insurance), even if many of its tools and strategies are best defined as technocratic.
  4. Are Democrats good technocrats? There has been more economic growth under Democratic than Republican presidents. But the sample is small, several Democratic presidents faced conservative congresses, and any correlation with a small “n” can easily be spurious. A deeper point is that Democrats are currently more committed to the mainstream findings of climate science, social policy research, and academic economics than Republicans are. Their accomplishments may be affected by sheer chance, but their strategies tend to be consistent with positivist, empirical research.
  5. Is Democratic technocracy consistent with justice? No. Almost any theory of justice, from libertarian to strongly egalitarian, would demand fundamental shifts from the status quo. Certainly, I would favor deeper changes in our basic social contract. On the other hand, compared to what? Managing our existing social policies in a competent way delivers substantial, if inadequate, justice. It beats incompetence or deliberate assaults on existing social institutions. In a multi-party parliamentary democracy, a center-left technocratic party would play an important role. I would be open to voting for it, depending on the circumstances and the alternatives. In our two-party system, a technocratic and centrist component competes for control of the Democratic Party. It shouldn’t be surprising that this component receives constant criticism from within the Party, because the Democrats represent a broader coalition, and there is plenty of room to the left of someone like Hillary Clinton. Whatever you think of her, I don’t think you can complain that she was criticized from her left.
  6. Is Democratic technocracy good politics? That’s not a question that will be settled to everyone’s satisfaction any time soon. Clinton lost to Trump but also won the popular vote. She was technocratic but not completely so. She faced many contingencies, from Fox News to Bernie to Comey, and handled them in ways that we can debate for the next decade. Again, the answer has to be: Compared to what? A compelling new vision of America’s social contract would beat competent management at the polls. But competent management may beat incompetence or a deeply unpopular vision (from either right or left).
  7. What’s driving the Democratic Party’s drift to technocracy? One could explain it in class terms: the Democratic coalition is now highly educated, including many people who make a living by demonstrating expertise. But I would propose a deeper thesis. Modernity itself is defined by constant increases in specialization and differentiation, plus radical doubts about our ability to know which ends are moral or just. In that context, people prosper who are good at applying technical reasoning to complex problems without worrying too much about whether the ultimate ends are right. Modernity has generated a white-collar governing class that is currently aligned with the Democrats, but more than that, it has generated a very high estimation of expertise combined with a leeriness about moral discourse. Religious conservatives monopolize the opposition to both of these trends. Getting out of this trap requires more than new messages and policies. It is a fundamental cultural problem.

See also: the rise of an expert class and its implications for democracy; varieties of neoliberalism; the big lessons of Obamacare; the new manipulative politics: behavioral economics, microtargeting, and the choice confronting Organizing for Action; and why the white working class must organize.

Graduate Workers Need a Union

Last night I attended a great panel hosted by the Graduate Employees of Northeastern University (GENU), a union of research and teaching assistants. The union is currently working towards holding its first election and becoming certified with the National Labor Relations Board (NLRB), an independent federal agency which protects employees’ rights to organize and oversees related laws.

Those of you immersed in academic life may have noticed a recent increase in organizing efforts among graduate workers at many institutions – this is due to a 2016 ruling by the NLRB that “student assistants working at private colleges and universities are statutory employees covered by the National Labor Relations Act.”

In other words, graduate employees have the right to organize.

Those not immersed in academic life, or less familiar with graduate education, might find this somewhat surprising. As someone said to me when I told them about this panel, “wait, you’re an employee? Aren’t you a student?”

Well, yes. I am an employee and a student. These two identities and lives are complexly intertwined and can be difficult to distinguish – when am I a worker and when am I a learner?

Often I am both simultaneously.

But I think about the perspective of the student program staff at the college where I worked for several years before starting my PhD. Collectively, we made a lot of student employment decisions – hiring student workers to help around the office and selecting paid student fellows to work at local organizations. Those students – primarily undergraduates – were workers, too, but every decision we made was centered around the question: how will this improve the student’s education?

That is, their student identity was always centered. Work expectations always deferred to course expectations. We looked to hire students who were prepared to learn a lot from their experiences, and we created structured mentorship and other activities to ensure student learning was properly supported and enhanced. The work was good work which needed to be done, but the primary purpose of these opportunities was always to create space for students to learn.

Graduate student work is…a bit more complicated. I have been fortunate in my own graduate experience, but I couldn’t even begin to enumerate the horror stories I’ve heard from other graduate employees whose work is most definitely work.

Even assuming good faculty members and good departments, the entire structure of American higher education is designed to exploit graduate students as cheap labor. Their labor may serve to enhance the undergraduate experience, but is rarely designed to enhance their own.

This problem is exacerbated by the fact that graduate student workers have virtually no power, while the faculty, departments, and administrations they serve have a great deal of power over them. For graduate workers it is often not possible to simply “get another job” – a difficult undertaking in any vocation. International students are particularly vulnerable, as their visa status could be taken away in a heartbeat.

As several of the panelists mentioned last night, many graduate students simply try to “keep their head down” in the face of this power imbalance. Stay quiet, don’t complain, and do your best to keep focused on the research you’re passionate about.

This is a reasonable coping response, but the reality is that silence never fixes a problem, and sometimes trouble will find you no matter how hard you try to avoid it.

Nearly all of the panelists had a story of someone who was unfairly targeted for termination, who was entirely taken by surprise when a department in which they “had no problems” suddenly had a serious problem with them.

Without a union these become the isolated stories of isolated individuals. They are personal problems to be worked out and ignored at the local level. In the absence of clear rules and expectations, they will happen again, and again, and again – in good departments and bad – with very little recourse for the individuals involved and with no resulting structural change to prevent it from happening again.

Unions build collective power. They build the ability of a people to come together, to share their ideas and concerns, and to work together with a common voice in order to achieve mutually-agreed upon outcomes.

As one of the panelists from a faculty union described, forming a union was a clarifying experience. It brought the community together and generated a clear, shared understanding of common problems and collective solutions. It created venues for enabling structural and policy changes that had been deeply needed for years.

Perhaps most fundamentally, it is important to understand that a union is not some abstract, outside thing. It is a living thing. It is the workers. It is a framework which allows us to work together, learn together, and build together. It is formed from our voices in order to address our concerns and to protect our interests.

We are the union.

And graduate student workers need a union.

The live-stream of the event, which focused specifically on STEM workers, can be seen here. Particularly for those at Northeastern, please check the GENU-UAW website, Facebook page, and Twitter.


Social and Algorithmic Bias

A commonly lamented problem in machine learning is that algorithms are biased. This bias can come from different sources and be expressed in different ways, sometimes benignly and sometimes dramatically.

I don’t disagree that there is bias in these algorithms, but I’m inclined to argue that in some senses, this is a feature rather than a bug. That is: all methodical choices are biased, all data are biased, and all models are wrong, strictly speaking. The problem of bias in research is not new, and the current wave of despair is simply a reframing of this problem with automated approaches as the culprit.

To be clear, there are serious cases in which algorithmic biases have led to deeply problematic outcomes. For example, when a proprietary, black box algorithm regularly suggests stricter sentencing for black defendants and those suggestions are taken to be unbiased, informed wisdom – that is not something to be taken lightly.

But what I appreciate about the bias of algorithmic methods is the visibility of their bias; that is – it gives us a starting point for questioning, and hopefully addressing, the inherent social biases. Biases that we might otherwise be blind to, given our own personal embedding in the social context.

After all, strictly speaking, an algorithm isn’t biased; its human users are. Humans choose what information becomes recorded data and they choose which data to feed into an algorithm. Fundamentally, humans – both specific researchers and through the broader social context – chose what counts as information.

As urban planner Bent Flyvbjerg writes: Power is knowledge. Those with power not only hold the potential for censorship, but they play a critical role in determining what counts as knowledge. In his ethnographic work in rural Appalachia, John Gaventa similarly argues that a society’s power dynamics become so deeply entrenched that the people embedded in that society no longer recognize these power dynamics at all. They take for granted a shared version of fact and reality which is far from the unbiased Truth we might hope for – rather it is a reality shaped by the role of power itself.

In some ways, algorithmic methods may exacerbate this problem – as algorithmic bias is applied to documents resulting from social bias – but a skepticism of automated approaches opens the door to deeper conversations about biases of all forms.

Ted Underwood argues that computational algorithms need to be fundamentally understood as tools of philosophical discourse, as “a way of reasoning.” These algorithms – even something as seemingly benign as rank-ordered search results – deeply shape what information is available and how it is perceived.

I’m inclined to agree with Underwood’s sentiment, but to expand his argument broadly to a diverse set of research methods. Good scientists question their own biases and they question the biases in their methods – whether those methods are computational or not. All methods have bias. All data are biased.

Automated methods, with their black-box aesthetic and hopefully well-documented Git pages, may make it easier to do bad science, but for good scientists, they convincingly raise the specter of bias, implicit and explicit, in methods and data.

And those are concerns all researchers should be thinking about.


no justice, no peace? (on the relationship between these concepts)

As a political philosopher, I’m trained to think about justice versus injustice. Both terms are controversial. It would be hard to find two people (even two who might be labeled “social justice warriors”) who define “justice” exactly alike. We each put together our own recipes using various combinations and flavors of liberty, equality, happiness, solidarity, sustainability, rights, voice, agency, status, security, and other values that conflict in practice. Injustice is equally complicated, and it may not mean the mere absence or negation of justice. But although the polarity of justice/injustice does not generate consensus, it structures many of our debates.

There’s another polarity that plays an analogous role for people who have been strongly influenced by Gandhi or the Civil Rights Movement: for instance, people who work in Peace and Conflict Studies.  This is the polarity of violence versus peace.

Again, both terms are complicated. Just as it won’t really work to define “justice” as equality (Equality of what? For whom? Equality and nothing else?), so it doesn’t work to define violence as physical assault, or peace as the absence of violence. Like justice, peace can provide the framework for a discussion in which various definitions are proposed and defended.

The following schematic diagram depicts these polarities as two different axes. It implies that it’s conceptually possible to have an unjust situation of peace or a just case of violence. Consider, for example, the imprisonment of a former dictator. He is arrested at gunpoint and forced into a cell (violence) but that’s a manifestation of justice. This case belongs in the bottom-left quadrant. On the other hand, situations of political quiescence involve living in injustice without any conflict: the top-right.

Some would argue that (true) justice is (true) peace; and injustice equals violence. Then the schematic is wrong; there is just one continuum whose ends should be labeled “Peace/Justice” and “Violence/Injustice.”

Or it could be that peace is a component of justice but not the only component. Perhaps you can’t have perfect justice with violence, but you can have a violent situation that’s more just than a peaceful situation would be, if the former scores higher on liberty, equality, or some other value.

My general instinct is to resist smooshing values together, because then we fool ourselves into ignoring tradeoffs. For instance, I don’t like to load lots of values into the definition of “democracy.” I prefer to define it as a system for making binding decisions in a group that affords everyone roughly equal influence. Then we can ask whether democracy requires or implies other values, such as social equality or freedom of speech, or whether it conflicts with these values. The same logic would encourage distinguishing between peace and justice as two different goods.

I have not made up my mind on this question, but here’s a text with which to think about it. Dr. King visited Joan Baez and other anti-war protesters in prison in Santa Rita, CA, on Jan. 14, 1968. Addressing a crowd outside the prison, he said (in my transcription from the audio):

There can be no justice without peace and there can be no peace without justice. People ask me from time to time, ‘Aren’t you getting out of your field? Aren’t you supposed to be working in civil rights?’ They go on to say, ‘The two issues are not to be mixed.’ And my own answer is that I have been working too long and too hard now against segregated public accommodations to end up at this stage of my life segregating my moral concerns. For I believe absolutely that justice is indivisible and injustice anywhere is a threat to justice everywhere. And I want to make it very clear that I’m going to continue with all of my might, with all of my energy, and with all of my action to oppose that abominable, evil, unjust war in Vietnam.

The first sentence might mean that peace and justice are always causally connected: one is necessary for the other. But then it becomes clear that King’s struggle for Civil Rights is about “justice,” and his opposition to the war is about “peace,” and he wants to connect these two concerns because they are both “moral.” That implies that justice and peace are two distinct components of a larger category: what is moral or right. Finally, King defines the specific war in Vietnam as unjust, leaving open the possibility that a different war (e.g., the US Civil War?) might be just. In that case, peace in Vietnam is a necessity of justice but not because it will bring about peace; only because the war is an injustice.

At another level, of course, King insists on peace as a strategy for justice. Active nonviolence is an ethical and effective method in a wide range of circumstances, with a better record of success than violent insurrection has. But analytically, we could still distinguish between peaceful and just means and between peaceful and just ends and then ask when any of these four go together.

See also: the kind of sacrifice required in nonviolence; social justice should not be a cliché; and we are for social justice, but what is it?