Semantic and Epistemic Networks

I am very interested in modeling a person’s network of ideas. What key concepts or values particularly motivate their thinking and how are those ideas connected?

I see this task as being particularly valuable in understanding and improving civil and political discourse. In this model, dialogue can be seen as an informal and iterative process through which people think about how their own ideas are connected, reason with each other about what ideas should be connected, and ultimately revise (or don’t) their way of thinking by adding or removing idea nodes or connections between them.

This concept of knowledge networks – epistemic networks – has been used by David Williamson Shaffer to measure the development of students’ professional knowledge; e.g., their ability to “think like an engineer” or “think like an urban planner.” More recently, Peter Levine has advanced the use of epistemic networks in “moral mapping” – modeling a person’s values and ways of thinking.

This work has made valuable progress, but a critical question remains: just what is the best way to model a person’s epistemic network? Is there an unbiased way to determine the most critical nodes? Must we rely on a given person’s active reasoning to determine the links? In the case of multi-person exchanges, what determines if two concepts are the “same”? Is semantic similarity sufficient, or must individuals actively discuss and determine that they do each indeed mean the same thing? If a person adjusts a visualized epistemic network following a discussion, can we distinguish genuine changes in view from corrections of accidental omissions?

Questions and challenges abound.

But these problems aren’t necessarily insurmountable.

As a starting place, it is helpful to think about semantic networks. In the 1950s, Richard H. Richens originally proposed semantic networks as a tool to aid in machine translation.

“I refer now to the construction of an interlingua in which all the structural peculiarities of the base language are removed and we are left with what I shall call a ‘semantic net’ of ‘naked ideas,'” he wrote. “The elements represent things, qualities or relations…A bond points from a thing to its qualities or relations, or from a quality or relation to a further qualification.”

Thus, from their earliest days, semantic networks were seen as somewhat synonymous with epistemic networks: words presumably represent ideas, so it logically follows that a network of words is a network of ideas.

This may well be true, but I find it helpful to separate the two ideas. A semantic network is observed; an epistemic network is inferred.

That is, through any number of advanced Natural Language Processing algorithms, it is essentially possible to feed text into a computer and have it return a network of the words which are connected in that text.

You can imagine some simple algorithms for accomplishing this: perhaps two words are connected if they co-occur in the same sentence or paragraph. Removing stop words prevents your retrieved network from being over-connected by instances of “the” or “a.” Part-of-speech tagging – a relatively simple task thanks to huge databanks of tagged corpora – can bring an additional level of sophistication. Perhaps we want to know which subjects are connected to which objects. And there are even cooler techniques relying on probabilistic models or projections of the corpus into k-space, where k is the number of unique words.
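
The sentence co-occurrence idea can be sketched in a few lines of Python. This is a toy version of the approach, not a serious implementation: the stop word list is a stub (real work would use a full list, such as NLTK’s), and the tokenizer is a bare regular expression.

```python
import itertools
import re
from collections import defaultdict

# Illustrative stub; a real stop word list is much longer.
STOP_WORDS = {"the", "a", "an", "is", "of", "and", "to", "in"}

def cooccurrence_network(text):
    """Connect two words whenever they co-occur in the same sentence.

    Returns a dict mapping sorted word pairs to co-occurrence counts,
    i.e. a weighted edge list for the semantic network.
    """
    edges = defaultdict(int)
    for sentence in re.split(r"[.!?]+", text):
        words = {w for w in re.findall(r"[a-z']+", sentence.lower())
                 if w not in STOP_WORDS}
        for u, v in itertools.combinations(sorted(words), 2):
            edges[(u, v)] += 1  # edge weight = number of shared sentences
    return dict(edges)

network = cooccurrence_network(
    "Ideas connect. Ideas shape the values. Values shape discourse.")
```

Even this crude sketch produces a weighted network: “shape” and “values” co-occur twice, so that edge is twice as heavy as the others, while “the” never appears as a node at all.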

These models typically assume some type of unobserved data – eg, we observe a list of words and use that to discover the unobserved connections – but colloquially speaking, semantic networks are observed in the sense that they can be drawn out directly from a text. They exist in some indirect but concrete way.

And while it seems fair to assume that words do indeed have meaning, it still takes a bit of a leap to take a semantic network as synonymous with an epistemic network.

Consider an example: if we were to take some great novel and cleverly reduce it to a semantic network, would the resulting network illustrate exactly what the author was intending?

The fact that it’s even worth asking that question indicates, to me, that the two are not intrinsically one and the same.

Arguably, this is fundamentally a matter of degree. It seems reasonable to say that, unless our algorithm was terribly off, the semantic network can tell us something interesting and worthwhile about the studied text. Yet it seems like a stretch to claim that such a simplistic representation could accurately and fully capture the depth of concepts and connections an author was seeking to convey.

If that were the case, we could study networks instead of reading books and – notably – everyone would agree on their meaning.

A semantic network, then, can be better considered as a representation of an epistemic network. It takes reason and judgement to interpret a semantic network epistemically.

Perhaps it is sufficient to be aware of the gap between these two – to know that interpreting a semantic network epistemically necessarily means introducing bias and methodological subjectivity.

But I wonder if there’s something better we can do to model this distinction – some better way to capture the complex, dynamic, and possibly conflicting essence of a more accurately epistemic network.


On Trolls and Dissenters

Community meetings of all types and topics are frequently endangered by a common complication: that guy.

The person who speaks longer than anyone wants them to, who raises concerns that are unpopular amongst the broader public, or who unfailingly uses every public platform as an opportunity to promote their pet issue, whether it is on topic or not.

Many a meeting has been derailed by this character’s irrelevant ravings, and many a community member has been silenced – fearing that if they spoke up they might appear just as mad.

But there’s an interesting dilemma in this portrayal: of the many actions, motivations, and outcomes which could be lumped into this category, some are productive and some are not.

Manin persuasively argues that debate of conflicting views is a necessary condition for successful deliberation – with groups otherwise likely to default towards prevailing norms. Diversity of views is not enough; “disagreement in face-to-face interactions generates psychic discomfort” which groups will avoid given the opportunity.

Good deliberation, then, requires disagreement and debate as a core element – not as something which may arise or not as the context decides.

How, then, can one distinguish the actions of a counter-productive troll and a valuable dissenter? Many times, the unpopular thing needs to be said.

Rachel Barney’s excellent [Aristotle], On Trolling – written, as the name implies, in the spirit of Aristotle – lends some helpful guidance to this question. “Every community of speakers holds certain goods in common, and with them the conversation [dialegesthai] as an end in itself; and the troll is one who seeks to damage it from within.”

The troll, then, actively seeks to destroy a community, to set “the community apart from each other” and introduce “strife where before there was scarcely disagreement.”

Barney/Aristotle is careful to note that the troll can be distinguished from the productive dissenter which Manin imagines:

One might wonder whether there is an art of trolling and an excellence; and indeed some say that Socrates was a troll, and so that the good man also trolls. And this is in fact what the troll claims: that he is a gadfly and beneficial, and without him to ‘stir up’ the thread it would become dull and unintelligent. But this is incorrect. For Socrates was speaking frankly when he told the Athenians to care for their souls, rather than money and honors, and showed that they lacked knowledge. And this is not trolling but the contrary, exhortation and truth-telling— even if the citizens get very annoyed. For annoyance results from many kinds of speech; and the peculiarity [idion] of the troll is not annoyance or controversy in general, but confusion and strife among a community who really agree.

Thus the troll takes the guise of a productive dissenter, whom a democratic people would do well to embrace, while actually seeking to destroy, not improve, a community through their dissension.

This may be a meaningful epistemic distinction, yet it can be challenging to define in practice. As Manin points out, a “community who really agree” may have simply come to agree through the processes of group dynamics.

Importantly, this type of agreement is not intrinsically related to issues of power and oppression. That is, while one may argue that agreement arrived through coercion is not really agreement at all, Manin is primarily concerned with instances where a group can be genuinely said to agree. The root of this surface agreement may not be coercion at all, but rather an unfortunate result of the fact that individuals tend to be biased and, worse yet, “groups process information in a more biased way than individuals do.”

That is, without some gadfly perturbing the system, groups tend to systematically shift toward consensus, “regardless of the merits of the issue being discussed.”

If we, like Barney/Aristotle, are to take trolling as inherently bad, more productive forms of dissent, exhortation, or truth-telling must then be distinguished. Therefore, following Manin, I’d be inclined to push back on defining a troll as one who sows discord amidst a community which agrees. If agreement was achieved through systematic social processes, perhaps a little discord could be good.

One then might seek to capture trolling through a broader definition of motivation: a troll seeks to destroy while a dissenter seeks to improve.

Importantly, though, destruction is not intrinsically beyond a dissenter’s concern: indeed, a dissenter may seek to break corrupt institutions and social structures. To smash context rather than settle for reformist tinkering, as legal scholar Roberto Unger would say.

More accurately, then, a dissenter can be seen as seeking to improve the human condition, apart from the specific context of political structures, while a troll – like Eris – seeks solely to sow discord.

In his 1992 address to Wroclaw University, Václav Havel argues in favor of breathing “something of the dissident experience into practical politics.”

“The politics I refer to here cannot be enshrined in or guaranteed by any law, decree, or declaration,” Havel says. “It cannot be hoped that any single, specific political act might bring it about and achieve it. Only the aim of an ideology can be achieved. The aim of this kind of politics, as I understand it, is never completely attainable because this politics is nothing more than a permanent challenge, a never-ending effort that can only in the best possible case leave behind it a certain trace of goodness.”

This permanent challenge is the noble undertaking of the dissenter, whether in the form of sweeping revolution or more mundane provocations.

In the mundane world of practical politics, then, this leaves us still with the problem: how do we distinguish the permanent challenge of the dissenter from the wanton destruction of the troll?


Social Science that Matters (?)

The social sciences, some would argue, suffer from a ‘soft’ problem.

As Laurence Smith et al. describe in a 2000 article published in the aptly named Social Studies of Science, “Dating back at least to the writings of Auguste Comte, it has been thought that the sciences can be arrayed in a hierarchy, with well-developed natural sciences (such as physics) at the pinnacle, the social sciences at the bottom, and the biological sciences occupying an intermediate position.”

This hierarchy indicates somehow the ‘hardness’ or ‘softness’ of a discipline. The natural sciences are more purely ‘science;’ more genuinely a description of nature as it is. The social sciences, on the other hand, are ‘softer’ – less predictive, testable, rigorous, or, perhaps, simply more subjective.

It’s generally unclear just what defines the hard/soft hierarchy, but in comparing a number of different definitions, Smith continually found the same thing: physics is the hardest science, sociology is the softest. Chemistry and biology are both well in the ‘hard’ science camp, while the analytic social sciences of psychology and economics skirt the ‘soft’ boundary and approach ‘hard’ territory.

This model makes social science out to be the poor cousin of the more prestigious natural sciences.

Whether you agree with that assessment of the social sciences or not, the inferiority complex and sense of always needing to justify the existence of one’s field affects the way social science is done.

As Danish economist and urban planner Bent Flyvbjerg describes, “inspired by the relative success of the natural sciences in using mathematical and statistical modelling to explain and predict natural phenomena, many social scientists have fallen victim to the following pars pro toto fallacy: If the social sciences would use mathematical and statistical modelling like the natural sciences, then social sciences, too, would become truly scientific.”

This pushes the social sciences down a computational path – a route, Flyvbjerg argues, which leads these otherwise valuable disciplines to produce more and more amounting to less and less.

“The more ‘scientific’ academic economics attempts to become,” he writes, “the less impact academic economists have on practical affairs.”

Furthermore, the whole attempt is foolhardy. As Flyvbjerg argues in Making Social Science Matter, “social science never has been, and probably never will be, able to develop the type of explanatory and predictive theory that is the ideal and hallmark of natural science.”

In emulating the computational and analytical approaches of the ‘hard’ sciences, social science aims to be something it is not and loses itself in the process.

As an (aspiring) computational social scientist, I find this argument worth thinking about.

Perhaps Flyvbjerg is too quick to write off the value of statistical approaches in social science, but nonetheless I find he has a compelling point.

Rather than trying to capture the episteme of the natural sciences, Flyvbjerg argues, the social sciences would do better to embrace phronesis. As he explains:

“In Aristotle’s words phronesis is a ‘true state, reasoned, and capable of action with regard to things that are good or bad for man.’ Phronesis goes beyond both analytical, scientific knowledge (episteme) and technical knowledge or know-how (techne) and involves judgements and decisions made in the manner of a virtuoso social and political actor.”

Essentially, social scientists should not obsess over trying to measure and quantify everything, but should rather aim towards the humanist goal of seeking to understand what is good and what is bad.

Perhaps unlike Flyvbjerg, I don’t see an inherent conflict between these aims. I can imagine that amidst the realities of a bureaucratic academy and fervent publish or perish pressures, scholars might find themselves forced along a too narrow path – but I see this as a broader challenge facing academia, not a singular failing of social sciences.

There is, I think, great value in developing computational models for complex social systems; in seeking to quantify and measure numerous facets of human interaction. The failing in this episteme approach comes only when phronesis is ignored completely.

In his own work on urban development, Flyvbjerg has a great saying: power is knowledge.

“Power determines what counts as knowledge, what kind of interpretation attains authority as the dominant interpretation,” he writes in Rationality and Power. “Power procures the knowledge which supports its purposes, while it ignores or suppresses that knowledge which does not serve it.”

These words come amidst his in-depth account of the bureaucracy and power which continually corrupts an ambitious urban development project in Aalborg. Most notably, this corruption rarely comes in the form of overt suppression, but rather a subtle, persistent distortion of information. “Power often ignores or designs knowledge at its convenience.”

This reality is in sharp contrast to the democratic ideal which “prescribes that first we must know about a problem, then we can decide about it. For example, first the civil servants in the administration investigate a policy problem, then they inform their minister, who informs parliament, who decides on the problem. Power is brought to bear on the problem only after we have made ourselves knowledgeable about it.”

Accepting the distorting effect of power, it’s reasonable to be skeptical of computational “knowledge.” In this sense, an episteme approach would only serve to further the interests of power – adding scientific credibility to an already distorted presentation of knowledge.

This is a valid concern, but again I find it to be a question of extremes. All methodological choices have consequences, all findings require interpretation. Understanding that dynamic has more value than walking away.

Power is knowledge isn’t an admonition that knowledge ought to be abandoned altogether – rather it is a reminder: knowledge isn’t produced in a vacuum. Power shapes knowledge. Try as you might to be neutral and unbiased, this dynamic is inescapable. The computational social scientist is intrinsically a part of the system they seek to study.


Is Diversity Enough?

I’ve been reading Manin’s critical Democratic Deliberation: Why We Should Promote Debate Rather Than Discussion.

At the core of his argument, Manin complains that liberal theorists traditionally conflate “diversity of views” with “conflicting views,” holding that a necessary and sufficient condition for good deliberation is “that participants in discussion hold diverse views and articulate a variety of perspectives, reflecting the heterogeneity of their experiences and backgrounds.”

To be clear, Manin isn’t suggesting that diversity of thought isn’t critical to deliberation – rather, he argues, it is not sufficient.

“Diversity of views is not a sufficient condition for deliberation because it may fail to bring into contact opposing views,” he writes. “It is the opposition of views and reasons that is necessary for deliberation, not just their diversity.”

There are many ways in which the mere presence of diversity may not result in the articulation of divergent views. Social psychology research has well documented the challenges of confirmation bias, where people “systematically misperceive and misinterpret evidence that is counter to their preexisting belief.” Or even avoid conflicting evidence altogether.

To make matters worse, Manin points to research which further finds that “groups process information in a more biased way than individuals do, preferring information that supports their prior dominant belief to an even greater extent than individual people.”

More broadly, diverse experiences and views may not always translate directly into divergent opinions or perspectives on a given topic. Manin asks us to imagine a community facing a very reasonable and rational fear: say, a serial killer is on the loose. Discussing a proposal to expand police powers at this time of crisis, “the variety of perspectives and dispersion of social knowledge among them will ensure that many arguments, each deriving from the particular perspective, experience, or background of the speaker, are heard in support of expanding the prerogatives to the police.”

That is, the diverse reasons may all support the same view.

And finally, in a large heterogeneous society, diverse opinions and experiences may become polarized as fragmented, separate communities. That is, “a variety of internally homogeneous communities will coexist, each ignoring the views of the others.”

And, of course, there is the deep problem of power. Divergent perspectives will often go unspoken in situations where one group or groups have been systematically oppressed and silenced – where even explicit invitations to freely share their views are rightly perceived as hollow or outright disingenuous. This is a dynamic which John Gaventa documents powerfully in his study of poor, white coal miners in the Appalachian Valley.

The damaging impact of this dynamic cannot be overstated, as Gaventa argues: “power serves to create power. Powerlessness serves to re-enforce powerlessness. Power relationships, once established, are self-sustaining.”

Finally, there is the simple social challenge that “encountering disagreement”, as Manin writes, “generates psychic discomfort.” People don’t really like to argue.

(Of note here, there is little cross-cultural consideration in Manin, so while mainstream America’s distaste for argumentative discourse is well documented in numerous places, I’m not sure how broad a claim this properly ought to be.)

The solution to this seems simple: argue more. Take “deliberate and affirmative measures” to ensure lively debate and critical discussion. Don’t just assume that if diverse people are present, diverse voices will be heard. Seek out divergent views and conflicting arguments. If no one else says them – argue for them yourself.

This last point, I think, is particularly critical in looking at deliberation through a power-lens. If you are in a position of power, you are responsible for ensuring that diverse views are heard. This can mean working to create a safe space where people genuinely feel welcomed to share their views – or it can mean saying the unpopular thing yourself, putting it out there as a valid idea, worthy of further consideration.


Summer Writing Goals

As a Ph.D. student in the summer, I have, it seems, quite a bit of freedom. I’ll be working, of course, and I have no shortage of things to learn, but I’m faced with a vast expanse of time in which there is much to accomplish but little which is due. My time is my own.

It’s a gift, really, but one which requires some discipline and forethought to accept successfully. I spent much of last week putting together week-by-week learning goals for myself; papers to read, specific topics to study.

Looking through it now, though, I am struck by just how mechanical my goals are; I want to learn specific algorithms and approaches, catch up on specific literatures and authors. These are good and important uses of my time, but it strikes me, too, that something is missing.

I started this blog three years ago in part because – as a generically over-busy employee and adult – I wanted to ensure that I took time out in my life to think. I bunt on a lot of days, to be sure, but nonetheless it seems a worthwhile goal to try to think at least one interesting thought a day.

Perhaps what’s most exciting about the summer, then, is that it’s a time that allows for stepping back to look at the big picture; to remember the broader questions which motivate my work. I have learned a great deal of valuable knowledge in my classes, but they have done little to relieve my Wittgensteinian doubt that people can’t deeply communicate or my Lippmannian skepticism that ‘the people’ aren’t ultimately up to the task of governance. I have found, on the whole, my writing drifting away from questions of justice and equality towards questions of implementation and practicality.

I have written before that my primary motivation comes from the civic studies question: what should we do? In that sense, it seems fair to say that my attention lately has been focused primarily on the ‘what’ and the ‘do,’ while, perhaps, neglecting the ‘should.’

This is entirely to be expected, of course – the three elements are equally important but traditionally divorced in the academy. If I were a humanities Ph.D. student I’d no doubt be finding that I focus too much on the should with an unfortunate loss of practicality.

My summer writing goal, then, is to explore this should. I will continue to use the bulk of my time to study more practical topics of implementation, but this blog will be my space to step back and think about the broader questions. That’s what I want to make time for.

I’ll also keep writing, of course, about whatever random facts or interesting historical notes come my way. I wouldn’t want to miss out on that.

Broadly, then, and entirely for my own benefit – as this blog unapologetically is – here are just a few of the questions on my mind which I plan to spend some time thinking and writing about this summer:

Power – what is the role of power in the Good Society? How does it shape interactions and experiences? Is it achievable to eliminate power dynamics? Would we want to?

Dialogue – what are the strengths and limitations of dialogue as a tool for the collective work of governance? What institutions ought to be supplemented with public dialogue and when should public dialogue be supplemented by institutions? What are the realities of power dynamics in dialogue? Can they be overcome?

Institutions of democracy – What institutions are vital to democracy? How should they function and what should they accomplish? What institutions detract from democracy?

Historicism and morality – if human institutions and ideals are constantly subject to change, how do we know what is good? What would a continually Good Society look like? How would it change and adapt without simply falling into fads of the day?

Global society – why does it seem that the whole world is going to hell and what do we do about it? What structures and institutions can support the Good Society on a global scale? What are our individual responsibilities as global and local citizens?

Pessimism and skepticism – why hope is not always required. Or warranted.

Divergent views – What does it truly mean to make space for divergent views? How do you distinguish between creating space to consider unpopular opinions and giving a platform to trolls and bigots? Can one be accomplished and the other avoided?

Phronetic and computational social science – what is the role of measurement in social sciences? What does it add? What does it detract? Is social science trying too hard to be ‘science’? What results from seeking predictive social science?


Predictive Accuracy and Good Dialogue

While I’m relatively new to the computer science domain, one thing that’s notable is the field’s obsession with predictive accuracy. Particularly within natural language processing, the primary objective of most scholars – or, perhaps, more exactly, the requirement for being published – seems to be producing methods which edge past the accuracy of existing approaches.

I’m not really in a position to comment on the benefit of such a driver, but as an outsider, this focus is striking. I have to imagine there are great, historical reasons why the field evolved this way; that the mentality of constantly pushing towards incremental improvement has been an important factor in the great breakthroughs of computer science.

Yet, I can’t help but feel that in this quest for computational improvement, something important is being left behind.

There are compelling arguments that the social sciences have done poorly to abandon their humanistic roots in favor of emulating the fashionable fields of science; that in grasping for predictive measures, social science has failed its duty towards the most critical concerns of what is right and good. Perhaps, after all, questions of such import should not be solely the domain of philosophy departments.

It seems a similar objection could be raised towards computer science; and no doubt someone I’m not aware of has raised these concerns. Such an approach would go beyond the philosophical literature on moral issues in computer science, probing more deeply into questions of meaning, interpretation, and structure.

Wittgenstein questioned fundamentally what it means for two people to communicate. Austin argued that words themselves can be actions. And there is, of course, a long tradition in many cultures of words having power.

None of these topics, while intrinsic to natural language, seem to be deeply embraced by current approaches to natural language processing. Much better to show a two point increase in predictive accuracy.

And to a certain extent, this dismissal is fair. While I myself have a fondness for Wittgenstein, I imagine computer science wouldn’t advance far if, instead of developing algorithms, practitioners spent all their time wondering – if you tell me you are in pain, do I understand you because I, too, have had my own experiences of pain? How can I know what ‘pain’ means to you? 

Yet, while Wittgenstein’s Philosophical Investigations may be too far afield, it does highlight some practical issues. Perhaps metaphysical concerns about what it means to communicate can be safely disregarded, but this still leaves questions about what it looks like to communicate. That is, it seems reasonable to assume that miscommunication does happen, but what happens to dialogue plagued by such problems? What does it look like when people talk past each other or when they recognize a miscommunication and take steps to resolve it? Can an algorithm distinguish and properly parse these differences? Remembering, of course, that a human, perhaps, cannot.

In a recent review of literature around the natural language processing task of argument mining, I was struck by the value of a 1987 paper focused on understanding the structure of a single speech-act. It evoked no Wittgenstein-level of abstraction, and yet brought an important element of theory to the computational task of parsing a single argument.

I couldn’t find – and perhaps I missed it – any similar paper exploring the complex interactions of dialogue. Of course, there is much work done in this area among deliberation scholars – but this effort is not easily translated into the mechanized logic of algorithms.

In short, there seems to be a divide – a common one, I’m afraid, in the academy. In one field, theorists ask, what does it mean to deliberate? What makes good deliberation? And in another they ask, what algorithms can recognize arguments? What algorithms accurately predict stance? 

And, while both pursue important work, the fields fail to learn from each other.


Argument Structure

In her 1987 paper on “Analyzing the structure of argumentative discourse,” Robin Cohen laid out a theory of argument understanding comprised of three core components: coherent structure, linguistic clue interpretation, and evidence relationships. As the title suggests, this post focuses on the first of those elements: argument structure.

Expecting a coherent structure minimizes the computational requirements of argument mining tasks by limiting the possible forms of input. The coherent structure theory parses arguments as a tree of related statements, with every statement providing evidence for some other statement, and one root statement serving as the core claim of the argument. The theory posits that argument structures may vary, but there are a finite number of unique structures, and those structures are discoverable. Cohen herself introduces two such structures: pre-order, “where the speaker presents a claim and then states evidence,” and post-order, “where the speaker consistently presents evidence and then states the claim” (Cohen 1987).
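
Cohen’s tree assumption is easy to render in code. The following sketch is mine, not Cohen’s: the Statement class and the example argument are illustrative, but the pre-order and post-order traversals correspond to her two presentation structures.

```python
from dataclasses import dataclass, field

@dataclass
class Statement:
    """One node in an argument tree: a claim plus the statements supporting it."""
    text: str
    evidence: list = field(default_factory=list)

def pre_order(node):
    """Pre-order presentation: the speaker states a claim, then its evidence."""
    yield node.text
    for child in node.evidence:
        yield from pre_order(child)

def post_order(node):
    """Post-order presentation: the speaker states evidence, then the claim."""
    for child in node.evidence:
        yield from post_order(child)
    yield node.text

# The root is the core claim; every other statement supports its parent.
root = Statement("We should adopt the proposal",
                 [Statement("It lowers costs"), Statement("It carries little risk")])
```

The same tree thus yields two different surface orderings of the same statements, which is exactly why recovering the tree from the surface text is the hard part.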

Argument structure is a particularly notable and challenging element of argument mining. Identifying argument structure is essential for evaluating the quality of an argument (Stab and Gurevych 2014), but it is a difficult task which has gone largely unexplored. A key challenge is the lack of argument delimiters; one argument may span multiple sentences, and multiple premises may be contained in one sentence. In the resulting segmentation problem, we are able to determine which information belongs to the arguments, but not how this information is split into the different arguments (Mochales and Moens 2011).

To address this challenge, Mochales and Moens have sought to expand models of argument structure, parsing texts “by means of manually derived rules that are grouped into a context-free grammar (CFG)” (Mochales and Moens 2011). Restricting their focus to the legal domain – where arguments are consistently well-formed – Mochales and Moens manually built a context-free grammar in which a document has a tree structure (T) formed by an argument (A) and a decision (D). Further rules elucidated what elements may form the argument and what elements may form the decision. By maintaining a tree structure for identified arguments, Mochales and Moens broadened the range of possible argument structures without sacrificing too much computational complexity.
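As a rough illustration of what such a grammar looks like in practice, here is a small sketch with a tiny recursive-descent recognizer. The rule names (T, A, D, plus premise P and conclusion C) and the productions are my own simplification, not Mochales and Moens's actual grammar:

```python
# Illustrative productions: a document T is an argument A followed by a
# decision D; an argument is one or more premises P closed by a conclusion C.
GRAMMAR = {
    "T": [["A", "D"]],
    "A": [["P", "A"], ["P", "C"]],
}

def parse(symbol, tokens, i=0):
    """Return the index just past a successful parse of `symbol`, else None."""
    if symbol not in GRAMMAR:  # terminal: match a labeled text segment
        return i + 1 if i < len(tokens) and tokens[i] == symbol else None
    for production in GRAMMAR[symbol]:
        j = i
        for part in production:
            j = parse(part, tokens, j)
            if j is None:
                break
        else:  # every part of this production matched
            return j
    return None

# A toy 'document': two premises, a conclusion, and a decision.
doc = ["P", "P", "C", "D"]
assert parse("T", doc) == len(doc)          # accepted
assert parse("T", ["P", "D"]) is None       # no conclusion: rejected
```

Because every production keeps the argument a tree rooted at the decision's supporting claim, the grammar constrains the search space while still admitting arguments with any number of premises.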

Using this approach, Mochales and Moens were able to obtain 60% accuracy when detecting argument structures, measured by manually comparing the structures given by the CFG with the structures given by human annotators. This is a notable advancement over the simple structures introduced by Cohen, but there is still more work to be done in this area. Specifically, as Mochales and Moens point out, future work includes broadening the corpora studied to include additional types of argumentation structure, developing techniques which can identify and computationally handle structures more complex than trees, and incorporating feedback from those who generate the arguments being parsed. The limitation of this model to legal texts is particularly notable, as “it is likely it will not achieve acceptable accuracy when applied to more general texts in which discourse markers are missing or even misleadingly used (e.g. student texts)” (Stab and Gurevych 2014).


Deliberative Democracy and Who Gets to Speak

There is a radical idea at the core of deliberative theory: every person’s voice is important.

I say this idea is radical because it’s the kind of thing one generally feels they ought to say without necessarily being the kind of thing one is genuinely inclined to believe.

Believing every voice is important has the virtuous quality of implying an egalitarian sense of justice and equity. Being in favor of the continued oppression of the oppressed is hardly popular in most circles.

But making this claim, truly believing this claim, goes beyond the noble argument that those who are most vulnerable, who are most silenced, should, too, have a voice in our collective creation of the world.

Believing that every voice – every voice – is important means supporting blowhards and bigots, the ignorant and the idiots.

That is a difficult belief to bear.

One can try to resolve this conflict through imposed norms of consideration and inclusion, but such measures fall short of being deeply satisfactory. For one thing, they raise complex normative questions as people’s core identities conflict – cries of religious discrimination and reverse racism are sure to follow; arguably trading one person’s silence for another’s.

More deeply, while such norms importantly shape the safety of an otherwise hostile environment, they do little to eradicate the deep, systemic issues underneath. Being ‘color blind’ may have made overt racism impolite, but it has done little to resolve the structural racism of our society.

These are, of course, meaningful topics to debate – perhaps it is entirely worthy to ask a person of privilege to step back so that someone else has the opportunity to step up. Perhaps the harm done in silencing a bigot is little compared to the harm done in letting them speak.

But such discourse also highlights the deeper, theoretical tension: who gets to speak? whose voice is important?

So in this sense, believing that every voice is important is indeed radical.

That’s not at all to say that deliberative theorists want to support bigots and idiots, but it’s a narrow path to follow.

In most deliberative discussions, participants begin by setting their own ground rules. Sometimes rules are suggested to get them started, but this is the group’s first critical task of co-creation.

Because no one else can set these rules for them. No facilitator or outside person can tell them what to think or how to behave. The members of the group need to think about what kind of conversation they want to have and they each need to agree to the rules collectively set out.

Respect is typically among the first of these values – respecting the voice and experience of every person; those you agree with and, importantly, those with whom you don’t.

This is the only way out of this tangle.

Because to believe in the value of every voice means also to believe in the power of deliberative dialogue. To believe that when every person is truly valued, when diverse perspectives are thoughtfully exchanged – that it is this collective experience which truly has the power to transform us and move us towards the ideal democracy we all separately seek.

It is radical, this belief, and – despite the possible complications – ultimately the greatest benefit to those who have been silenced; who have been deeply taught to believe that their voices, minds, and experiences don’t matter.

After all, you cannot believe that every voice is important if you don’t first find your own.


Networks in Political Theory

While graph theory has a long and rich history as a field of mathematics, it is only relatively recently that these concepts have found their way into the social sciences and other disciplines.

In 1967, Stanley Milgram published his work on The Small World Problem. In 1973 Mark Granovetter studied The Strength of Weak Ties. But while this applied methodology is young, the interest in networks has been in the ether of political theory for some time.

I’ve noted previously how Walter Lippmann can be interpreted as invoking networks in his 1925 book The Phantom Public. Lippmann argues vehemently against this thing we call ‘public opinion’ – a myth Lippmann doesn’t believe truly exists:

We have been taught to think of society as a body, with a mind, a soul, and a purpose, not as a collection of men, women and children whose minds, souls and purposes are variously related. Instead of being allowed to think realistically of a complex of social relations, we have had foisted upon us by various great propagative movements the notion of a mythical entity, called Society, the Nation, the Community.

Rather than thinking of society as a single, collective whole, Lippmann argues that we ought to “think of society not as the name of a thing but as the name of all the adjustments between individuals and their things.”

I’d noted this passage in Lippmann when I first read his work several years ago. But I was surprised recently to come across a similarly network-oriented sentiment in John Dewey’s 1927 rebuttal, The Public and Its Problems:

In its approximate sense, anything is individual which moves and acts as a unitary thing. For common sense, a certain spatial separateness is the mark of this individuality. A thing is one when it stands, lies or moves as a unit independently of other things, whether it be a stone, tree, molecule or drop of water, or a human being. But even vulgar common sense at once introduces certain qualifications. The tree stands only when rooted in soil; it lives or dies in the mode of its connections with sunlight, air and water. Then too the tree is a collection of interacting parts; is the tree more a single whole than its cells?

…From another point of view, we have to qualify our approximate notion of an individual as being that which acts and moves as a unitary thing. We have to consider not only its connections and ties, but the consequences with respect to which it acts and moves. We are compelled to say that for some purposes, for some results, the tree is an individual, for others the cell, and for a third, the forest or the landscape…an individual, whatever else it is or is not, is not just the specially isolated thing our imagination inclines to take it to be.

…Any human being is in one respect an association, consisting of a multitude of cells each living its own life. And as the activity of each cell is conditioned and directed by those with which it interacts, so the human being whom we fasten upon as individual par excellence is moved and regulated by his association with others; what he does and what the consequences of his behavior are, what his experience consists of, cannot even be described, much less accounted for, in isolation.


“Love the Hell Out of Everybody” – an Evening with John Lewis

There aren’t too many people who get a standing ovation before they even speak.

John Lewis, former chairman of the Student Nonviolent Coordinating Committee (SNCC) and the last living member of the “Big Six” civil rights leaders, is one of them.

From the moment he walked on stage, I could feel the energy in the room: the overwhelming love and appreciation for this man who endured so many brutal beatings as he strove for justice; the rising hope that tenaciously carries on from the victories of the civil rights movement; and the growing despair that we are sliding backwards in time, regressing towards our darker days of hatred and oppression.

And then he spoke. A deep, melodic voice that rolled across the room, reverberating from every corner. The crowd fell silent.

This was actually the second time I had the pleasure of hearing John Lewis speak. The first was in 2009, when he was my commencement speaker as I finished my Master’s degree at Emerson College. The second time, last night, he delved even deeper into his experience of the civil rights movement as he was hosted by my former colleagues at Tisch College at Tufts University.

He’s a politician now – Lewis has served as Congressman for Georgia’s 5th congressional district since 1987 – but he doesn’t speak with the same canned cadence which is so widespread amongst elected officials.

You get the distinct impression he genuinely believes what he says; and that his beliefs have been shaped by the difficult crucible of experience.

In 1965 he led nearly 600 protestors in a peaceful march over the Edmund Pettus Bridge in Selma, Alabama.

Prepared to be arrested, the 25-year-old Lewis carried the essentials with him: two books, an apple, a banana, and a toothbrush. All the things he thought he might need in prison.

“I thought I’d be arrested and jailed,” Lewis recalled. “But I didn’t think I’d be beaten – that I’d be left bloody there.”

Lewis’ skull was fractured by a police nightstick and he nearly lost his life.

It wasn’t the first time Lewis had been beaten, either. At the age of 21, Lewis was the first of the Freedom Riders to be assaulted while working to desegregate the interstate bus system.

This was life for a young, black man in 1960s America.

And, perhaps most remarkably, through it all Lewis continues to follow the message of his friend and mentor, Dr. Martin Luther King. In response to such brutal attacks, in the face of the terrible injustices of today, Lewis turns not to anger, but to love.

“To revolutionize America to be better, we must first revolutionize ourselves. We have to humanize our institutions, humanize ourselves,” he argues.

For Lewis, the choice is quite simple, “You can’t set out to build the beloved community and use tactics which are not loving.”

So he endured the bloody beatings, endured the deepest injustices of our system. And in 2009, when a former Klansman apologized for being one of Lewis’ assailants, the two hugged and cried.

“That’s the power of the way of peace, the way of love, the way of non-violence,” Lewis said.

Of course, not all activists share this view – and in remembering the civil rights movement, we too often gloss over or belittle the important contributions of activists like Malcolm X. But that’s a longer debate for another day.

So for now, I will leave you with a final thought from John Lewis, who has endured so much in his continuing fight for the equality of all people. Quoting Dr. King, Lewis just smiles and explains his philosophy simply:

“Just love the hell out of everybody.”
