Watch Kettering’s “A Public Voice” Event Live, May 5th

Next Thursday, the Kettering Foundation – one of our NCDD member organizations – will report the data from over 250 forums they’ve hosted on the economy and health care costs to DC policymakers during their annual gathering, A Public Voice, and you can participate via their live online video feed! We encourage you to read more about the gathering and how to participate in the Kettering announcement below, or find the original one here. 


As of March 31, there have been more than 250 in-person and online forums on Making Ends Meet and Health Care Costs. Those numbers are, quite simply, amazing – proof that the NIF network is vibrant and ready to engage on timely issues!

These individual forums are impressive on their own, but we know that part of the impetus for participating in NIF is the chance to contribute to a larger national conversation. Kettering has been analyzing forum transcripts, moderator responses, participant questionnaires and online forum data as it’s come in, and we’re now ready to offer some early insights into the national thinking on these two issues.

We’ll be doing this reporting to policymakers in Washington DC at the National Press Club, Thursday, May 5 from 10 am – 12:30 pm EDT – and we’d like you to join us!

We’ll be livestreaming the entire event so you can hear us, and we want to hear from you! We’ll be live-Tweeting the entire meeting, and we want you to add to the conversation – to let us know if the themes we heard were present in your forum, if there was anything unique that needs to be added, and any questions you might have for elected officials!

So, how can you join in?

  • Host a viewing party
    • Did you convene one of the 200+ forums we’ll be reporting on? This is a great way to reconnect with the participants and let them see how their voice is part of the larger conversation! Invite some people to watch together and let us know what you think via social media – we’ll be taking questions and comments from both Facebook and Twitter throughout and feeding them to the moderator.
    • Viewing parties aren’t just for forum participants either! Are there local elected officials who might be interested in seeing the results of this nationwide conversation? Leaders from other local universities, civic groups, or nonprofits? Use the A Public Voice viewing party as a platform to start a conversation about sparking and listening to the public voice in your own community!
  • Share the livestreaming link with your networks!
    • Can’t host a viewing party, but still want to give your forum participants the chance to see how their voice is making it to Washington? Share the livestreaming link with participants and encourage them to watch and give us their feedback May 5!

Here’s where the livestream will be available on May 5th: https://scontent.webcaster4.com/web/apv2016

You can find the original version of this Kettering Foundation post at www.kettering.org/blogs/apv-2016.

State of the Art Techniques in Argument Mining

As part of a paper I’m working on, here’s a review of selected recent state-of-the-art efforts in argument mining.

In their work on automatic, topic-independent argument extraction, Swanson, Ecker et al. introduce an Implicit Markup hypothesis which harkens back to the earlier work of Cohen. This hypothesis is built on four elements: discourse relation, dialogue structure, syntactic properties, and semantic density (Swanson, Ecker et al. 2015). In their model, discourse relations can be determined by any two observed arguments. The second argument, or claim, is defined as the one to which a warrant – if observed – is syntactically bound. Dialogue structure considers the position of an argumentative statement within a post. Notably, with its focus on relative position, this is more similar to Cohen’s model of coherent structure than to the concept of schemes introduced by Walton. A sophisticated version of Cohen’s linguistic cues, syntactic properties are a clever way to leverage observed discourse markers in order to infer missing discourse markers. For example, observing sentences such as “I agree that <x>” can help identify other argumentative content of the more general form “I <verb> that <x>.”
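
As a rough illustration of how an observed marker such as “I agree that <x>” can be generalized to the pattern “I <verb> that <x>”, here is a minimal sketch using a regular expression. The sentences and the pattern are invented for illustration and are not drawn from Swanson, Ecker et al.’s implementation.

```python
import re

# Hypothetical sentences: the explicit marker "I agree that" suggests the
# more general pattern "I <verb> that <x>", which can surface other
# argumentative sentences that lack an explicit agreement marker.
sentences = [
    "I agree that background checks reduce gun violence.",
    "I believe that the ban would be unconstitutional.",
    "I suspect that the statistics are outdated.",
    "The weather was terrible on election day.",
]

# Generalized pattern: first-person pronoun, a verb, then a "that"-clause.
pattern = re.compile(r"^I\s+(\w+)\s+that\s+(.+)$", re.IGNORECASE)

for s in sentences:
    m = pattern.match(s)
    if m:
        verb, claim = m.group(1), m.group(2)
        print(f"verb={verb!r:12} claim={claim!r}")
```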

The final element, semantic density, is a notable advancement in processing noisy data. Composed of a number of features, such as sentence length, word length, deictic pronouns, and specificity, semantic density filters out those sentences which are not relevant to a post’s core argument. When dealing with noisy forum post data, this process of filtering out sentences which are harder to interpret provides valuable computational savings without losing an argument’s core claim. Furthermore, this filtering can help with the enthymeme challenge – in fact, Swanson, Ecker et al. filter out most enthymemes, focusing instead on claims which are syntactically bound to an explicit warrant.
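
Here is a minimal sketch of what a semantic-density-style filter could look like, scoring sentences on a few of the named surface features (sentence length, word length, deictic pronouns). The weights and threshold are arbitrary illustrations, not the authors’ actual model.

```python
# Illustrative semantic-density-style scorer: longer sentences with longer
# words and few deictic pronouns score higher; low-scoring sentences are
# filtered out before argument extraction. Weights and threshold are invented.
DEICTIC = {"this", "that", "these", "those", "here", "there", "it"}

def density_score(sentence: str) -> float:
    tokens = sentence.lower().split()
    if not tokens:
        return 0.0
    avg_word_len = sum(len(t) for t in tokens) / len(tokens)
    deictic_ratio = sum(t.strip(".,!?") in DEICTIC for t in tokens) / len(tokens)
    return 0.05 * len(tokens) + 0.2 * avg_word_len - 1.0 * deictic_ratio

sentences = [
    "That is just how it is there.",
    "Mandatory minimum sentences disproportionately affect low-income defendants.",
]
kept = [s for s in sentences if density_score(s) > 1.0]
print(kept)  # only the more specific, content-bearing sentence survives
```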

With this model, Swanson, Ecker et al. take on the interesting task of trying to automatically predict argument quality – a particularly timely challenge given the ubiquity of argumentative data from noisy online forums. With a corpus of over 100,000 posts on four political topics, Swanson, Ecker et al. compare the prediction of their model to human annotations of argument quality. Testing their model on three regression algorithms, they found that a support vector machine (SVM) performed best, explaining nearly 50% of the variance for some topics (R² = 0.466 and 0.441 for gun control and gay marriage, respectively).
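
As a sketch of this kind of evaluation, the snippet below fits a support vector regressor to predict a continuous quality score from simple bag-of-words features and reports R² on held-out data. The toy sentences, scores, and feature choice are placeholders rather than the authors’ corpus or feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Toy stand-ins for (argument sentence, annotated quality score) pairs.
arguments = [
    "Background checks demonstrably reduce firearm deaths.",
    "Guns are bad, period.",
    "Marriage equality extends existing legal protections to more citizens.",
    "Whatever, people can do what they want.",
] * 10  # repeated so the train/test split has enough samples
scores = [0.9, 0.2, 0.8, 0.1] * 10

X_train, X_test, y_train, y_test = train_test_split(
    arguments, scores, test_size=0.25, random_state=0
)

# Bag-of-words features feeding a support vector regressor.
model = make_pipeline(TfidfVectorizer(), SVR(kernel="linear"))
model.fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, model.predict(X_test)))
```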

Stab, Gurevych, and Habernal, all of the Ubiquitous Knowledge Processing Lab, have also made important contributions to the state of the art in argument mining. As noted above, Stab and Gurevych were among the first to expressly tackle the challenge of poorly structured arguments in their work identifying discourse structures in persuasive essays (Stab and Gurevych 2014).

In seeking to identify an argument’s structure and the relationship between its elements, this work has clear ties back to earlier argumentative theory. Indeed, while unfortunately prone to containing poorly-formed arguments, student essays are a model setting for Cohen’s theory: a single speaker does their best to form a coherent and compelling argument while a weary reader is tasked with trying to understand their meaning.

A notable contribution of Stab and Gurevych was to break this effort into a two-step classification task. The first step uses a multiclass identifier to classify the components of an argument, while the second step is a simpler binary classification of a pair of argument components as either support or non-support. As future work, they propose developing this as a joint inference problem, since the two pieces of information are indicators of each other. However, they found current accuracy in identifying argument components to be “not sufficient for increasing the performance of argumentative relation identification” (Stab and Gurevych 2014). Their best performing relation identification classifier, an SVM built with structural, lexical, and syntactic features, achieved “almost human performance” with an 86.3% accuracy, compared to a human accuracy of 95.4%. Emphasizing the challenges of linguistic cues in noisy text, a model using discourse markers in student essays yielded an F1-score of only 0.265.
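
To make the two-step setup concrete, here is a minimal sketch with toy sentences and labels invented for illustration: a multiclass classifier first labels argument components, and a separate binary classifier then labels pairs of components as support or non-support. The pipeline below is only a rough analogue of Stab and Gurevych’s feature-rich SVMs, not a reproduction of them.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Step 1: multiclass classification of argument components (toy labels).
components = [
    "Homework should be limited to one hour per night.",       # claim
    "Students with heavy homework loads report more stress.",  # premise
    "Sleep research links workload to poorer performance.",    # premise
    "Schools must rethink their homework policies.",           # claim
]
component_labels = ["claim", "premise", "premise", "claim"]

component_clf = make_pipeline(CountVectorizer(), LinearSVC())
component_clf.fit(components, component_labels)

# Step 2: binary support / non-support classification over component pairs,
# here represented naively by concatenating the two texts.
pairs = [
    components[1] + " [SEP] " + components[0],  # premise supports claim
    components[2] + " [SEP] " + components[3],  # premise supports claim
    components[0] + " [SEP] " + components[2],  # no support relation
    components[3] + " [SEP] " + components[1],  # no support relation
]
pair_labels = ["support", "support", "non-support", "non-support"]

relation_clf = make_pipeline(CountVectorizer(), LinearSVC())
relation_clf.fit(pairs, pair_labels)

print(component_clf.predict(["Excess homework leaves little time for family."]))
print(relation_clf.predict([components[1] + " [SEP] " + components[3]]))
```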

Finally, in what may be the most promising line of current argument mining work, Habernal and Gurevych build a classifier for their labeled data using features derived in an unsupervised manner from noisy, unlabeled data. Using text from online debate portals, they derive features by “projecting data from debate portals into a latent argument space using unsupervised word embeddings and clustering” (Habernal and Gurevych 2015).

While this debate portal data contains “noisy texts of questionable quality,” Habernal and Gurevych are able to leverage this large, unlabeled dataset to build a successful classifier for their labeled data using a sentence-level SVM-Hidden Markov Model. To do this, they employ “argument space” features: composing vectors containing the weighted average of all word embeddings in a phrase, and then projecting those vectors into a latent vector space. The centroids found by clustering sentences from the debate portal in this way represent “a prototypical argument” – implied by the clustering but not actually observed. Labeled data can then be projected into this latent vector space, and the computed distances to the centroids are encoded as features. In order to test cross-domain performance, the model was trained on five domains and tested on a sixth.
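
The general idea can be sketched as follows, using randomly initialized word vectors and k-means in place of the authors’ trained embeddings and SVM-HMM; everything here is illustrative rather than a reproduction of their pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in word embeddings (random vectors); in practice these would be
# pre-trained embeddings learned from the unlabeled debate-portal text.
vocab = {w: rng.normal(size=50) for w in
         "taxes should be raised lowered because schools need funding".split()}

def sentence_vector(sentence: str) -> np.ndarray:
    # Average (here unweighted) of the word embeddings in the phrase.
    vecs = [vocab[w] for w in sentence.lower().split() if w in vocab]
    return np.mean(vecs, axis=0) if vecs else np.zeros(50)

# Unlabeled debate-portal sentences projected into the latent argument space.
portal_sentences = [
    "taxes should be raised because schools need funding",
    "taxes should be lowered",
    "schools need funding",
]
portal_matrix = np.vstack([sentence_vector(s) for s in portal_sentences])

# Cluster centroids act as "prototypical arguments" implied by the clustering.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(portal_matrix)

# A labeled sentence is encoded by its distances to each centroid, which can
# then be fed to a downstream classifier as extra features.
labeled = sentence_vector("schools should be funded")
distances = np.linalg.norm(kmeans.cluster_centers_ - labeled, axis=1)
print(distances)
```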

While this continues to be a challenging task, the argument space features consistently increased the model’s performance in classifying an argument’s component type. The best classification of claims (F1-score: 0.256) came from combining the argument feature space with semantic and discourse features. This compares to a human-baseline F1-score of 0.739 and a random assignment F1-score of 0.167.

Importantly, Habernal and Gurevych ground their approach in argumentative theory. Building off the work of Toulmin, they take each document of their corpora to contain a single argument composed of five components: a claim to be established, premises which give reason for the claim, backing which provides additional information, a rebuttal attacking the claim, and finally a refutation attacking the rebuttal. Each type of component is classified in their vector space, allowing them to assess which elements are more successfully classified as well as to gain insights into which argument structures prove particularly problematic.


10 theses about ethics, in network terms

  1. People hold many morally relevant opinions, some concrete and particular, some abstract and general, some tentative and others categorical.
  2. People see connections–usually logical or empirical relationships–between some pairs of their own opinions and can link all of their opinions into one network. (Note: these first two theses are empirical, in that I have now “mapped” several dozen students’ or colleagues’ moral worldviews, and each person has connected all of his or her numerous moral ideas into a single, connected network. However, this is a smallish number of people who hardly reflect the world’s diversity.)
  3. Explicit moral argumentation takes the form of citing relevant moral ideas and explaining the links among them.
  4. The network structure of a person’s moral ideas is important. For instance, some ideas may be particularly central to the network or distant from each other. These properties affect our conclusions and behaviors. (Note: this is an empirical thesis for which I do not yet have adequate data. There are at least two rival theses. If people reason like classical utilitarians or rather simplistic Kantians, then they consistently apply one algorithm in all cases, and network analysis is irrelevant. Network analysis is also irrelevant if people make moral judgments because of unconscious assumptions and then rationalize them post hoc by inventing reasons.)
  5. Not all of our ideas are clearly defined, and many of the connections that we see among our ideas are not logically or empirically rigorous arguments. They are loose empirical generalizations or rough implications.
  6. It is better to have a large, complex map than a simple one that would meet stricter tests of logical and empirical rigor and clarity. It is better to preserve most of a typical person’s network because each idea and connection captures valid experiences and serves as a hedge against self-interest and fanaticism. The emergent social world is so complex that human beings, with our cognitive limits, cannot develop adequate networks of moral ideas that are clear and rigorous.
  7. Our ideas are not individual; they are relational. We hold ideas and make connections because of what others have proposed, asked, made salient, or provoked from us. A person’s moral map at a given moment is a piece of a community’s constantly evolving map.
  8. We begin with the moral ideas and connections that we are taught by our community and culture. We cannot be blamed (or praised) for their content. But we are responsible for interacting responsively with people who have had different experiences. Therefore, discursive virtues are paramount.
  9. Discursive virtues can be defined in network terms. For instance, a person whose network is centralized around one nonnegotiable idea cannot deliberate, and neither can a person whose ideas are disconnected.  If two people interact but their networks remain unchanged, that is a sign of unresponsiveness.
  10. It is a worthwhile exercise to map one’s own current moral ideas as a network, reflect on both its content and its form, and interact with others who do the same.
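
For readers who want to try the exercise in thesis 10, here is a minimal sketch of one way to represent a handful of (invented) moral ideas as a network and inspect the properties mentioned in theses 4 and 9: centrality, distance, and connectedness. It uses the networkx library; the ideas and connections are purely illustrative.

```python
import networkx as nx

# A tiny invented moral-idea network: nodes are opinions, edges are the
# logical or empirical connections a person sees between them.
G = nx.Graph()
G.add_edges_from([
    ("honesty matters", "keep promises"),
    ("keep promises", "pay debts"),
    ("honesty matters", "don't mislead voters"),
    ("reduce suffering", "support public health"),
    ("reduce suffering", "don't mislead voters"),
])

# Which ideas are most central to this network? (thesis 4)
print(nx.degree_centrality(G))

# How far apart are two ideas? (thesis 4)
print(nx.shortest_path_length(G, "pay debts", "support public health"))

# Is the network connected at all, or do the ideas fall into islands? (thesis 9)
print(nx.is_connected(G))
```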

New Resource for History and Government!

[Image: SIPS sample page 1]

Students Investigating Primary Sources, or SIPS, is a brand new free resource from the Florida Joint Center for Citizenship. This new resource is a K-12 collection of brief introductory mini-lessons centered on particular topics and primary sources. These materials were created in collaboration with the National Archives, Pinellas County Public Schools, and Brevard Public Schools. We will be adding additional grade level materials as they are developed. Please note that the page has issues loading on Internet Explorer! 

[Image: partner organization logos]

We are very excited to share this with folks, and we hope that you find this useful. Currently, we have resources for high school US History and US Government, but as stated above, we will be adding additional K-12 resources as they are developed, and you are welcome to adapt these current mini-lessons for use in your state. Most excitingly, as the logos show, these are created in direct collaboration with the National Archives, drawing on their resources in conjunction with some of their excellent personnel. As always, however, we wanted to make sure that teachers have a voice, so we brought teachers and district leaders in from Brevard and Pinellas Counties. The resources are available as a PDF and in Word; simply click on ‘Download Original’ to access the Word version if you wish to modify it!

I hope that you find these useful. Please keep an eye out for additional resources as we move forward. We are grateful to our friends at the National Archives and our county partners for their work with the FJCC team! Oh, and our own Val McVey will be presenting on SIPS at NCSS in December. You can access the free SIPS resources on our website.

[Image: SIPS sample page 2]


Mutualized Solutions for the Precariat

Large companies have long sought to boost profits by converting their employees into “independent contractors,” allowing them to avoid paying benefits.  The rise of the “gig economy” – exemplified by digital platforms such as Uber and Airbnb – has only accelerated this trend.  Business leaders like to celebrate the free agent, free market economy as liberating -- the apex of American individualism and entrepreneurialism.  But the self-employed are more likely to experience a big loss of income, security and collegiality.  There is a reason that this cohort is called “the precariat.”

A new report by Co-operatives UK called “Not Alone:  Trade Union and Co-operative Solutions for Self-Employed Workers” offers a thoughtful, rigorous overview of this neglected sector of the economy.  Although it focuses on the UK, its findings easily apply internationally, particularly for co-operative and union-based solutions. 

The author of the report, Pat Conaty, notes that “self-employment is at a record level” in the UK – some 15% of the workforce – and rising.  While some self-employed workers choose this status, a huge number are forced into it through layoffs and job restructuring, with all the downward mobility and loss of security that implies.

Few politicians or economists are honestly addressing the implications.  They assume that technological innovation will simply create a new wave of jobs to replace the ones being eliminated, same as it ever was.

The sad truth is that investors and companies benefit greatly from degrading full-time jobs into piecemeal, task-based projects tackled by a growing pool of precarious workers.  This situation is only going to become more desperate as artificial intelligence, automation, driverless vehicles and platform economics offshore and de-skill conventional jobs, if they don't permanently destroy them.
