The Ethics of Personalization

Near the beginning of the week, someone asked me about the ethics and effects of algorithms that filter your content for you, “helpfully” prioritizing those items which fit into your existing worldview.

I was reminded of that question yesterday when I had an interesting and somewhat similar conversation with computer scientist Vagelis Papalexakis, whose work explores the way different people’s brains respond to various stimuli. Papalexakis discussed the possible implications for improving education: a classroom where teachers could tailor their lessons to the particular neural responses of their students.

While I can see the potential good in such technology, being somewhat cautious of the ills of human nature, I asked Papalexakis about the ethical implications – with access to student neural readings, what would stop ‘big brother’ from punishing children whose minds tend to wander?

While there’s no guaranteed way to prevent such abuse, Papalexakis rightly pointed to this as a broader ethical question – the ethics of personalization.

Filtering algorithms, for example, could easily be misused as tools for efficiently delivering propaganda. There is a value in having this personalization available, but there is also a risk.

What I find particularly interesting about the challenge of filtering is that it is not at all clear that there is a neutral solution to the problem.

In 2012, an estimated 2.5 billion gigabytes of data were generated every day – far more than any of us could hope to process. The reality is that some type of filtering is necessary – so the question becomes one of what type of filtering we think is best.

Imagine, for a moment, the “things you wouldn’t enjoy…” filter. That is, rather than having an algorithm that tracks what you like and presents you with similar content, it tracks what you like and intentionally presents you with divergent views.

In theory, I would love to have this. It is a problem that we each tend to fall into our own little filter bubble, with little exposure to opposing views.

But, how would such a tool play out in practice? First, no algorithm can remove the need for human agency – I might be presented with opposing articles, but I would need to actually click on them.

This presents a real challenge for content providers who – even putting aside profit motive – need to serve their customers. If people don’t like the content that is being filtered for them, they will leave for a different service.

Furthermore, research indicates that even when interacting with conflicting information, people are likely to interpret the results with a bias that favors their initial view and even double down on their initial opinion.

So it’s not clear at all that changing a filtering algorithm in such a way is sufficient to relieve polarization and bias.

That’s not to say, either, that we should just let filtering algorithms off the hook. They are by no means a full solution to the challenges of information bias, but they do play a critical role in shaping the information atmosphere around us.

Markus Prior, for example, has shown that when it comes to factual matters, a less-personalized media environment increases people’s political knowledge. On the other hand, he has also found that “there is no firm evidence that partisan media are making ordinary Americans more partisan.” So again, the personalization of the media environment is only part of the picture.

What does all this have to do with using brain scans to tailor information to recipients?

Well, I guess, we need to find ways to get all these moving pieces to work together. Personalization is good. It has real benefits and helps each of us focus on the signal in a sea of noise. But we should also be wary of too much personalization – a little noise and inefficiency should be intentionally built into the system. And, of course, we have to remember that we are our own agents in this work as well – systems of personalization can shape the broader context, but they cannot determine how we each choose to act.


The AP and Nazi Germany

Harriet Scharnberg, a German historian and Ph.D. student at the Institute of History of the Martin Luther University of Halle-Wittenberg, made waves yesterday with the release, in the journal Studies in Contemporary History, of her paper, Das A und P der Propaganda: Associated Press und die nationalsozialistische Bildpublizistik.

The paper finds that, prior to the expulsion of all foreign media in 1941, the AP collaborated with Nazi Germany, signing the Schriftleitergesetz (editor’s law), which forbade the employment of “non-Aryans” and effectively ceded editorial control to the German propaganda ministry.

These are claims which the AP vehemently denies:

AP rejects the suggestion that it collaborated with the Nazi regime at any time. Rather, the AP was subjected to pressure from the Nazi regime from the period of Hitler’s coming to power in 1933 until the AP’s expulsion from Germany in 1941. AP staff resisted the pressure while doing its best to gather accurate, vital and objective news for the world in a dark and dangerous time.

AP news reporting in the 1930s helped to warn the world of the Nazi menace. AP’s Berlin bureau chief, Louis P. Lochner, won the 1939 Pulitzer Prize for his dispatches from Berlin about the Nazi regime. Earlier, Lochner also resisted anti-Semitic pressure to fire AP’s Jewish employees and when that failed he arranged for them to become employed by AP outside of Germany, likely saving their lives.

Lochner himself was interned in Germany for five months after the United States entered the war and was later released in a prisoner exchange.

Regardless of which account presents a more accurate historical truth, I find this controversy quite fascinating.

According to the Guardian, the AP was the only western news agency able to stay open in Hitler’s Germany, while other outlets were kicked out for refusing to comply with Nazi regulations.

This exclusivity lends credence to the claim that the news agency did, in some way, collaborate – since it seems improbable that the Nazis would have allowed them to continue without some measure of compliance. It also suggests a shameful reason for this compliance: choosing to stay, even under disagreeable terms, was a smart business decision.

But it also highlights the interesting challenge faced by foreign correspondents covering repressive regimes.

For German news media, it was a zero-sum game: either comply with the Schriftleitergesetz or face charges of treason – a charge that would likely have serious repercussions for one’s family as well.

The AP, from what I can tell, seems to have staked out some middle ground.

By their account, the AP did work with a “photo agency subsidiary of AP Britain” which, in 1935 “became subject to the Nazi press-control law but continued to gather photo images inside Germany and later inside countries occupied by Germany.”

While images from this subsidiary were supplied to U.S. newspapers, “those that came from Nazi government, government-controlled or government–censored sources were labeled as such in their captions or photo credits sent to U.S. members and other customers of the AP, who used their own editorial judgment about whether to publish the images.”

The line between collaboration and providing critical information seems awfully fuzzy here.

Critics would claim that the AP was simply looking out for its own bottom line, sacrificing editorial integrity for an economic advantage. The AP, however, seems to argue that it was a difficult time and they did what they had to do to provide the best coverage they could – they did not collaborate, but they played by the rules just enough to maintain the access needed to share an important story with the world.


How Human Brains Give Rise to Language

Yesterday, I attended a lecture by Northeastern psychology professor Iris Berent on “How Human Brains Give Rise to Language.” Berent, who works closely with collaborators in a range of fields, has spent her career examining “the uniquely human capacity for language.”

That’s not to say that other animals don’t have meaningful vocalizations, but, she argues, there is something unique about the human capacity for language. Furthermore, this capacity cannot simply be attributed to mechanical differences – that is, human language is not simply a product of the computational power of our brains or the abilities of our oral and aural processing systems.

Rather, Berent argues, humans have an intrinsic capacity for language. That is, as Steven Pinker describes in The Language Instinct,  “language is a human instinct, wired into our brains by evolution like web-spinning in spiders or sonar in bats.”

While this idea may seem surprising, in some ways it is altogether reasonable: humans have specialized organs for seeing, breathing, processing toxins, and more – is it really that much more of a jump to say that the human brain is specialized, that the brain has a specialized biological system for language?

Berent sees this not as an abstract, philosophical question, but rather as one that can be tested empirically.

Specialized biological systems exhibit an invariant, universal structure, Berent explained. There is some variety among human eyes, but fundamentally they are all the same. This logic can be applied to the question of innate language capacity: if language is specialized, we would expect to find universal principles – we would expect what Noam Chomsky called a “universal grammar.”

In searching for a universal grammar, Berent doesn’t expect to find such a thing on a macro scale: there’s no universal rule that a verb can only come after a noun. But rather, a universal grammar would manifest in the syllables that occur – or don’t occur – across the breadth of human language.

To this end, Berent constructs a series of syllables which she expects will be increasingly difficult for human brains to process: bl > bn > bd > lb.

That is, it’s universally easier to say “blog” than to say “lbog,” with “bnog” and “bdog” having intermediate difficulty.

One argument for this is simply the frequency of such constructions – in languages around the world “bl” occurs more frequently than “lb.”

Of course, this by no means proves the existence of an innate, universal grammar, as we cannot account for the socio-historical forces that shaped modern language, nor can we be sure such variance isn’t due to the mechanical limitations of human speech.

Berent’s research, therefore, aims to prove the fundamental universality of such syllables – showing that there is a universal hierarchy of what the human brain prefers to process.

In one experiment, she has Russian speakers – who do use the difficult “lb” construction – read such a syllable out loud. She then asks speakers of languages without that construction (in this case English, Spanish, and Korean) how many syllables the sound contained.

The idea here is that if your brain can’t process “lbif” as a syllable, it will silently “repair” it to the 2-syllable “lebif.”

In numerous studies, she found that as listeners went from hearing syllables predicted to be easy to syllables predicted to be hard, they were in fact more likely to “repair” the word. Doing the experiment with fMRI and Transcranial Magnetic Stimulation (TMS) further revealed that people’s brains were indeed working harder to process the predicted-harder syllables.

All this, Berent argues, is evidence that a universal grammar does exist. That today’s modern languages are more than the result of history, social causes, or mechanical realities. The brain does indeed seem to have some specialized language system.

For myself, I remain skeptical.

As Vyvyan Evans, Professor of Linguistics at Bangor University, writes, “How much sense does it make to call whatever inborn basis for language we might have an ‘instinct’? On reflection, not much. An instinct is an inborn disposition towards certain kinds of adaptive behaviour. Crucially, that behaviour has to emerge without training…Language is different…without exposure to a normal human milieu, a child just won’t pick up a language at all.”

Evans instead points to a simpler explanation for the emergence of language – cooperation:

Language is, after all, the paradigmatic example of co‑operative behaviour: it requires conventions – norms that are agreed within a community – and it can be deployed to co‑ordinate all the additional complex behaviours that the new niche demanded…We see this instinct at work in human infants as they attempt to acquire their mother tongue…They are able to deploy sophisticated intention-recognition abilities from a young age, perhaps as early as nine months old, in order to begin to figure out the communicative purposes of the adults around them. And this is, ultimately, an outcome of our co‑operative minds. Which is not to belittle language: once it came into being, it allowed us to shape the world to our will – for better or for worse. It unleashed humanity’s tremendous powers of invention and transformation.


The Hardest Problems are the Easiest to Ignore

I was somewhat surprised this morning – though perhaps I should not have been – to find coverage of terrorist attacks in Brussels to be the sole focus of the morning news.

I wasn’t surprised by the news of an attack somewhere in the world – a grim reality we’ve all grown sadly accustomed to – but I was surprised at the intensity of coverage. Broadcast morning news coverage isn’t, you see, my typical source for international news.

Suddenly it was all they could talk about.

Where was this attention when a suicide bomber attacked a busy street in Istanbul over the weekend? Or when three dozen people died in the Turkish capital of Ankara last week?

Even from a wholly self-interested perspective, recent attacks in Turkey seem noteworthy as the EU increasingly relies on Turkey to address the Syrian refugee crisis.

But even as I wondered why Belgium elicited so much more concern than Turkey, I felt the sinking sense of an answer.

Where, indeed, was the coverage of attacks in Beirut just days before the now more infamous attacks in Paris?

On its surface, this bias in coverage and compassion seems most obviously to be one of culture, or cultural perspective, for lack of a better word. Perhaps people in France and Belgium are perceived to be “more like us” than people in Lebanon or Turkey. The disparity is essentially racism with an international flavor.

Another theory would be one of newsworthiness – Turkey, Lebanon, and many places in the Middle East regularly suffer from terrorist attacks. In a cold sense of the word, such an attack is not news – it is expected.

Such an explanation, though, has the ring of a hollow excuse. The sort of defense you come up with when accused of something unseemly. And the two ideas – that we show greater concern for those in Western Europe because they are “more like us” and that we are more interested in unexpected events – are not entirely unrelated.

In the States, people of color die every day in our cities. And most often, their deaths go unreported and unremarked on by society at large. A murder in a white suburb, though, is sure to grab headlines.

Neighbors grapple to make sense of the shocking news. Things like this don’t happen here. This is a safe community.

It’s not that suburbs are intrinsically safer, I would argue, but rather that we, as a society, would never allow violence in suburbs to rise to the levels it has within the inner city. Suburbs are already where our wealthy residents live, but in addition to that privilege, we collectively treat them with more time, attention, and care.

Violence in suburbs and attacks in western cities are shocking reminders that we’ve been ignoring the wounds of this world. That we’ve pushed aside our responsibility to confront seemingly intractable challenges, closing our eyes and hoping those ills only affect those who are different.

All this reminds me of Nina Eliasoph’s thoughtful book, Avoiding Politics: How Americans Produce Apathy in Everyday Life.

Working with various civic groups, Eliasoph notes how volunteers eagerly tackle seemingly simple problems while avoiding the confrontation that comes from the most complex issues. In one passage, Eliasoph describes the meeting of a parents group in which one of the attendees was “Charles, the local NAACP representative” and “parent of a high schooler himself.”

He said that some parents had called him about a teacher who said “racially disparaging things” to a student…Charles said that the school had hired this teacher even though he had a written record in his file of having made similar remarks at another school. Charles also said there were often Nazi skinheads standing outside the school yard recruiting at lunchtime.

The group of (mostly white) parents quickly shut Charles down, responding, “And what do you want of this group. Do you want us to do something.” Eliasoph notes this was said not “as a question, but with a dropping tone at the end.”

Afterwards, Eliasoph quotes the meeting minutes:

Charles Jones relayed an incident for information. He is investigating on behalf of some parents who requested help from the NAACP.

The same minutes contained “half of a single-spaced page” dedicated to “an extensive discussion on bingo operations.”

Eliasoph’s other interactions with the group indicate that they aren’t intentionally racist – rather, they are well-meaning citizens to whom the deep challenge of race relations seems too much to handle; they would rather make progress on bingo.

And this is where the cruelest twist of power and privilege comes in: it is easy to ignore these hard problems, to brush them off as unavoidable tragedies, to simply shake your head and sigh – all of this is easy, as long as it’s not happening to you.


Coding The English Language

I have been quite busy this week trying to capture all the rules of the English language.

As you might suspect, this is a non-trivial task.

Having benefited from being a native English speaker and having studied far more regular languages (Latin and Japanese), I always knew that English was a crazy mishmash of rules – but I find I am getting a whole new appreciation for its complexity.

As it stands, my grammar – which has a tiny vocabulary and only rudimentary sentences – has nearly 500 rules. Every time I try to generalize, I find those nagging English exceptions which create a cascade of special case rules.

All this highlights how impressive the advances of Natural Language Processing are – correcting spelling and grammar is hardly easy, much less building an assistant such as Siri which can understand what you say.

It also seems to highlight the concerns of the natural language philosophers – when constructing a thought as an expressible sentence is so hard, how can we be confident our meanings are understood?

Of course, our meanings are very often not understood, which leads to no end of drama and miscommunication. But, putting basic miscommunications aside, what does it really mean to communicate or to understand another person?

Ludwig Wittgenstein poses this question frequently throughout his work. In Philosophical Investigations he tests numerous thought experiments. If I say I am in pain and you have experienced pain, do you understand my pain?

For practical purposes, we generally have to act as if we understand each other, whether or not some deeper philosophical measure of True understanding has been met.

Wittgenstein also uses a lovely metaphor to describe the complex architecture of human language:

“Our language can be regarded as an ancient city: a maze of little streets and squares, of old and new houses, of houses with extensions from various periods, and all this surrounded by a multitude of new suburbs with straight and regular streets and uniform houses.”


Dynamics of Online Social Interactions

I had the opportunity today to hear from Chenhao Tan, a Ph.D. Candidate in Computer Science at Cornell University who is looking at the dynamics of online social interactions.

In particular, Tan has done a great deal of work around predicting retweet rates for Twitter messages. That is, given two tweets by the same author on the same topic, can you predict which one will be retweeted more?

Interestingly, such pairs of tweets naturally occur frequently on Twitter. For one 2014 study, Tan was able to identify 11,000 pairs of author and topic controlled tweets with different retweet rates.

Through a computational model comparing words used as well as a number of custom features, such as the “informativeness” of a given tweet, Tan was able to build a model which could correctly identify which tweet was more popular.

He even created a fun tool that allows you to input your own tweet text to compare which is more likely to be retweeted more.

From all this Twitter data, Tan was also able to compare the language of “successful” tweets to the tweets drawn from Twitter as a whole; as well as compare how these tweets fit into a given poster’s tone.

Interestingly, Tan found that the best strategy is to “be like the community, be like yourself.” That is – the most successful tweets were not notably divergent from Twitter norms and tended to be in line with the personal style of the original poster.

Tan interpreted this as a positive finding, indicating that a user doesn’t need to do something special in order to “stand out.” But such a result could also point to Twitter as an insular community – unable to amplify messages which don’t fit the dominant norm.

And this leads to one of Tan’s broader research questions. Studies like his work around Twitter look at micro-level data; examining words and exploring how individuals’ minds are changed. But, as Tan pointed out, the work of studying online communities can also be explored from a broader, macro level: what do healthy online environments look like and how are they maintained?

There is more work to be done on both of these questions, but Tan’s work is an intriguing start.


The Oxford Comma

There is a topic which has caused generations of debate. Lines have been drawn. Enemies have been made.

I refer, of course, to the Oxford comma. Should it, or should it not, be a thing?

For those who don’t bask in the depths of English grammar debates, let me explain. The Oxford English Dictionary, a worthy source of knowledge on this subject, defines the Oxford comma as:
a comma immediately preceding the conjunction in a list of items.

“I bought apples, pears, and grapes” employs the Oxford comma while “I bought apples, pears and grapes” does not.

You can see why there are such heated debates about this.

The Oxford comma, so named due to “the preferred use of such a comma to avoid ambiguity in the house style of Oxford University Press,” is also known by the more prosaic name of the “serial comma.”

I have no evidence to verify this, but I believe that what one calls the comma gives insight into a person’s position on the matter. Those who are pro-comma prefer the more erudite “Oxford comma” while those who are anti-comma prefer the uninspiring “serial comma.”

Why do you need another comma? they ask. You already have so many; you don’t need a serial comma as well!

These people are wrong.

As I may have given away from my own references to the “Oxford comma,” I am firmly in the pro-Oxford comma camp.

It is clear that a comma is better there.

Not only because there’s no end to the silly and clever memes you can create mocking the absence of an Oxford comma, but because – more profoundly – a sentence just feels more complete, more balanced, and more aesthetic with the comma there. It just feels right.

But, of course, this is what makes language so wonderful. Language is alive, and that life can be seen in all the little debates and inconsistencies of our grammar.

It’s like cheering for your favorite sports team: we can fight about it, mock each other, and talk all sorts of trash, but at the end of the day we can still be friends.

…Wait, we can still be friends, right?


Political Advertising and Polarization

I think of typical political attack ads as sounding something like this quote from a new Chris Christie ad: “Hillary Clinton will be nothing more than a third term of Barack Obama.”

Or, perhaps, something like this ad from Ben Carson, “[Barack Obama] doesn’t want you to know that his and Hillary Clinton’s failed tough talk but do-nothing policies are responsible for the meltdown in the Middle East.”

If a candidate is feeling particularly devious, they may attack an opponent by quoting them out of context or by showing unflattering images, but at its most basic, an attack ad is a reiteration – often without validation – of the narrative a candidate is trying to impart upon their opponent.

So I took particular note of a new ad from Hillary Clinton which not only names and quotes several Republican opponents, but which uses her air time to share footage from their campaign events.

From a marketing perspective, this is surprising on several fronts. First, there’s that old adage – often, though possibly apocryphally, ascribed to the infamous P.T. Barnum – “Any publicity is good publicity.” That is, simply giving air time to an opponent – even while attacking them – may ultimately help raise their profile while the details of the context are forgotten. Of course, this expression is hardly an unalterable fact – as many disgraced companies and candidates can attest.

Second, there’s a lot of debate about the effect of negative ads. Many argue that they are effective because people tend to remember negative things better than positive things. But, as the New York Times writes, “negative ads work, except when they don’t,” and they come with the real risk of dragging the ad’s creator down into the mud as well.

But what’s particularly striking about the Clinton ad is that – aside from a clip of Christie telling someone to “sit down and shut up” – I can imagine most of the Republican footage being used by the Republican candidate it targets.

For example, Clinton quotes Ted Cruz: “…defund Planned Parenthood.” This isn’t something Cruz would seek to deny or hide – it is, in fact, the main selling point of Cruz’s ad, “Values.”

This type of political campaign highlights the starkness of American political polarization. Yes, the ad includes the typical attack-ad tropes of ominous music and poor lighting, but in many ways Clinton literally lets her opponents speak for themselves and then drops the mic: I rest my case.

She doesn’t need to say any more…to Democrats, the Republican candidates are disturbing enough.

I’ve noticed similar signs in more informal settings – on Facebook, for example, there’s been what I can only describe as an attack on Girl Scout cookies going around. “You deserve to know what Girl Scout cookies fund,” the image reads, going on to list the Girl Scouts’ partnership with Planned Parenthood for sex education and the fact that they welcome transgender girls as peers.

Of course, in my circles, most people are sharing this “attack” ad with notes like, “Good! Let’s buy more cookies!”

And, in case you’re worried the whole thing is some sort of elaborate hoax, there are, in fact, real groups raising concerns about the Girl Scouts.

I hardly mean to indicate here that all pro-life advocates are anti-Girl Scouts or anti-sex education – but this is exactly the dichotomy that polarization sets up for us.

It’s a self-fulfilling prophecy, really. I have to imagine that in conservative circles they are similarly mocking liberal paraphernalia, and all of it serves to entrench the “us” versus “them” mindset. All of us equally horrified that the other half of the country feels a certain way.

I don’t know how we change that, or how we break through that. But it does seem like we’ve reached a whole new level of polarization when the exact same message can be greeted so differently.


Horse Races and Political Journalism

The advancement of the calendar year has brought a whole new energy to political campaign coverage. The Iowa Caucus is just over two weeks away, with the New Hampshire primary a week and a half after that.

Political journalism is aflutter with polling data and predictions – Cruz is expected to win the Iowa caucus, and the second spot seems locked down as well. But other Republicans vying for the nomination have the chance to make waves with a surprise third-place finish.

“‘Exceeds expectations’ is the best headline a candidate can hope for coming out of Iowa,” a reporter shared in a recent NPR Political podcast while discussing what he referred to as the “Iowa Tango.”

The dance is not dissimilar on the Democratic side – Clinton is expected to win Iowa, but Sanders has been slowly chipping away at her lead. An “exceeds expectations” in Iowa – and certainly a win – could lead to a big bump for the Sanders campaign.

This is all very exciting.

For those of us who are political junkies, presidential horse race coverage can be exhilarating. It’s like a (nerdy) action movie where you never know what’s going to happen next, where you’re on the edge of your seat because there’s no guarantee of a (subjectively) happy ending.

This sort of coverage is engaging for a certain segment of the electorate, but is it good journalism?

In We are the Ones We’ve Been Waiting For, my former colleague Peter Levine illustrates an alternative model:

An important example was the decision of the Charlotte Observer to dispense with horse race campaign coverage, that is, stories about how the campaigns were trying to win the election. Instead, the Observer convened representative citizens to choose issues for reporters to investigate and to draw questions that the candidates were asked to answer on the pages of the newspaper.

Rather than asking “who will win the election?” this type of political coverage seeks to answer “who should win the election?”

One could argue that this isn’t an appropriate question for a news outlet to ask. If an ostensibly fair and balanced news outlet was actually biased in a particular candidate’s favor, for example, that would indeed go against the democratic process.

Yet we already know that horse race coverage can be prone to bias – resulting in early or inaccurate calls of elections while voting is still taking place.

Similarly, while certainly prone to bias, the question of who should win is not inherently biased. In the example above the Charlotte Observer answered the question not with their own editorial views, but through a combination of citizen voice and candidate response.

This is hardly the only model for political coverage addressing who should win. For example, outlets could put more emphasis on political investigative journalism – scrutinizing candidate policies for likely impact and outcome. There is certainly some of this already, but it is absent from some outlets while others treat such long-form critiques as secondary to the quick news of poll numbers.

Arguably here we have a market issue – perhaps journalists want to provide this sort of thoughtful analysis, but lack the reader interest to pursue it.

Walter Lippmann – a journalist and WWI propagandist – would certainly agree with that assessment. “The Public,” as a faceless, unidentified herd, will always be too busy with other things to invest real time and thought into a deep understanding of political issues.

As Lippmann describes in his 1925 book, The Phantom Public:

For when private man has lived through the romantic age in politics and is no longer moved by the stale echoes of its hot cries, when he is sober and unimpressed…You cannot move him then with good straight talk about service and civic duty, nor by waving a flag in his face, nor by sending a boy scout after him to make him vote.

To the extent that it is popular, horse race coverage succeeds because it is sexy and exciting. There are some people who have the interest and energy to read more provocative thought pieces on politics, but their numbers are not significant enough to affect so-called “public opinion.”

Lippmann does not fault the generic masses for putting their attention towards other things – it is only natural to have more interest and awareness in those topics which affect you more profoundly.

There is an important and subtle distinction here – just because the unnamed masses have no interest in politics does not mean that all people do not have an interest in politics. In one of my favorite Lippmann quotes, he writes that “The public must be put in its place…so that each of us may live free of the trampling and the roar of the bewildered herd.”

Lippmann does not mean to argue for a technocratic society in which the voices of the common people are excluded. Rather, he highlights an aggregation problem – individual voices are important, while the collective voice of “the Public” – while easiest to hear – is nonsense.

This is, perhaps, what is most attractive about a model such as that used by the Charlotte Observer. Individual voices shaped the process, but on a scale that didn’t aggregate to meaninglessness.

A similar strategy can be seen in work such as that by the Oregon Citizens’ Initiative Review. The review regularly gathers “a panel of randomly-selected and demographically-balanced voters…from across the state to fairly evaluate a ballot measure.” Each panel hears professional testimony about the measure and participates in several days of dialogue before producing a statement “highlighting the most important findings about the measure,” which is then included in the official voter pamphlet.

This type of approach provides a balance between engaging diverse citizen voices and the infeasibility of having every single person participate in such a process.

The Charlotte Observer provides one example of how this balance might be found in political journalism, but there have been so few attempts that it’s impossible to know what’s best. It’s an area that’s desperate for greater innovation, for finding new ways to cover politics and new ways to think about journalists’ and citizens’ roles in politics.


Pedagogy and Disciplines

After several years of working in academia, it’s been interesting to be back in the classroom as a student. Teaching per se was not central to my previous role, but a lot of my work focused on student development.

I’ve also had a somewhat untraditional academic path. My undergraduate studies were in physics, I went on to get a Masters in marketing communication, and then through work I had the opportunity to co-teach an undergraduate philosophy seminar course. So, I’ve been particularly struck by the different pedagogical approaches that can be found in different disciplines.

In many ways, these pedagogical approaches can be linked back to different understandings of wisdom: techne, technical knowledge; episteme, scientific knowledge; and phronesis, practical wisdom.

My undergraduate studies in physics focused on episteme – there was some techne as they taught specific mathematical approaches, but the real emphasis was on developing our theoretical understanding.

My master’s program – aimed at preparing people for careers in marketing – lay somewhere between techne and phronesis. Teaching by case studies is typically associated with phronesis – since the approach is intended to teach students how to make good decisions when confronted with new challenges. But the term is not a perfect fit for marketing – phronesis traditionally takes “good decisions” to be ethical decisions, whereas these studies took “good” to mean “good for business.” The term techne, which implies a certain art or craftsmanship, is also relevant here.

The philosophy seminar I co-taught focused on phronesis. This is by no means intrinsic to philosophy as a discipline, but my specific class focused on civic studies, an emergent field that asks, “what should we do?”

This question is inherently linked to phronesis: as urban planner Bent Flyvbjerg writes in arguing for more phronesis in the social sciences: “a central question for phronesis is: What should we do?”

Each of these types of wisdom could be tied to different pedagogical methods by exploring the tasks expected of students. To develop phronesis, students are confronted with novel contextual situations and asked to develop solutions. For techne, students have to create something – this might be a rote recreation of one’s multiplication tables, or could involve a more artistic pursuit. Episteme would be taught through problem sets – asking students to apply theoretical knowledge to answer questions with discrete answers.

From my own experience, different disciplines tend to gravitate towards different types of wisdom. But I wonder how inherent these approaches are to a discipline. Episteme may be the norm in physics, for example, but what would a physics class focused on phronesis look like?

