Behavioral Responses to Social Dilemmas

I had the opportunity today to attend a talk by Yamir Moreno, of the University of Zaragoza in Spain. A physicist by training, Moreno has more recently been studying game theory and human behavior, particularly in a complex systems setting.

In research published in 2012, Moreno and his team had people of various age groups play a typical prisoner’s dilemma game: a common scenario where an individual’s best move is to defect, but everyone suffers if everyone defects. The best outcome is for everyone to cooperate, but that can be hard to achieve since individuals have incentives to defect.

Playing in groups of four over several rounds, players were embedded in different network topologies – one version of the game placed players on a traditional lattice, while another incarnation placed them in a scale-free network.

As you might expect from a prisoner’s dilemma, when a person’s neighbors cooperated that person was more likely to cooperate in later rounds. When a person’s neighbors defected, that person was more likely to defect in later rounds.

Interestingly, in this first version of the experiment, Moreno found little difference between the lattice and scale-free structure.

Postulating that this was due to the static nature of the network, Moreno devised a different experiment: players were placed in an initial network structure, but they had the option to cut or add connections to other people. New connections were always reciprocal, with both parties having to agree to the connection.

He then ran this experiment over several different parameters, with some games allowing players to see each other’s past actions and other games having no memory.

In the setting where people could see past actions, cooperation was significantly higher – about 20-30% more than expected otherwise. People who chose to defect were cut out of these networks and ultimately weren’t able to benefit from their defecting behavior.
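Moreno’s paper spells out the exact experimental protocol; here is only a minimal sketch of the general idea – a networked prisoner’s dilemma where players can cut ties to known defectors and where new ties require consent. The payoff values, rewiring rules, and imitation dynamics below are all my own assumptions, not the experiment’s:

```python
import random

def run_game(n=40, rounds=60, memory=True, seed=1):
    """Toy networked prisoner's dilemma with optional reputation-based
    rewiring. Illustrative only: payoffs, update rules, and rewiring
    probabilities are assumptions, not Moreno's protocol."""
    rng = random.Random(seed)
    # payoff[(my_move, their_move)] with moves 'C' or 'D' (assumed values)
    payoff = {('C', 'C'): 1.0, ('C', 'D'): 0.0, ('D', 'C'): 1.4, ('D', 'D'): 0.1}
    # start from a sparse random (symmetric) graph
    links = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < 0.1:
                links[i].add(j); links[j].add(i)
    strategy = {i: rng.choice('CD') for i in range(n)}
    last = dict(strategy)  # the public record of each player's last action
    for _ in range(rounds):
        score = {i: sum(payoff[(strategy[i], strategy[j])] for j in links[i])
                 for i in range(n)}
        if memory:
            for i in range(n):
                defectors = [j for j in links[i] if last[j] == 'D']
                if defectors:  # cut a tie to a known defector...
                    j = rng.choice(defectors)
                    links[i].discard(j); links[j].discard(i)
                # ...and propose a new tie; it is reciprocal, so the target
                # only accepts if the proposer's record looks cooperative
                k = rng.randrange(n)
                if k != i and last[i] == 'C':
                    links[i].add(k); links[k].add(i)
        last = dict(strategy)
        # imitate a better-scoring neighbor (assumed update rule)
        for i in range(n):
            if links[i]:
                j = rng.choice(sorted(links[i]))
                if score[j] > score[i]:
                    strategy[i] = strategy[j]
    return sum(1 for s in strategy.values() if s == 'C') / n

print('cooperation with memory:   ', run_game(memory=True))
print('cooperation without memory:', run_game(memory=False))
```

The key design point is the consent rule: a defector cannot simply re-attach to a cooperative cluster, because its public record precedes it.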

I found this particularly interesting because earlier in the talk I had been thinking of Habermas. As interpreted by Gordon Finlayson, Habermas thought the result of standard social theory was “a false picture of society as an aggregate of lone individual reasoners, each calculating the best way of pursuing their own ends. This picture squares with a pervasive anthropological view that human beings are essentially self-interested, a view that runs from the ancient Greeks, through early modern philosophy, and right up to the present day. Modern social theory, under the influence of Hobbes or rational choice theory, thinks of society in similar terms. In Habermas’ eyes, such approaches neglect the crucial role of communication and discourse in forming social bonds between agents, and consequently have an inadequate conception of human association.”

More plainly – it is a critical feature of the Prisoner’s Dilemma that players are not allowed to communicate.

If they could communicate, Habermas offers, they would form communities and associate very differently than in a communications-free system.

Moreno’s second experiment didn’t include communication per se – players didn’t deliberate about their actions before taking them. But in systems with memory, a person’s actions became part of the public record – information that other players could take into account before associating with them.

In Moreno’s account, the only way for cooperators to survive is to form clusters. On the other hand, in a system with memory, a defector must join those communities as a cooperative member in order to survive.


Don’t Overthink Your Brand

Last night I had the great honor to participate in an engaging conversation about branding and authenticity hosted by L.I.R. Productions.

The conversation was geared primarily towards helping entrepreneurs navigate the waters of building a business brand that’s intimately linked to your personal self.

I was very impressed by the insight of my fellow panelists: Aja Aguirre, Beauty Editor at Autostraddle; Joelle Jean-Fontaine, Founder + Designer at KRÉYOL; Natasha Moustache of Natasha Moustache Photography; and Jenn Walker Wall, Research Associate at MIT & Founder/Coach + Consultant at Work Wonders Coaching + Consulting; along with moderator Trish Fontanilla.

Reflecting on the conversation after the panel, I found I had a surprising takeaway: don’t overthink your brand.

It seems kind of blasphemous to write that: after all, I do have a master’s degree in marketing and strategic branding. And it drives me crazy when companies don’t put appropriate resources into thinking and acting strategically about their brand.

But branding for small businesses, and especially independent entrepreneurs, strikes me as notably different from branding for larger organizations.

I once read – I believe it was in Made to Stick – that the purpose of a good communications strategy is to empower employees to act on behalf of a brand. They compared it to military orders from some far off headquarters: soldiers in the field needed to receive clear instructions but also needed to understand the intent behind those instructions so they could dynamically respond to the context on the ground.

Similarly, people who speak on behalf of a brand need to understand the voice and personality of the brand so they can all be good stewards in their various contexts. A person in customer service needs to be just as empowered to speak on behalf of a brand as the person who runs the official Twitter handle.

It takes a lot of effort and a lot of thought to accomplish that. It takes strategic branding.

An individual proprietor doesn’t need to be so rigorous. An individual person can find their own voice and follow their own authenticity.

That’s not to say that a small business’s brand should be synonymous with the owner’s voice – but an entrepreneur has a lot more flexibility to find their business’ voice just as they find their own voice.

“Branding” is such a buzzword, the tendency is to assume that it is something you have to “get.” A business needs a brand.

But really, a brand is just the authentic voice and personality of an organization. You can find that and cultivate that without big budgets and powerpoints. For a small business, you can find that if you just relax and let the brand speak for itself.

 


The World is Bleeding

Late last week, deadly twin bombings tore through Beirut. Within a day, similar attacks were carried out in Paris.

The world mourns.

Pundits discuss air strikes. Politicians approve military response. Governors refuse to accept Syrian refugees, with my own Governor explaining that “the safety and security of the people of the Commonwealth of Massachusetts” takes priority.

No Syrian refugees, he says, those people are dangerous.

We saw that from Paris.

Although at least one of the bombers was a French national. “A Frenchman born to Algerian parents,” the Telegraph reminds us. We have enough dangerous brown people already.

The world is bleeding.

The New York Times runs the headline: Beirut, Also the Site of Deadly Attacks, Feels Forgotten.

As if they’re the fat kid on the playground. The kid we know we should feel sorry for, if only we could stop secretly thinking: thank goodness it’s not me.

Charitably, I’d like to imagine overlooking the tragedy in Beirut as a coping mechanism: there’s just too much terror to take in.

The world is bleeding. And nothing we do can stop it.

Perhaps that thought is just too terrible to accept.

But I suspect that’s not at the root of the disparity in response. Beirut sounds like a place that would get bombed. Paris does not. Do we imagine Beirut as a bustling, urban city, full of young people whose skinny jeans we would silently judge?

We are used to watching brown people die.

That’s so sad, we sigh.

Thank goodness it wasn’t here.

The world is bleeding.

I have no solutions, no glimpses of hope. We are in a dark world of our own and our forefathers’ making.

I don’t know how we suture the wounds.

But I do know, as the great Reverend Dr. Martin Luther King, Jr. said, “Hate cannot drive out hate: only love can do that.”

And ultimately, that is all I am left with: love for the people of Lebanon, for the people of Paris, and love, too, for the people of Syria – fleeing a terror I’m secretly glad is not in my back yard.

The least we could do is welcome them.

 


Glassy Landscapes and Community Detection

In the last of a series of talks, Cris Moore of the Santa Fe Institute spoke about the challenges of “false positives” in community detection.

He started by illustrating the psychological origin of this challenge: people evolved to detect patterns. In the wilderness, he argued, it’s better to see a tiger that isn’t there (a false positive) than to miss a tiger that is (a false negative).

So we tend to see patterns that aren’t there.

In my complex networks class, this point was demonstrated when a community-detection algorithm found a “good” partition of communities in a random graph. This is surprising because a random network doesn’t have communities.

The algorithm found a “good” identification of communities because it was looking for a good identification of communities. That is, it randomly selected a bunch of possible ways to divide the network into communities and simply returned the best one – without any thought as to what that result might mean.

Moore illustrated a similar finding by showing a graph whose nodes were visually clustered into two communities. One side was colored blue and the other side colored red. Only 11% of the edges crossed between the red community and the blue community. Arguably, it looked like a good identification of communities.

But then he showed another identification of communities. Red nodes and blue nodes were all intermingled, but still – only 11% of the edges connected the two communities. Numerically, both identifications were equally “good.”

This usually is a sign that you’re doing something wrong.

Informally, the challenge here is competing local optima – i.e., a “glassy” landscape. There are multiple “good” solutions which produce disparate results.

So if you set out to find communities, you will find communities – whether they are there or not.

Moore used a belief propagation model to consider this point further. Imagine a network where every node is trying to figure out which community it is in, and where every node tells its neighbors which community it thinks it belongs to.

Interestingly, node i‘s message to node j – giving the probability that i is part of community r – is based on the information i receives from all its neighbors other than j. As you may have already gathered, this is an iterative process, so factoring j‘s message to i into i‘s message back to j would just create a nightmarish feedback loop of the two nodes repeating information to each other.

Moore proposed a game: someone gives you a network, telling you the average degree, k, and the probabilities of in-community and out-of-community connections, and they ask you to label the community of each node.

Now, imagine using this belief propagation model to identify the proper labeling of your nodes.
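Here is a minimal sketch of that cavity-style message update for two communities. The toy graph, the in/out affinities, and the iteration count are illustrative assumptions, not Moore’s actual parameters:

```python
import random

def bp_messages(adj, q=2, c_in=8.0, c_out=2.0, steps=30, seed=0):
    """Sketch of belief-propagation community messages on a small graph.

    msg[(i, j)][r] is the probability node i reports to neighbor j that i
    belongs to community r, computed from all of i's neighbors *except* j --
    the cavity rule that avoids the i<->j echo described above.
    Affinity values c_in / c_out are assumed."""
    rng = random.Random(seed)
    aff = [[c_in if r == s else c_out for s in range(q)] for r in range(q)]
    msg = {(i, j): [rng.random() for _ in range(q)] for i in adj for j in adj[i]}
    for key in msg:  # normalize initial random messages
        z = sum(msg[key]); msg[key] = [m / z for m in msg[key]]
    for _ in range(steps):
        new = {}
        for (i, j) in msg:
            out = []
            for r in range(q):
                prob = 1.0
                for k in adj[i]:
                    if k != j:  # exclude j: the cavity condition
                        prob *= sum(msg[(k, i)][s] * aff[s][r] for s in range(q))
                out.append(prob)
            z = sum(out)
            new[(i, j)] = [o / z for o in out]
        msg = new
    return msg

# Toy graph: two triangles joined by a single bridge edge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
msgs = bp_messages(adj)
print(msgs[(0, 1)])  # node 0's cavity message to node 1
```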

If you graph the “goodness” of the communities you find as a function of λ – where λ is (kin – kout)/2k, related to the second eigenvalue of the matrix representing the in- and out-of-community degrees of the nodes – you will find the network undergoes a phase transition at λ equal to 1 over the square root of the average degree.

Moore interpreted the meaning of that phase transition – if you are looking for two communities, there is a trivial, uninformative fixed point, with the transition at λ = 1/(square root of k). His claim – an unsolved mathematical question – is that below that threshold communities cannot be detected. It’s not just that it’s hard for an algorithm to detect communities, but rather that the information is theoretically unknowable.
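In code, the threshold check is just arithmetic. Here I assume kin and kout denote the average numbers of in- and out-of-community connections, so the average degree is k = (kin + kout)/2; the example values are made up:

```python
import math

def detectable(k_in, k_out):
    """Check the two-community detectability condition described above:
    lambda = (k_in - k_out) / (2k) must exceed 1 / sqrt(k),
    where k = (k_in + k_out) / 2 is the average degree (assumed convention)."""
    k = (k_in + k_out) / 2
    lam = (k_in - k_out) / (2 * k)
    threshold = 1 / math.sqrt(k)
    return lam, threshold, lam > threshold

print(detectable(8, 2))  # lambda = 0.6 > 0.447: detectable
print(detectable(6, 4))  # lambda = 0.2 < 0.447: below the threshold
```

Note that in the second case the communities still exist by construction; the claim is that no algorithm could recover them.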

If you are looking for more than two communities, the picture changes somewhat. There is still the trivial fixed point below which communities are theoretically undetectable, but there is also a “good” fixed point where you can start to identify communities.

Between the trivial fixed point and the good fixed point, finding communities is possible, but – in terms of computational complexity – is hard.

Moore added that the good fixed point has a narrow basin of attraction – so if you start with random nodes transmitting their likely community, you will most likely fall into a trivial fixed-point solution.

This type of glassy landscape leads to the type of mis-identification errors discussed above – where there are multiple seemingly “good” identifications of communities within a network, each of which varies wildly in how it assigns nodes.


Difficulties and Dissension

There are two elements that are often – explicitly or implicitly – discouraged in public life. They are separate but deeply interrelated, and their presence or absence really gets to the heart of what “good deliberation” should be.

The first issue I’m thinking of is problematizing: raising challenges and concerns that you don’t have solutions for, and putting time towards issues that seem insurmountably difficult (though worthwhile) to tackle.

The second issue is dissension – disagreement or conflict within a deliberation.

From what I can tell, there has been more thought put towards this second issue, with many notable theorists arguing that debate is in fact critical to the deliberative process.

In Bernard Manin’s Democratic Deliberation, he argues that diversity of perspectives – a common requirement of good deliberation – is not enough. “If we wished to keep in check the force of the confirmatory bias, to which groups are particularly susceptible, we should take deliberate and affirmative measures, not just let diverse voices be heard. Conflicting arguments do not automatically get a fair hearing,” he writes.

In this way, the presence of conflict might mitigate Lynn Sanders’ concerns about power inequities going unchecked. In her article, Against Deliberation, Sanders eloquently outlines the core problem of assuming respect among diverse views as a core element of deliberation: “If we assume that deliberation cannot proceed without the realization of mutual respect, and deliberation appears to be proceeding, we may even mistakenly decide that conditions of mutual respect have been achieved.”

This danger is particularly present in contexts where there is no spoken conflict – that is, as Manin argues, if no opposing views are voiced, it’s not necessarily because no opposing views are held.

If conflicting views are brought to the fore – encouraged and regularly voiced by all present – then this could dissipate concerns about unequal power leading to the exclusion of certain voices.

On its face, resistance to raising problems that are hard to solve may seem like a wholly different phenomenon. But I’ve been struck by Nina Eliasoph’s observations in this regard. In her sociological work with community volunteer groups, she notes how volunteers constantly silenced discussion of big problems – with good intentions, but ultimately to the detriment of the community.

Furthermore, she connects this aversion to seemingly unsolvable problems to the tendency to avoid conflict in discussion:

“To show each other and their neighbors that regular citizens really can be effective, really can make a difference, volunteers tried to avoid issues that they considered “political.” In their effort to be open and inclusive, to appeal to regular, unpretentious fellow citizens without discouraging them, they silenced public-spirited deliberation…Community-spirited citizens judged that by avoiding “big” problems, they could better buoy their optimism. But by excluding politics from their group concerns, they kept their enormous, overflowing reservoir of concern and empathy, compassion and altruism, out of circulation, limiting its contribution to the common good.”

 


Civic Rituals

In Nina Eliasoph’s excellent book Avoiding Politics, she explores, as the subtitle indicates, “how Americans produce apathy in everyday life.” For this thoughtful sociological study, Eliasoph embedded herself with numerous civic groups – including volunteer, recreational and activist organizations. Through her detailed observations, she notes many factors that impede successful civic and political activity.

This morning I was struck by a passage on civic rituals – practices which are seemingly good for civic life but which ultimately discourage public-minded discussion in the public sphere.

Reflecting on numerous special events organized around various community concerns, Eliasoph observes:

The practice of ritual production was one of the most important messages of the rituals. This sporadic and indirect method of showing concern made “care for fellow humans” seem to be a special occasion, something that could happen just a few times a year, easily incorporated into a busy commuter’s schedule without changing anything else.

Lest this point be misinterpreted coming on the eve of Veterans’ Day, I do think it’s important to mention – and Eliasoph agrees – that civic rituals are not inherently bad.

Voting is, arguably, a civic ritual. It is definitely habitual, with prior voting being a strong predictor of future voting behavior. While one ought to do far more than vote to be civic, I think it’s still important to have this ritual in one’s civic life.

But I think about rituals like Martin Luther King, Jr. Day. The topics of racial justice that surface around that holiday are deeply important and critical for us to collectively tackle in our communities. But too often, the day becomes little more than a day for pontificating by public officials. An opportunity for us each to dedicate one day to racial equality, feel good about our commitment to diversity, and then continue to go through life discriminating and blindly committing microaggressions.

In this case, the civic ritual is indeed problematic. We give the issue just enough attention to check it off our list without ever really taking the time to tackle the hard work of confronting it.

Arguably, it’s better to have something than nothing – having no days to acknowledge the realities of racial injustice would indeed be a travesty. But if we didn’t have these simple, ineffective rituals to satisfy our morality – would we then be more likely to tackle the issue more fully?


Interdisciplinarity

When I started my Ph.D. program somebody warned me that being an interdisciplinary scholar is not a synonym for being mediocre at many things. Rather, choosing an interdisciplinary path means having to work just as hard as your disciplinary colleagues, but doing so equally well across multiple disciplines.

I suspect that comment doesn’t really do justice to the challenges faced by scholars within more established disciplines, but I can definitely attest to the fact that working across disciplines can be a challenge.

Having worked in academia for many years, I’d been prepared for this on a bureaucratic level. My program is affiliated with multiple departments and multiple colleges at Northeastern. No way is that going to go smoothly. Luckily, due to some amazing colleagues, I’ve hardly had to deal with the bureaucratic issues at all. In fact, I’ve been quite impressed to find that I experience the department as a well-integrated part of the university. No small feat!

But there remain scholarly challenges to being interdisciplinary.

This morning, I was reading through computer science literature on argument detection and sentiment analysis. This relatively young field has already developed an extensive literature, building off the techniques of machine learning to automatically process large bodies of text.

A number of articles included reflections on how people communicate. If someone says, “but…” that probably means they are about to present a counterargument. If someone says, “first of all…” they are probably about to present a detailed argument.

These sorts of observations are at the heart of sentiment analysis. Essentially, the computer assigns meaning to a statement by looking for patterns of key words and verbal indicators.
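As a toy illustration of that keyword-pattern approach – the markers and labels below are my own examples, not drawn from any particular paper:

```python
import re

# Map discourse markers to the argumentative move they tend to signal.
# Real argument-mining systems learn such cues statistically; these
# particular patterns and labels are illustrative only.
CUES = [
    (r'\bbut\b|\bhowever\b|\bon the other hand\b', 'counterargument'),
    (r'\bfirst of all\b|\bfirstly\b|\bto begin with\b', 'structured argument'),
    (r'\btherefore\b|\bconsequently\b|\bas a result\b', 'conclusion'),
]

def tag_moves(sentence):
    """Return the argumentative moves whose cue patterns appear."""
    sentence = sentence.lower()
    return [label for pattern, label in CUES if re.search(pattern, sentence)]

print(tag_moves("First of all, the data supports this."))
# -> ['structured argument']
print(tag_moves("But the sample size was tiny; therefore I remain skeptical."))
# -> ['counterargument', 'conclusion']
```

Even this tiny sketch shows the limits of the approach: it detects surface cues, not the argument itself.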

I was struck by how divorced these rules of speech patterns were from any social science or humanities literature. Computer scientists have been thinking about how to teach a computer to detect arguments and they’ve established their own entire literature attempting to do so. They’ve made a lot of great insights as they built the field, but – at least from the little I read today – there is something lacking from being so siloed.

Philosophers have, in a manner of speaking, been doing “argument detection” for a lot longer than computer scientists. Surely, there is something we can learn from them.

And this is the real challenge of being interdisciplinary. As I dig into my field(s), I’m struck by the profound quantity of knowledge I am lacking. Each time I pick up a thread it leads deeper and deeper into a literature I am excited to learn – but the literatures I want to study are divergent.

I have so much to learn in the fields of math, physics, computer science, political science, sociology, philosophy, and probably a few other fields I’ve forgotten to name. Each of those topics is a rich field in its own right, but I have to find some way of bringing all those fields together. Not just conceptually but practically. I have to find time to learn all the things.

It’s a bit like standing in the middle of a forest – wanting not just to find the nearest town, but to explore the whole thing.

Typical academia, I suppose, is like a depth-first search – you choose your direction and you dig into it as deep as possible.

Being an interdisciplinary scholar, on the other hand, is more of a breadth-first search – you have to gain a broad understanding before you can make any informed comments about the whole.


It’s No Longer Our Policy to Put Out Fires

There’s a great scene in The West Wing about a fire in Yellowstone. “When something catches on fire, it’s no longer our policy to put it out?”

The scene was based on a real incident of fire management strategy. In 1988, Yellowstone suffered the largest wildfire recorded in its history, burning 30% of the total acreage of the park. The fires called into question the National Park Service’s “let it burn” strategy.

Implemented in 1972, this policy let natural fires run their course and remains policy today. As per a 2008 order from the director of the National Park Service, “Wildland fire will be used to protect, maintain, and enhance natural and cultural resources and, as nearly as possible, be allowed to function in its natural ecological role.”

The “let it burn” strategy may have contributed to the Yellowstone fires, but as a 1998 article in Science argued, the problem may have been that the policy wasn’t implemented soon enough.

Using network analysis to model the spread of forest fires, the researchers conclude, “the best way to prevent the largest forest fires is to allow the small and medium fires to burn.”

This is because forest fires follow a power law distribution: small fires are more frequent and large fires are rare. Most fires won’t reach 1988 magnitude and will burn themselves out before doing much damage. Allowing these fires to burn mitigates the risk of larger fires – because large fires are more likely in a dense forest.
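The density argument can be sketched with a simple site-percolation toy: fill a lattice with trees at density p and measure the largest connected cluster – a proxy for the worst-case reach of a single spark. The lattice size, densities, and cluster-size proxy are my own assumptions, not the Science paper’s model:

```python
import random

def largest_cluster_fraction(p, size=50, seed=3):
    """Fill a size x size lattice with 'trees' at density p and return the
    largest connected cluster as a fraction of all trees -- a proxy for
    the worst-case fire a single ignition could cause."""
    rng = random.Random(seed)
    trees = {(x, y) for x in range(size) for y in range(size)
             if rng.random() < p}
    seen, best = set(), 0
    for start in trees:
        if start in seen:
            continue
        stack, cluster = [start], 0
        seen.add(start)
        while stack:  # flood-fill one connected cluster of trees
            x, y = stack.pop()
            cluster += 1
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if nb in trees and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        best = max(best, cluster)
    return best / len(trees)

# Suppressing small fires lets density creep upward; past the site-
# percolation threshold (~0.59 on this lattice), a single spark can
# reach most of the forest.
print('sparse forest (p=0.4):', largest_cluster_fraction(0.4))
print('dense forest  (p=0.7):', largest_cluster_fraction(0.7))
```

Small, frequent burns keep the forest in the sparse regime, where clusters – and therefore fires – stay small.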

This logic can be generalized to other systems.

A 2008 paper by Adilson E. Motter argued that cascade failures can be mitigated by intentionally removing select nodes and edges after an initial failure.

Cascade failures are typically caused when “the removal of even a small fraction of highly loaded nodes (due to attack or failure)…trigger global cascades of overload failures.” The classic example is the 2003 blackout of the Northeast, which was triggered by one seemingly unimportant failure. But that one failure led to other failures, which led to other failures, and soon a large swath of the U.S. had lost power.

Motter argues that such cascades can be mitigated by acting immediately after the initial failure – intentionally removing those nodes which put more of a strain on the system in order to protect those nodes that can handle greater loads.
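A deliberately tiny sketch of the intuition – not Motter’s actual load-capacity model – using a chain of nodes where a failed node dumps its load onto the next one, and where a deliberate, controlled shutdown absorbs load instead of passing it on:

```python
def cascade(n=10, capacity=1.4, shed=None):
    """Toy cascade on a line of n nodes, each initially carrying load 1.0.

    A failed node's load is passed to its right-hand neighbor -- a drastic
    simplification of Motter's load-redistribution model. `shed` optionally
    names a node to shut down deliberately after the initial failure; a
    controlled shutdown drops its load rather than passing it on.
    Returns the set of nodes lost."""
    load = {i: 1.0 for i in range(n)}
    lost = {0}           # initial, unplanned failure of node 0
    carry = load[0]      # load that must go somewhere
    for i in range(1, n):
        if i == shed:
            lost.add(i)  # sacrificed intentionally; its load is dropped
            carry = 0.0
            continue
        load[i] += carry
        if load[i] > capacity:
            lost.add(i)  # overload failure: pass everything onward
            carry = load[i]
        else:
            break        # load absorbed; cascade stops
    return lost

print('no intervention:', sorted(cascade()))        # the whole chain fails
print('shed node 1:    ', sorted(cascade(shed=1)))  # cascade stops at node 1
```

Sacrificing one node early costs two nodes total instead of ten – the network analogue of letting a small fire burn.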

This strategy is not entirely unlike the “let it burn” policy of the park service. Cutting off weak nodes protects the whole and mitigates the risk of larger, catastrophic events.

 


Feminist Data Visualization

I just had the opportunity to attend a talk by Lauren Klein of Georgia Tech on Feminist Data Visualization: Rethinking the Archive, Reshaping the Field.

Her work, she argued, is feminist not because it includes the works of female data scientists – though it does – but because it seeks to examine the cultural and critical dimensions of data visualization.

Data visualization has the ability to call attention to the scholarly process, and a feminist perspective on data visualization highlights the presence or absence of certain modes of scholarly thought.

Klein began her lecture by exploring the work of Elizabeth Peabody. Quietly at the center of America’s Transcendental movement, Peabody was the business manager of The Dial, the main publication of the Transcendentalists, and is credited with starting the nation’s first kindergarten. She was friends with Emerson and Thoreau. Nathaniel Hawthorne and Horace Mann were her brothers-in-law.

An educator herself, Peabody’s work probed the question: who is authorized to produce knowledge?

Through the creation of elaborate mural charts, Peabody captured complex tables of historical events as aesthetic visualizations intended to provide historic “outlines to the eye.”

Her charts were challenging to create and to decipher – but that was an intentional pedagogical technique. Peabody believed that through the act of interpreting her work, a viewer would create their own historical narrative – they would have a role in generating knowledge.

Her large mural charts, intended to be physically spread out on the floor, each took 15 hours of labor to create. Klein commented that this work is reminiscent of quilting – “a system of knowledge making that was considered women’s work and so has been excised from history.”

Klein compared Peabody’s work to that of William Playfair. Widely considered “the father of data visualization,” Playfair is credited with the creation of the bar chart and the pie chart. His works are recreated by aspiring data artists, and new data tools use his work to demonstrate what they can do.

Playfair’s work is beautiful and easy to read.

But, Klein asked, are we losing something by unquestioningly accepting that approach as the standard?

Klein pointed to the work of one other data visualizer – Emma Willard – who created a beautiful graphic, the Temple of Time, in 1846.

Her work is explicitly framed from the viewer’s perspective. The viewer stands at the fore as the history of time recedes into the past.

Willard’s work makes the implicit argument that data visualization is inherently a subjective process. While we take our bar charts and graphs to be unquestionably factual, Willard argues that data is inherently subjective.

In that way, we are indeed losing something by neglecting these alternative forms of data visualization and by not questioning the perspectives we take in interpreting data.
