Knowledge Structures

I’ve been reading educational psychology literature on knowledge structures – a “representation of a person’s knowledge that includes both the definitions of a set of domain-specific concepts and the relations among those concepts,” as Dorsey defines it.

The basic premise here is that people not only store various concepts they’re familiar with, they store an entire network structure detailing the inter-relations between those concepts. Storing information in this way provides valuable heuristic short-cuts when it comes time to retrieve that information.

This claim has direct implications for education and what it means to “learn.”

As Dorsey argues:

…Human knowledge embodies more than just declarative facts…the organization of knowledge stored in memory is of equal or greater significance than the amount or type of knowledge. The construct of knowledge structures implies that the relation between knowledge acquisition and performance in many domains requires not just a set of declarative facts, but a framework or a set of connections that leads to an understanding of when and how a set of facts applies in a given situation.

Having knowledge stored in network form not only allows for easy retrieval, it lays the foundation for problem-solving in the face of new challenges.

As Collins and Quillian argue, “it is by using inference that people can know much more than they learn.”

Interestingly, a core element of these systems is that they are self-defining: “Many words acquire most of their meaning through their use in sentences,” Preece argues. “In this respect, word meanings, or concepts, are like mathematical points: They have few qualities other than their relationships with other concepts.”

Shavelson similarly insists on a somewhat tautological definition, writing, “a concept, then, is a set of relations among other concepts.”

And Collins and Quillian argue:

An interesting aspect of such a network is that within the system there are no primitive or undefined terms in the mathematical sense; everything is defined by everything else so that the usual logical (axiomatic) structure of mathematical systems does not hold. In this respect, it is like a dictionary.

In many of the papers I’ve been reading, these networks are elicited through word association: researchers provide subjects with a word and subjects provide as many associated words as possible.

Shavelson does this experiment with physics terms and compares the development of physics students and non-physics students. Over the course of the semester, the students in a physics class increased the number of words they could associate with a root physics term.

Shavelson also finds a sharp increase in the number of “constrained responses” – i.e., “if the term used in the response was an element in the defining equation for the special stimulus word. For example, the response term ‘mass,’ was scored as a constrained response to the special stimulus ‘force,’ since force equals mass times acceleration.”
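To make the scoring idea concrete, here is a minimal sketch of how a response might be flagged as “constrained” – the defining-equation dictionary below is invented for illustration and is not Shavelson’s actual coding scheme:

```python
# A toy scorer for "constrained responses": a response counts as constrained
# if it appears in the defining equation of the stimulus term. The equations
# listed here are illustrative, not Shavelson's coding manual.
DEFINING_EQUATIONS = {
    "force": {"mass", "acceleration"},      # F = m * a
    "acceleration": {"velocity", "time"},   # a = dv / dt
    "work": {"force", "distance"},          # W = F * d
}

def score_responses(stimulus, responses):
    """Split word-association responses into (constrained, unconstrained)."""
    related = DEFINING_EQUATIONS.get(stimulus, set())
    constrained = [r for r in responses if r in related]
    unconstrained = [r for r in responses if r not in related]
    return constrained, unconstrained

print(score_responses("force", ["mass", "energy", "acceleration", "push"]))
# (['mass', 'acceleration'], ['energy', 'push'])
```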

Validation of these networks is, of course, a non-trivial process. But scholars have been chipping away at this question for decades. It’s still not clear how best to capture or model these knowledge structures, but the body of literature that exists in this space so far indicates that this is a meaningful way to approach human learning and understanding.

Lightning Talk

I’m just returning from three back-to-back conferences: PolNet, hosted by Ohio State University; NetSci, hosted by Indiana University; and Frontiers of Democracy, hosted by Tisch College at Tufts University. All three conferences were great, and they all brought together people from various slices of my work at the intersection of political science, network science, and civic studies.

I expect that in the coming week I’ll post more reflecting on each of these conferences, but for now I wanted to share a brief lightning talk I gave to introduce myself at the NetSci satellite session hosted by the Society for Young Network Scientists. We were each restricted to 3 minutes – which isn’t very much time when speaking to a cross-disciplinary group with divergent areas of focus.

But here’s what I came up with, as I tried to explain the motivation behind my (nascent) research:


Good morning everyone. My name is Sarah Shugars and I’m a doctoral student at Northeastern’s Network Science program where I just completed my 2nd year.

My work is driven by the central question: What should we do?

 Every word in this sentence is important:

  • What: What are the specific actions to be taken?
  • Should: What are the right actions and what are the right criteria for making that decision?
  • We: Literally you and I. Humans in this room. As citizens, we are each agents with a role to play in shaping the world around us. We may choose actions aimed at influencing others, but fundamentally we must decide how we will act – individually and together.
  • And of course Do: Once we figure out what actions should be done – we must actually do those actions.

What should we do?

This framework comes from civic studies, specifically Peter Levine at Tufts University.

The question is intended to give agency to individuals, but also to the communities they belong to. As members of a society we should neither act with blind individualism – doing whatever we want whenever we want it – nor should we completely withdraw from political life, abdicating our responsibility to add our unique ideas and perspectives to the collective challenge of tackling complicated problems.

We each have a responsibility to share our own voices – and to ensure that the voices of those around us are heard. We have a responsibility to build spaces where everyone can participate in addressing the fundamental challenge we face:

What should we do?

You may be wondering what this question has to do with Network Science. Like all of you, I find my work driven by another question:

What are the nodes and what are the links?

On one level we could think of this as a social network problem: Who comes into contact with whom and how are ideas propagated and created throughout the network?

These are important questions, but the core of my work focuses on a different level of analysis: How do we collectively reason about our shared problems?

Under this conception, I take nodes to be ideas, beliefs or concepts. The edges between them represent the logical or conceptual connections between these ideas. I believe A, which is related to my belief B.

Importantly, these networks may have seeming inconsistencies – ideas may be in tension with each other and may struggle to co-exist. When coming to a decision about an issue, then, I weigh the different factors at stake – these are the nodes in my network – and I come to a conclusion appropriate to the context.

These individual networks of ideas then connect as we reason together. We each shape the networked thinking of those around us while simultaneously shifting our own beliefs. We may discard nodes or edges, or even collectively discover new nodes and edges we hadn’t considered before.

In reasoning together – in collectively searching the solution space – we can find and evaluate solutions, we can work together to answer the question:

What should we do?

Thank you.

Gendered Creative Teams: The Challenge of Quantification

I recently had the privilege of being an invited speaker at the Gendered Creative Teams workshop hosted by Central European University and organized by Ancsa Hannák, Roberta Sinatra, and Balázs Vedres.

It was a truly remarkable gathering of scholars, researchers, and activists, featuring two full days of provocations and rich discussion.

Perhaps one of the most interesting aspects of the conference was that most of the attendees did not come from a scholarly background focusing on gender, but rather came at the topic originally through the dimension of creative teams. The conference, then, provided an opportunity to think more deeply about this latent – but deeply salient – dimension of the work.

Because of this, one of the ongoing themes of the conference – and one which particularly stuck with me – focused on the subtle ways in which the patriarchy shapes the creation and distribution of knowledge.

As some of you may know, I am fond of quoting Bent Flyvbjerg’s axiom: power is knowledge.

As he elaborates:

…Power defines physical, economic, ecological, and social reality itself. Power is more concerned with defining a specific reality than understanding what reality is. …Power, quite simply, produces that knowledge and that rationality which is conducive to the reality it wants. Conversely, power suppresses that knowledge and rationality for which it has no use.

This presents a troubling challenge to the enlightenment ideal of rationality. As scientists and researchers, we have a duty and a commitment to rationality; a deep desire to do our best to discover the Truth. But as human beings, living in and shaped by our societies, we may simultaneously be blind to the assumptions and biases which define our very conception of reality.

If you’re skeptical of that view, consider how the definition of “race” has changed in the U.S. Census over time. The ability to choose your own race – as opposed to having it selected for you by the interpretation of a census interviewer – was only introduced in 1960. Selecting more than one race was only allowed beginning in 2000.

These changes reflect shifting social understandings of what race is and who gets to define it.

We see a similarly problematic trend around the social construction of gender. Who gets to define a person’s gender? How many genders are there? These are non-trivial questions, and as researchers we have a responsibility to push beyond our own socialized sense of the answers.

Indeed, quantitative analysis may prove to be particularly problematic – there’s just something so reassuring, so confidence-inducing, about numbers and statistics.

As Johanna Drucker warns of statistical visualizations:

…Graphical tools are a kind of intellectual Trojan horse, a vehicle through which assumptions about what constitutes information swarm with potent force. These assumptions are cloaked in a rhetoric taken wholesale from the techniques of the empirical sciences that conceals their epistemological biases under a guise of familiarity. So naturalized are the Google maps and bar charts generated from spread sheets that they pass as unquestioned representations of “what is.”

As a quantitative researcher myself – and one who is quite fond of visualizations – I don’t take this as an admonition to shun quantitative analysis altogether. Rather, I take it as a valuable, humanistic complication of what may otherwise go unobserved or unsaid.

Drucker’s warning ought to resonate with all researchers: our scholarship would be poor indeed if everything we presented was taken as wholesale truth by our peers. Research needs questioning, pushback, and a close evaluation of assumptions and limitations.

We know that our studies – no matter how good, how rigorous – will always be a simplification of the Truth. No one can possibly capture all of reality in a single snapshot study. Our goal then, as researchers, must be to try and be honest with ourselves and critical of our assumptions.

As Amanda Menking commented during the conference – it’s okay if you need to simplify gender down from something that’s experienced uniquely for everyone and provide narrow man/woman/other:___ options on a survey. There are often good reasons to make that choice.

But you can’t ignore the fact that it is a choice.

If you choose to look at a gender binary, ask yourself why you made that choice and explain your reasoning in at least a sentence or two.

Similarly, there are often good reasons to use previously validated survey measures: such approaches can provide meaningful comparison to earlier work and are likely to be more robust than quickly making up your own questions on the day you’re trying to get your survey live.

But, again, such decisions are a choice.

If you use such measures you should know who created them, what context defined them, and you should critically consider the implicit biases which may be buried in them.

All methodological choices have an impact on research – that’s why we constantly need replication and why we all carry a healthy list of future work. Of course we still need to make these choices – to do otherwise would paralyze us, preventing any research at all – but we have to acknowledge that they are choices.

Ignoring these complications may be an easier path, especially when it comes to aspects which are so well socialized into the broader population. But that easier path reduces scholarship to the level of pop-science: a quick, flashy headline that glosses over the real complications and limitations inherent in any single study.

You don’t have to solve all the complications, but you do have to acknowledge them. To do otherwise is just bad science.

Axelrod’s Cognitive Networks

Before introducing the cultural diffusion model he is now better known for, Axelrod proposed mapping individuals’ reasoning process as a causal network.

“A person’s beliefs can be regarded as a complex system,” he argued, and, “given a person’s concepts and beliefs, and given certain rules for deducing other beliefs from them” it is therefore possible to model how “a person would make a choice among alternatives” (Axelrod, 1976).

Axelrod called these networks of beliefs and causal relationships “cognitive maps,” and he engaged other scholars in deriving cognitive maps for select political elites using a detailed hand-coding procedure of a subject’s existing documents.

For Axelrod, the representation of beliefs as a network was a natural and obvious extension of how individuals reason. “People do evaluate complex policy alternatives in terms of the consequences a particular choice would cause, and ultimately of what the sum of these effects would be,” he argued. “Indeed, such causal analysis is built into our language, and it would be very difficult for us to think completely in other terms, even if we tried” (Axelrod, 1976).

Axelrod takes the nodes of these networks to be concepts, with directed edges between them indicating causal links. Importantly, the nodal concepts are not things but rather “variables that can take on different values.” This makes the cognitive map “an algebraic rather than a logical system.”
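To get a feel for how such a map can be put to work, here is a minimal sketch of sign propagation over an invented cognitive map – not one of Axelrod’s coded documents – where the indirect effect of one concept on another is the product of edge signs along each path:

```python
# A toy cognitive map: concepts as nodes, signed directed edges as causal
# assertions (+1 promotes, -1 inhibits). The map itself is invented.
COGNITIVE_MAP = {
    ("military spending", "security"): +1,
    ("military spending", "budget deficit"): +1,
    ("budget deficit", "economic stability"): -1,
    ("economic stability", "security"): +1,
}

def path_effects(source, target, cmap, path=None):
    """Yield the sign-product of every simple path from source to target."""
    path = path or [source]
    for (cause, effect), sign in cmap.items():
        if cause == source and effect not in path:
            if effect == target:
                yield sign
            else:
                for downstream in path_effects(effect, target, cmap, path + [effect]):
                    yield sign * downstream

effects = list(path_effects("military spending", "security", COGNITIVE_MAP))
print(effects)       # [1, -1]: one supporting path, one undermining path
print(sum(effects))  # a crude net effect under simple summation
```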

Axelrod saw great value in the approach of cognitive mapping – as a tool for understanding decision-making, as a resource capable of generating meaningful policy suggestions, and as a way to imagine how individuals’ maps might aggregate into a collective.

Computational Models of Belief Systems & Cultural Systems

While work on belief systems is similar to research on cultural systems – both use agent-based models to explore how complex systems evolve given a simple set of actor rules and interactions – there are important conceptual differences between the two lines of work.

Research on cultural systems takes a macro-level approach, seeking to explain if, when, and how distinctive communities of similar traits emerge, while research on belief systems uses comparable methods to understand if, when, and how distinctive individuals come to agree on a given point.

The difference between these approaches is subtle but notable. The cultural systems approach begins with the observation that distinctive cultures do exist, despite local tendencies for convergence, while research on belief systems begins from the observation that groups of people are capable of working together, despite heterogeneous opinions and interests.

In his foundational work on cultural systems, Axelrod begins, “despite tendencies towards convergence, differences between individuals and groups continue to exist in beliefs, attitudes, and behavior” (Axelrod, 1997).

Compare this to how DeGroot begins his exploration of belief systems: “consider a group of individuals who must act together as a team or committee, and suppose that each individual in the group has his own subjective probability distribution for the unknown value of some parameter. A model is presented which describes how the group might reach agreement on a common subjective probability distribution for the parameter by pooling their individual opinions” (DeGroot, 1974).

In other words, while cultural models seek to explain the presence of homophily and other system-level traits, belief systems more properly seek to capture deliberative exchange. The important methodological difference here is that cultural systems model agent change as a function of similarity, while belief systems model agent change as a process of reasoning.
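As a concrete illustration of what that pooling process looks like, here is a minimal sketch of DeGroot-style updating – each agent repeatedly replaces their estimate with a weighted average of the estimates of those they trust. The trust matrix is invented; rows sum to one:

```python
# A minimal DeGroot-style pooling sketch: beliefs converge toward a common
# estimate when agents keep averaging over a (row-stochastic) trust matrix.
import numpy as np

trust = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
])
beliefs = np.array([0.0, 0.5, 1.0])   # initial subjective estimates

for _ in range(50):
    beliefs = trust @ beliefs

print(beliefs)   # all three agents end up near the same value
```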

 

Computational Models of Cultural Systems

Computational approaches to studying the broader social context can be found in work on the emergence and diffusion of communities in cultural systems. Spicer makes an anthropological appeal for the study of such systems, arguing that cultural change can only be properly considered in relation to more stable elements of culture. These persistent cultural elements, he argues, can best be understood as ‘identity systems,’ in which individuals bestow meaning on symbols. Spicer notes that there are collective identity systems (i.e., culture) as well as individual systems, and chooses to focus his attention on the former. Spicer talks about these systems in implicitly networked terms: identity systems capture “relationships between human beings and their cultural products” (Spicer, 1971). To the extent that individuals share the same relationships with the same cultural products, they are united under a common culture; they are, as Spicer says, “a people.”

Axelrod presents a more robust mathematical model for studying these cultural systems. Similar to Schelling’s dynamic models of segregation, Axelrod imagines individuals interacting through processes of social influence and social selection (Axelrod, 1997). Agents are described with n-length vectors, with each element initialized to a value between 0 and m. The elements of the vector represent cultural dimensions (features), and the value of each element represents an individual’s state along that dimension (traits). Two individuals with the exact same vector are said to share a culture, while, in general, agents are considered culturally similar to the extent to which they hold the same trait for the same feature.

Agents on a grid are then allowed to interact: two neighboring agents are selected at random. With a probability equal to their cultural similarity, the agents interact. An interaction consists of selecting a random feature on which the agents differ (if there is one), and updating one agent’s trait on this feature to its neighbor’s trait on that feature. This simple model captures both the process of choice homophily, as agents are more likely to interact with similar agents, and the process of social influence, as interacting agents become more similar over time.

Perhaps the most surprising finding of Axelrod’s approach is just how complex this cultural system turns out to be. Despite the model’s simple rules, he finds that it is difficult to predict the ultimate number of stable cultural regions based on the system’s n and m parameters.
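To give a sense of just how simple these rules are, here is a minimal sketch of the model as described above; the grid size, number of features, and number of traits are arbitrary choices for illustration:

```python
# A minimal sketch of Axelrod's culture model on a grid: interaction
# probability equals cultural similarity, and an interaction copies one
# differing trait from a neighbor. Parameters are illustrative.
import random

N, FEATURES, TRAITS, STEPS = 10, 5, 10, 200_000
grid = {(x, y): [random.randrange(TRAITS) for _ in range(FEATURES)]
        for x in range(N) for y in range(N)}

def neighbors(x, y):
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (x + dx, y + dy) in grid]

for _ in range(STEPS):
    site = random.choice(list(grid))
    other = random.choice(neighbors(*site))
    a, b = grid[site], grid[other]
    similarity = sum(f == g for f, g in zip(a, b)) / FEATURES
    if random.random() < similarity:
        differing = [i for i in range(FEATURES) if a[i] != b[i]]
        if differing:
            i = random.choice(differing)
            a[i] = b[i]        # the chosen site adopts its neighbor's trait

print(len({tuple(v) for v in grid.values()}))   # number of distinct cultures left
```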

This concept of modeling cultural convergence through simple social processes has maintained a foothold in the literature and has been slowly gaining more widespread attention. Bednar and Page take a game-theoretic approach, imagining agents who must play multiple cognitively taxing games simultaneously. Their finding that in these scenarios “culturally distinct behavior is likely and in many cases unavoidable” (Bednar & Page, 2007) is notable because classic game-theoretic models fail to explain the emergence of culture at all: rather, rational agents simply maximize their utility and move on. In their simultaneous game scenarios, however, cognitively limited agents adopt the strategies that can best be applied across the tasks they face. Cultures, then, emerge as “agents evolve behaviors in strategic environments.” This finding underscores Granovetter’s argument of embeddedness (M. Granovetter, 1985): distinctive cultures emerge because regional contexts influence adaptive choices, which in turn influence an agent’s environment.

Moving beyond Axelrod’s grid implementation, Flache and Macy (Flache & Macy, 2011) consider agent interaction on the small-world network proposed by Watts and Strogatz (Watts & Strogatz, 1998). This model randomly rewires a grid with select long-distance ties. Following Granovetter’s strength of weak ties theory (M. S. Granovetter, 1973), the rewired edges in the Watts-Strogatz model should bridge clusters and promote cultural diffusion. Flache and Macy also introduce the notion of the valence of interaction, considering social influence along dimensions of assimilation and differentiation, and taking social selection to consist of either attraction or xenophobia. In systems with only positively-valenced interaction (assimilation and attraction), they find that the ‘weak’ ties have the expected result: cultural signals diffuse and the system tends towards cultural integration. However, introducing negatively valenced interactions (differentiation and xenophobia) leads to cultural polarization, resulting in deep disagreement between communities which themselves have high internal consensus.
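For a sense of what the rewiring alone does – this sketch only builds the Watts-Strogatz substrate, not Flache and Macy’s full valenced model – here is a minimal example showing how a little rewiring collapses average path length while clustering stays comparatively high:

```python
# A minimal Watts-Strogatz illustration: as the rewiring probability p rises,
# average path length drops sharply while clustering erodes more slowly.
# Parameters are illustrative.
import networkx as nx

for p in (0.0, 0.01, 0.1):
    g = nx.connected_watts_strogatz_graph(n=500, k=6, p=p, seed=1)
    print(p,
          round(nx.average_shortest_path_length(g), 2),
          round(nx.average_clustering(g), 2))
```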

Economics of Matching

A canonical problem in graph theory is that of matching – pairing people (or nodes) based on mutual preference. The classic example of this – framed, unfortunately, in a cis-heteronormative way – is known as the marriage problem. Assuming knowledge of the whole population, men have a ranked-order list of appropriate female partners and women similarly make a ranked-order list of appropriate male partners. The question, then, is whether we, as an all-knowing mathematician, can make a matching in which no unmatched (opposite-sex) pair would prefer each other to the partners they have been matched with.

The mathematical solution to “stable marriage matching” is elegant, and worth a post of its own at some point. But for the moment, I was recently struck by the economic implications of this problem. That is, I had always considered it from the vantage point of the all-knowing observer, with the implicit understanding that such scope of vision is what makes the solution possible.
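For the curious, here is a minimal sketch of the deferred-acceptance (Gale-Shapley) procedure that produces such a stable matching; the preference lists are invented for illustration:

```python
# A minimal deferred-acceptance (Gale-Shapley) sketch: proposers work down
# their preference lists; reviewers hold their best offer so far. The result
# contains no blocking pair.
def stable_match(proposer_prefs, reviewer_prefs):
    free = list(proposer_prefs)                  # proposers without a match
    next_choice = {p: 0 for p in proposer_prefs}
    engaged = {}                                 # reviewer -> proposer
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}

    while free:
        p = free.pop(0)
        r = proposer_prefs[p][next_choice[p]]    # best reviewer not yet proposed to
        next_choice[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:   # reviewer prefers the new proposer
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)
    return {p: r for r, p in engaged.items()}

proposers = {"a": ["x", "y", "z"], "b": ["y", "x", "z"], "c": ["y", "z", "x"]}
reviewers = {"x": ["b", "a", "c"], "y": ["c", "a", "b"], "z": ["a", "b", "c"]}
print(stable_match(proposers, reviewers))        # {'b': 'x', 'c': 'y', 'a': 'z'}
```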

Roth’s 2008 article, What have we learned from market design?, brings a new perspective to the market failures that can result from the lack of such global coordination. Because it’s such an interesting story, I include below a long excerpt describing the history of today’s residency-matching program for medical school graduates:

The first job American doctors take after graduating from medical school is called a residency. These jobs are a big part of hospitals’ labor force, a critical part of physicians’ graduate education, and a substantial influence on their future careers. From 1900 to 1945, one way that hospitals competed for new residents was to try to hire residents earlier than other hospitals. This moved the date of appointment earlier, first slowly and then quickly, until by 1945 residents were sometimes being hired almost two years before they would graduate from medical school and begin work.

When I studied this in Roth (1984) it was the first market in which I had seen this kind of “unraveling” of appointment dates, but today we know that unraveling is a common and costly form of market failure. What we see when we study markets in the process of unraveling is that offers not only become increasingly early, but also become dispersed in time and of increasingly short duration. So not only are decisions being made early (before uncertainty is resolved about workers’ preferences or abilities), but also quickly, with applicants having to respond to offers before they can learn what other offers might be forthcoming. Efforts to prevent unraveling are venerable, for example Roth and Xing (1994) quote Salzman (1931) on laws in various English markets from the 13th century concerning “forestalling” a market by transacting before goods could be offered in the market.

In 1945, American medical schools agreed not to release information about students before a specified date. This helped control the date of the market, but a new problem emerged: hospitals found that if some of the first offers they made were rejected after a period of deliberation, the candidates to whom they wished to make their next offers had often already accepted other positions. This led hospitals to make exploding offers to which candidates had to reply immediately, before they could learn what other offers might be available, and led to a chaotic market that shortened in duration from year to year, and resulted not only in missed agreements but also in broken ones. This kind of congestion also has since been seen in other markets, and in the extreme form it took in the American medical market by the late 1940’s, it also constitutes a form of market failure (cf. Roth and Xing 1997, and Avery, Jolls, Roth, and Posner 2007 for detailed accounts of congestion in labor markets in psychology and law). Faced with a market that was working very badly, the various American medical associations (of hospitals, students, and schools) agreed to employ a centralized clearinghouse to coordinate the market. After students had applied to residency programs and been interviewed, instead of having hospitals make individual offers to which students had to respond immediately, students and residency programs would instead be invited to submit rank order lists to indicate their preferences. That is, hospitals (residency programs) would rank the students they had interviewed, students would rank the hospitals (residency programs) they had interviewed, and a centralized clearinghouse — a matching mechanism — would be employed to produce a matching from the preference lists. Today this centralized clearinghouse is called the National Resident Matching Program (NRMP).

Deliberation in a Homophilous Network

The social context of a society is both an input and an output of the deliberative system. As Granovetter argued, “actors do not behave or decide as atoms outside a social context, nor do they adhere slavishly to a script written for them by the particular intersection of social categories that they happen to occupy. Their attempts at purposive action are instead embedded in concrete, ongoing systems of social relations” (Granovetter, 1985). This “problem of embeddedness” manifests in a scholarly tension between studying the role of individual agency and the structures that shape available actions.

Consider, for example, the presence of homophily in social networks. A priori, there is no reason to attribute such a feature to a single mechanism. Perhaps homophily results from individual preference for being with ‘like’ people, or perhaps it results primarily from the structural realities within which agents are embedded: we should not be surprised that high school students spend a great deal of time with each other.

From a deliberative perspective, widespread homophily is deeply disconcerting. Networks with predominately homophilous relationships may indicate disparate spheres of association, even while maintaining a global balance on the whole. The linking patterns between an equal number of liberal and conservative blogs, for example, reveal distinctively separate communities rather than a more robust, crosscutting public sphere (Adamic & Glance, 2005).
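As a toy illustration of what “predominately homophilous” looks like in measurable terms – this is an invented two-camp network, not the Adamic and Glance data – one can compute attribute assortativity, which approaches 1 when ties run almost exclusively between like-minded nodes:

```python
# Measuring homophily as attribute assortativity on a toy two-camp network.
import networkx as nx

g = nx.Graph()
g.add_nodes_from(range(6), ideology="liberal")
g.add_nodes_from(range(6, 12), ideology="conservative")
# dense within-camp linking, a single cross-cutting tie
g.add_edges_from((i, j) for i in range(6) for j in range(i + 1, 6))
g.add_edges_from((i, j) for i in range(6, 12) for j in range(i + 1, 12))
g.add_edge(0, 6)

print(nx.attribute_assortativity_coefficient(g, "ideology"))   # close to 1
```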

Such homophily is particularly troubling as diversity of thought is arguably one of the most fundamental requirements for deliberation to proceed. Indeed, the vision of democratic legitimacy emerging from deliberation rests on the idea that all people, regardless of ideology, actively and equally participate (Cohen, 1989; Habermas, 1984; Mansbridge, 2003; Young, 1997). A commitment to this ideal has enshrined two standards – respect and the absence of power – as the only elements of deliberation which go undisputed within the broader field (Mansbridge, 2015). Furthermore, if we are concerned with the quality of deliberative output, then we ought to prefer deliberation amongst diverse groups, which typically identify better solutions than more homogeneous groups (Hong, Page, & Baumol, 2004). Most pragmatically, homophily narrows the scope of potential topics for deliberation. Indeed, if deliberation is to be considered as an “ongoing process of mutual justification” (Gutmann & Thompson, 1999) or as a “game of giving and asking for reasons” (Neblo, 2015), then deliberation can only properly take place between participants who, in some respects, disagree. In a thought experiment of perfect homophily, where agents are exactly identical to their neighbors, deliberation does not take place – simply because there is nothing for agents to deliberate about.

Collective Action and the Problem of Embeddedness

Divergent conceptions of homophily fall within a broader sociological debate about the freedom of an individual given the structural constraints of his or her context. As Gueorgi Kossinets and Duncan Watts argue, “one can always ask to what extent the observed outcome reflects the preferences and intentions of the individuals themselves and to what extent it is a consequence of the social-organizational structure in which they are embedded” (Kossinets & Watts, 2009). If our neighborhoods are segregated is it because individuals prefer to live in ‘like’ communities, or is it due to deeper correlations between race and socio-economic status? If our friends enjoy the same activities as ourselves, is it because we prefer to spend time with people who share our tastes, or because we met those friends through a shared activity?

The tension between these two approaches is what Granovetter called the “problem of embeddedness” (Granovetter, 1985), because neither the agent-based nor structural view captures the whole picture. As Granovetter argued, “actors do not behave or decide as atoms outside a social context, nor do they adhere slavishly to a script written for them by the particular intersection of social categories that they happen to occupy. Their attempts at purposive action are instead embedded in concrete, ongoing systems of social relations.”

The challenge of embeddedness can be seen acutely in network homophily research, as scholars try to account for both the role of individual agency and the structures which shape available options. In their yearlong study of university relationships, Kossinets and Watts observe that both agent-driven and structurally-induced homophily play integral roles in tie formation. Indeed, the two mechanisms “appear to act as substitutes, each reinforcing the observed tendency of similar individuals to interact” (Kossinets & Watts, 2009). In detailed, agent-based studies, Schelling finds that individual preference leads to amplified global results; that extreme structural segregation can result from individuals’ moderate preference against being in the minority (Schelling, 1971). Mutz similarly argues that the workplace serves as an important setting for diverse political discourse precisely because it is a structured institution in which individual choice is constrained (Mutz & Mondak, 2006).
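Schelling’s result is easy to reproduce. Here is a minimal sketch of the dynamic – the grid size, vacancy rate, and 30% threshold are all arbitrary illustrative choices – in which agents relocate whenever too few of their neighbors share their type:

```python
# A minimal Schelling segregation sketch: agents of two types relocate to a
# random vacant cell whenever fewer than THRESHOLD of their occupied neighbors
# share their type. Even a mild preference yields strong segregation.
import random

SIZE, EMPTY_FRAC, THRESHOLD = 30, 0.1, 0.3
cells = [(x, y) for x in range(SIZE) for y in range(SIZE)]
random.shuffle(cells)
n_empty = int(EMPTY_FRAC * len(cells))
world = {c: None for c in cells[:n_empty]}                       # vacant sites
world.update({c: i % 2 for i, c in enumerate(cells[n_empty:])})  # two agent types

def like_fraction(cell):
    """Fraction of occupied Moore neighbors sharing this agent's type."""
    x, y = cell
    nbrs = [world[n]
            for n in ((x + dx, y + dy)
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                      if (dx, dy) != (0, 0))
            if n in world and world[n] is not None]
    return sum(t == world[cell] for t in nbrs) / len(nbrs) if nbrs else 1.0

for _ in range(50):                               # relocation sweeps
    empties = [c for c, v in world.items() if v is None]
    movers = [c for c, v in world.items()
              if v is not None and like_fraction(c) < THRESHOLD]
    for c in movers:
        dest = empties.pop(random.randrange(len(empties)))
        world[dest], world[c] = world[c], None
        empties.append(c)

occupied = [c for c, v in world.items() if v is not None]
print(sum(like_fraction(c) for c in occupied) / len(occupied))   # well above the ~0.5 random baseline
```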

Consider also Michael Spence’s economic model of gender-based pay disparity (Spence, 1973). Imagine an employee pool in which people have two observable characteristics: sex and education. An employer assigns each employee to a higher or lower wage by inferring the unobserved characteristic of productivity. Assume also that gender and productivity are perfectly uncorrelated. Intuitively, this should mean that gender and pay will also be uncorrelated; however, Spence’s game-theoretic model reveals a surprising result. After initial rounds of hiring, the employer will begin to associate higher levels of education with higher levels of productivity. More precisely, because an employer’s opinions are conditioned on gender as well as education, “if at some point in time men and women are not investing in education in the same ways, then the returns to education for men and women will be different in the next round.” In other words, Spence finds that there are numerous system equilibria and, given differing initial investments in education, the pay schedules for men and women will settle into different equilibrium states.
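Stated informally, the logic runs something like this – the notation (w, e, g, q, c) is mine, not Spence’s:

```latex
\begin{align*}
  w(e, g) &= \mathbb{E}\left[\, q \mid e,\, g \,\right]
      && \text{wage offered at education level $e$ and gender $g$} \\
  e^{*}(q, g) &= \arg\max_{e} \bigl( w(e, g) - c(e, q) \bigr)
      && \text{education chosen by a worker with productivity $q$} \\
  \text{equilibrium:}\quad & \Pr(q \mid e, g) \text{ is confirmed by the induced choices } e^{*}(q, g)
\end{align*}
```

Because the employer’s beliefs are conditioned on gender, men and women face separate fixed-point problems; each group can settle into a different self-confirming wage schedule even though productivity and gender are statistically independent.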

Here again, we see the interaction of agency and structure. Whether initial investments in education differed because of personal taste or as the result of structural gender discrimination, once a gender-based equilibrium has been reached, individual investment in education does little to shift the established paradigm. A woman today may be paid less because women were barred from educational attainment two generations ago. That inequity may be further compounded by active discrimination on the part of an employer, but the structural history itself is enough to result in disparity. Furthermore, this structural context then sets the stage for inducing gender-based homophily, as men and women could be socially inclined towards different workplaces or career paths.

Given these complex interactions, where past individual choices accumulate into future social context, it is perhaps unsurprising that teasing apart the impact of agency and structure is no small feat; one that is virtually impossible in the absence of dynamic data (Kossinets & Watts, 2009). Individuals embedded within this system may similarly struggle to identify their own role in shaping social structures. As Schelling writes, “people acting individually are often unable to affect the results; they can only affect their own positions within the overall results” (Schelling, 1971). Acting individually, we create self-sustaining segregated societies; opting into like communities and presenting our children with a narrow range of friends with whom to connect.

Yet the very role that individual actions play in building social structures indicates that individuals may work together to change that structural context. It is a classic collective action problem – if we collectively prefer diverse communities, then we must act collectively, not individually. In her extensive work on collective action problems, Elinor Ostrom finds that “individuals frequently do design new institutional arrangements – and thus create social capital themselves through covenantal processes” (Ostrom, 1994). Embeddedness presents a methodological challenge but it need not be a problem; it simply reflects the current, changeable, institutional arrangement. That individual actions create the structures which in turn affect future actions need not be constraining – indeed, it illustrates the power which individuals collectively possess: the power to shape context, create social structures, and to build social capital by working together to solve our collective problems.

____

Granovetter, M. (1985). Economic action and social structure: The problem of embeddedness. American Journal of Sociology, 481-510.

Kossinets, G., & Watts, D. J. (2009). Origins of homophily in an evolving social network. American Journal of Sociology, 115(2), 405-450.

Mutz, D. C., & Mondak, J. J. (2006). The Workplace as a Context for Cross‐Cutting Political Discourse. Journal of Politics, 68(1), 140-155.

Ostrom, E. (1994). Covenants, collective action, and common-pool resources.

Schelling, T. C. (1971). Dynamic models of segregation. Journal of Mathematical Sociology, 1(2), 143-186.

Spence, M. (1973). Job market signaling. The Quarterly Journal of Economics, 87(3), 355-374.

Noncooperation and the Latency of Weak Ties

As Centola and Macy summarize, the key insight of Granovetter’s seminal 1973 work (Granovetter, 1973) is that ties which are “weak in the relational sense – that the relations are less salient or frequent – are often strong in the structural sense – that they provide shortcuts across social topology” (Centola & Macy, 2007). While this remains an important sociological finding, there are important reasons to be wary of generalizing too far: such ‘weak ties’ may not be sufficient for diffusion in complex contagion (Centola & Macy, 2007) and identification of such ties is highly dependent on how connections are defined and measured (Grannis, 2010).

Furthermore, recent studies probing just how far ‘the strength of weak ties’ can be taken allude to another underexplored concern: the latency of ties. For example, Grannis points to the oft glossed-over result of Milgram’s small world experiment (Milgram, 1967): 71% of the chains did not make it to their target. As Milgram explains, “chains die before completion because on each remove a certain portion of participants simply do not cooperate and fail to send the folder. Thus the results we obtained on the distribution of chain lengths occurred within the general drift of a decay curve.” Milgram and later Dodds et al. (Dodds, Muhamad, & Watts, 2003) correct for this decay by including in the average path length an estimation of how long uncompleted paths would be if they had in fact been completed. For his part, Grannis argues that the failure caused by such noncooperation is exactly the point: “it calls into question what efficiency, if any, could be derived from these hypothesized, noncooperative paths” (Grannis, 2010).
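To see how such a correction can work, here is a minimal sketch in the spirit of that reweighting – not Dodds et al.’s exact procedure, and with invented numbers – where completed chains are weighted up by their probability of having survived attrition:

```python
# Attrition-corrected chain lengths: if a fraction `attrition` of participants
# drops out at each step, a completed chain of length L is observed with
# probability (1 - attrition) ** L, so observed counts are reweighted upward.
# The counts and the attrition rate below are invented for illustration.
observed = {3: 40, 5: 60, 7: 30, 9: 10}   # chain length -> completed chains
attrition = 0.35                          # assumed per-step dropout rate

corrected = {L: n / (1 - attrition) ** L for L, n in observed.items()}

def mean_length(counts):
    return sum(L * n for L, n in counts.items()) / sum(counts.values())

print(round(mean_length(observed), 2))    # raw average, biased downward
print(round(mean_length(corrected), 2))   # corrected average is noticeably longer
```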

I call this a problem of latency because one can imagine that social ties aren’t always reliably activated. Rather, activation may occur as a function of relationship strength and task burden, or may simply vary stochastically. In their global search email task, Dodds et al. find that only 25% of self-registered participants actually initiated a chain, whereas 37% of subsequent participants – those who were recruited by an acquaintance of some sort – did carry on the chain (Dodds et al., 2003). They attribute this difference to the very social relations they are studying: who does the asking matters.

In their survey of non-participants, the authors further find that “less than 0.3% of those contacted claimed that they could not think of an appropriate recipient, suggesting that lack of interest or incentive, not difficulty, was the main reason for chain termination.” Again, this implies that not all asks are equal – the noncomplying participants could have continued the chain, but they chose not to. In economic terms, it seems that the activation cost – the cost of continuing the chain – was greater than the reward for participating.

One can imagine similar interactions in the job-search domain. Passing on information about a job opening may be relatively low-cost, while actively recommending a candidate for a position may come with certain risks (Smith, 2005). In many ways, the informational nature of a job search is reminiscent of ‘top-of-mind’ marketing: it is good if customers choose your product when faced with a range of options, but ideally they would think of you first; they would choose to purchase your product before even being confronted with alternatives. In the job-search scenario, unemployed people are often encouraged to reach out to as many contacts as they can, in order to keep their name top-of-mind so that these ‘weak ties’ – who otherwise may not have thought of them – do forward information when learning of job openings. Granovetter does not examine the job search process in detail, but his findings – that among people who found a new job through a contact, 55.6% saw that contact occasionally while another 27.8% saw that contact only rarely (Granovetter, 1973) – imply that information was most likely diffused by a job-seeker requesting information. In this case, the job seeker had to activate a latent weak tie before receiving its benefit.

Arguably, the concept of latency is built into the very definition of a weak tie – weak ties are weak because their latency makes them easier to maintain than strong, always-active ties. Yet, the latency of weak ties, or more precisely, their activation costs, are generally not considered. In his detailed study of three distinct datasets, Grannis finds that a key problem in network interpretation is that connections’ temporal nature is often overlooked (Grannis, 2010). I would argue that a related challenge is that the observed relations are considered to always be active. Using Grannis’ example, there is nothing inherently wrong with the suggestion that ideas may flow from A to C over the course of 40 years; the problem comes in interpreting this as a simple network where C’s beliefs directly trace to A. Indeed, in the academic context, it’s quite reasonable to think that an academic ‘grandparent’ may influence one’s scholarly work – but that influence comes through in some ideas and not others; it comes through connections whose strength waxes and wanes. To consider these links always present, and always active, is indeed to neglect the true nature of the relationship.

Ultimately, Grannis argues that the core problem in many network models is that the phase transitions which govern global network characteristics are sensitive to local-level phenomena: once the measured average degree crosses 1, a giant component will emerge. Given this sensitivity, it becomes essential to consider the latency of weak network ties. A candidate who doesn’t activate weak ties may never find a job, and a message-passing task for which participants feel unmotivated may never reach completion. In his pop-science article, Malcolm Gladwell argues that some people just feel an inherent motivation to maintain more social ties than others (Gladwell, 1999). Given such individual variation in number of ties and willingness to activate ties, it seems clear that the latency of weak ties needs further study; otherwise, as Grannis warns, our generalizations could lead to “fundamental errors in our understanding of the effects of network topology on diffusion processes” (Grannis, 2010).
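The sensitivity Grannis describes is easy to see in a toy random-graph example (the parameters are illustrative): the share of nodes in the largest component changes dramatically as the average degree crosses 1.

```python
# Largest-component size in an Erdos-Renyi random graph as average degree
# crosses 1: small local differences produce very different global structure.
import networkx as nx

n = 10_000
for avg_degree in (0.8, 1.0, 1.2, 1.5):
    g = nx.gnp_random_graph(n, avg_degree / n, seed=1)
    largest = max(nx.connected_components(g), key=len)
    print(avg_degree, round(len(largest) / n, 3))
```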

___

Centola, D., & Macy, M. (2007). Complex contagions and the weakness of long ties. American Journal of Sociology, 113(3), 702-734.

Dodds, P. S., Muhamad, R., & Watts, D. J. (2003). An experimental study of search in global social networks. Science, 301(5634), 827-829.

Gladwell, M. (1999). Six degrees of Lois Weisberg.

Grannis, R. (2010). Six Degrees of “Who Cares?” American Journal of Sociology, 115(4), 991-1017.

Granovetter, M. S. (1973). The strength of weak ties. American Journal of Sociology, 1360-1380.

Milgram, S. (1967). The small world problem. Psychology Today, 2(1), 60-67.

Smith, S. S. (2005). “Don’t put my name on it”: Social Capital Activation and Job-Finding Assistance among the Black Urban Poor. American Journal of Sociology, 111(1), 1-57.
