The Chasm and the Bridge: Modes of Considering Social Network Structure

In their respective work, Granovetter and Burt explore roughly the same phenomenon – heterogeneous connection patterns within a social network. However, they each choose different metaphors to describe that phenomenon, leading to differences in how one should understand and interpret social network structure.

Perhaps most famously, Granovetter argues for the ‘strength of weak ties,’ finding that it is the weak, between-group ties which best support information diffusion – as studied for the specific task of finding a job (Granovetter, 1973). For his part, Burt prefers to focus on ‘structural holes’: rather than considering a tie which spans two groups, Burt focuses on the void it covers. As Burt describes, “The weak tie argument is about the strength of relationships that span the chasm between two social clusters. The structural hole argument is about the chasm spanned” (Burt, 1995). Burt further argues that his concept is the more valuable of the two; that ‘structural holes’ are more informative than ‘weak ties.’ “Whether a relationship is strong or weak,” Burt argues, “it generates information benefits when it is a bridge over a structural hole.”

While Granovetter’s weak tie concept pre-dates Burt’s structural holes, Granovetter’s paper already implies a rebuttal to this argument. Illustrating with the so-called ‘forbidden triad,’ Granovetter argues that in social networks your friends are likely to be friends with each other. That is, if person A is strongly linked to both B and C, it is unlikely that B and C have no connection. Granovetter finds this forbidden triad is uncommon in social networks, arguing that “it follows that, except under unlikely conditions, no strong tie is a bridge.” This implies that Granovetter’s argument is not precisely about identifying whether a relationship is strong or weak, as Burt says, but rather about identifying bridges over structural holes. It is merely the fact that those bridges are almost always weak which then leads to Granovetter’s interest in the strength of a tie.
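To make the structural point concrete, here is a minimal sketch – my own illustration, not anything from either paper – using the networkx library: build two tight clusters joined by a single weak tie, then ask which edges are bridges. When triadic closure holds among the strong ties, the only bridges that turn up are the weak ones.

```python
import networkx as nx

# Toy network: two closed triads joined by a single weak tie (a made-up example).
G = nx.Graph()
strong = [("A", "B"), ("B", "C"), ("A", "C"),   # cluster 1
          ("D", "E"), ("E", "F"), ("D", "F")]   # cluster 2
weak = [("C", "D")]                             # the tie spanning the two clusters
G.add_edges_from(strong, strength="strong")
G.add_edges_from(weak, strength="weak")

# A bridge is an edge whose removal disconnects part of the graph --
# structurally, an edge that spans a hole between groups.
bridges = {frozenset(e) for e in nx.bridges(G)}

for u, v, data in G.edges(data=True):
    if frozenset((u, v)) in bridges:
        print(f"{u}-{v} is a bridge, and it is a {data['strength']} tie")
```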

This seems to indicate that there is little difference between looking for weak ties or for structural holes: what matters for successful information exchange is that a hole is bridged, and it is only a matter of semantics whether you consider the hole or consider the bridge. Yet in Burt’s later work, he further develops the idea of a hole, building the argument for why this mode of thinking is important. He describes French CEO René Fourtou’s observation that the best ideas were stimulated by people from divergent disciplines. “Fourtou emphasized le vide – literally, the emptiness; conceptually, structural holes – as essential to coming up with new ideas: ‘Le vide has a huge function in organizations…shock comes when different things meet. It’s the interface that’s interesting…If you don’t leave le vide, you have no unexpected things, no creation.’” (Burt, 2004)

It is this last piece which is missing from Granovetter’s conception – Granovetter argues that bridges are valuable because they span holes; Burt argues that the holes themselves have value. You must leave le vide.

Hayek writes that the fundamental economic challenge of society is “a problem of the utilization of knowledge not given to anyone in its totality” (Hayek, 1945). If you consider each individual to have unique knowledge, the question of economics becomes how to best leverage this disparate knowledge for “rapid adaptation to changes in the particular circumstances of time and place.” With this understanding, any network which effectively disseminates information would be optimal for solving economic challenges.

Imagine a fully connected network, or one sufficiently connected with weak ties. In Granovetter’s model – assuming no limit to a person’s capacity to maintain ties – such a network would be sufficient for solving complex problems. If you have full, easy access to every other individual in the network, then you would learn about job openings or otherwise have the information needed to engage in complex, collective problem-solving. A weak tie only provides benefit if it brings information from another community; if it spans a structural hole.

In Burt’s model, however, such a network is not enough – an optimal network must contain le vide; it must have structural holes. Research by Lazer and Friedman (Lazer & Friedman, 2007) gives insight into how these structural holes add value. In an agent-based simulation, Lazer and Friedman examine the relationship between group problem-solving and network structure. Surprisingly, they find that those networks which are most efficient at disseminating information – such as a fully connected network – are better in the short run but have lower long-term performance. An inefficient network, on the other hand, one with structural holes, “maintains diversity in the system and is thus better for exploration than an efficient network, supporting a more thorough search for solutions in the long run.” This seems to support Burt’s thesis that it is not just the ability to bridge, but the very existence of holes that matters.
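As a rough illustration of that dynamic, here is a toy agent-based sketch in the spirit of Lazer and Friedman’s model – the landscape, parameters, and update rule are my own simplifications, not their actual specification. Agents sit on a network, copy the solution of a better-scoring neighbor when one exists, and otherwise tinker with their own solution; an ‘efficient’ complete graph is compared against an ‘inefficient’ ring.

```python
import random
import networkx as nx

N_BITS = 15
N_AGENTS = 50

def make_landscape(seed=0):
    """NK-style rugged landscape (K=1): each bit's payoff depends on itself and
    its right-hand neighbor, which produces multiple local optima."""
    rng = random.Random(seed)
    table = [{(a, b): rng.random() for a in (0, 1) for b in (0, 1)}
             for _ in range(N_BITS)]
    def score(sol):
        return sum(table[i][(sol[i], sol[(i + 1) % N_BITS])] for i in range(N_BITS))
    return score

def simulate(G, score, rounds=100, seed=1):
    rng = random.Random(seed)
    sols = {v: [rng.randint(0, 1) for _ in range(N_BITS)] for v in G}
    for _ in range(rounds):
        new = {}
        for v in G:
            best = max(G[v], key=lambda u: score(sols[u]))
            if score(sols[best]) > score(sols[v]):
                new[v] = sols[best][:]                # exploit: copy a better neighbor
            else:
                cand = sols[v][:]
                cand[rng.randrange(N_BITS)] ^= 1      # explore: flip one bit
                new[v] = cand if score(cand) >= score(sols[v]) else sols[v]
        sols = new
    return sum(score(s) for s in sols.values()) / len(sols)

score = make_landscape()
print("efficient (complete graph):", round(simulate(nx.complete_graph(N_AGENTS), score), 3))
print("inefficient (ring lattice):", round(simulate(nx.cycle_graph(N_AGENTS), score), 3))
```

In a sketch like this, the pattern to look for is the one Lazer and Friedman report: faster early convergence on the complete graph, and better long-run performance on the sparser, more ‘inefficient’ ring, which keeps more diverse solutions in play.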

There are, of course, drawbacks to these structural holes as well. Burt finds that structural holes help generate good ideas but – as the work of Lazer and Friedman would imply – hurt their dissemination and adoption (Burt, 2004). So it remains to be seen whether the ‘strength of structural holes,’ as Burt writes, is sufficient to overcome their drawbacks. But regardless of the normative value of these holes, Burt is right to argue that this mode of thinking should be side-by-side with Granovetter’s. For thorough social network analysis, it is not enough to consider the bridge, one must consider the chasm. Le vide matters.

___

Burt, R. S. (1995). Structural Holes: The Social Structure of Competition. Belknap Press.

Burt, R. S. (2004). Structural holes and good ideas. American Journal of Sociology, 110(2), 349-399.

Granovetter, M. S. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360-1380.

Hayek, F. A. (1945). The use of knowledge in society. The American Economic Review, 35(4), 519-530.

Lazer, D., & Friedman, A. (2007). The network structure of exploration and exploitation. Administrative Science Quarterly, 52(4), 667-694.

 


Epistemic Networks and Idea Exchange

Earlier this week, I gave a brief lightning talk as part of the fall welcome event for Northeastern’s Digital Scholarship Group and NULab for Texts, Maps, and Data. In my talk, I gave a high-level introduction to the motivation and concept behind a research project I’m in the early stages of formulating with my advisor Nick Beauchamp and my Tufts colleague Peter Levine.

I didn’t write out my remarks and my slides don’t contain much text, but I thought it would be helpful to try to recreate those remarks here:

I am interested broadly in the topic of political dialogue and deliberation. When I use the term “political” here, I’m not referring exclusively to debate between elected officials. Indeed, I am much more interested in politics as associated living; I am interested in the conversations between everyday people just trying to figure out how we live in this world together. These conversations may be structured or unstructured.

With this group of participants in mind, the next question is to explore how ideas spread. There is a great model borrowed from epidemiology that looks at spreading on networks. Considering social networks, for example, you can imagine tracking the spread of a meme across Facebook as people share it with their friends, who then share it with friends of friends, and so on.

This model is not ideal in the context of dialogue. Take the interaction between two people, for example. If my friend shares a meme, there’s some probability that I will see it in my feed and some probability that I won’t. But those are basically the only two options: either I see it or I don’t see it.
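For contrast, that binary model is easy to write down. Here is a minimal, independent-cascade-style sketch – the graph and the probability are arbitrary placeholders – where each node that has seen the meme gets one chance to pass it to each neighbor, and each neighbor either sees it or doesn’t.

```python
import random
import networkx as nx

def cascade(G, seed_node, p=0.3, seed=42):
    """Independent-cascade-style spread: each node that sees the meme gets one
    chance to pass it to each neighbor with probability p; each neighbor
    either sees it or doesn't -- those are the only two outcomes."""
    rng = random.Random(seed)
    seen, frontier = {seed_node}, [seed_node]
    while frontier:
        nxt = []
        for v in frontier:
            for u in G[v]:
                if u not in seen and rng.random() < p:
                    seen.add(u)
                    nxt.append(u)
        frontier = nxt
    return seen

G = nx.erdos_renyi_graph(100, 0.05, seed=1)
print(f"{len(cascade(G, 0))} of {G.number_of_nodes()} nodes saw the meme")
```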

With dialogue, I may understand you, I may not understand you, I may think I understand you…etc. Furthermore, dialogue is a back-and-forth process. And while a meme is either shared or not shared, in the back and forth of dialogue, there is no certainty that an idea is actually exchanged or that a comment had a predictable effect.

This raises the challenging question of how to model dialogue as a process at the local level. This initial work considers an individual’s epistemic network – a network of ideas and beliefs which models a given individual’s reasoning process. The act of dialogue, then, is no longer an exchange between two (or more) individuals; it is an exchange between two (or more) epistemic networks.

There are, of course, a lot of methodological challenges and questions raised by this approach. Most fundamentally, how do you model a person’s epistemic network? There are multiple, divergent ways to do this, from which you can imagine getting very different – but equally valid – results.

The first method – which has been piloted several times by Peter Levine – is a guided reflection process in which individuals respond to a series of prompts in order to self-identify the nodes and links of their epistemic network. The second method involves the automatic extraction of a semantic network from a written reflection or discussion transcript.

I am interested in exploring both of these methods – ideally with the same people, in order to compare both construction models. Additionally, once epistemic networks are constructed, through either approach, you can evaluate and compare their change over time.
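As a sketch of what that comparison might look like computationally – with entirely invented concept nodes, since no data has been collected yet – one could simply overlap the node and edge sets of two constructed networks:

```python
import networkx as nx

# Hypothetical epistemic networks for the same person, built two ways
# (self-reported vs. extracted from text); the concepts here are invented.
reported = nx.Graph([("community", "voice"), ("voice", "dialogue"),
                     ("dialogue", "respect"), ("community", "respect")])
extracted = nx.Graph([("community", "voice"), ("dialogue", "respect"),
                      ("dialogue", "evidence")])

shared_nodes = set(reported) & set(extracted)
shared_edges = set(map(frozenset, reported.edges())) & set(map(frozenset, extracted.edges()))
all_edges = set(map(frozenset, reported.edges())) | set(map(frozenset, extracted.edges()))

print("node overlap:", shared_nodes)
print("edge Jaccard similarity:", len(shared_edges) / len(all_edges))
```

The same overlap measures could be applied to one person’s network at two points in time to track change.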

There are a number of other research questions I am interested in exploring, such as what network topology is conducive to “good” dialogue and what interactions and conditions lead to opinion change.


Multivariate Network Exploration and Presentation

In “Multivariate Network Exploration and Presentation,” authors Stef van den Elzen and Jarke J. van Wijk introduce an approach they call “Detail to Overview via Selections and Aggregations,” or DOSA. I was going to make fun of them for naming their approach after a delicious south Indian dish, but since they comment that their name “resonates with our aim to combine existing ingredients into a tasteful result,” I’ll have to just leave it there.

The DOSA approach – and now I am hungry – aims to allow a user to explore the complex interplay between network topology and node attributes. For example, in company email data, you may wish to simultaneously examine assortativity by gender and department over time. That is, you may need to consider both structure and multivariate data.
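To give a sense of what assortativity by an attribute means computationally, here is a tiny networkx illustration with an invented email graph and made-up departments – not DOSA itself, which is an interactive visual tool:

```python
import networkx as nx

# Toy email network with invented node attributes, just to illustrate the
# kind of attribute-plus-structure question DOSA targets.
G = nx.Graph([("ann", "bob"), ("ann", "cara"), ("bob", "dev"),
              ("cara", "dev"), ("dev", "eli"), ("eli", "fay")])
departments = {"ann": "sales", "bob": "sales", "cara": "eng",
               "dev": "eng", "eli": "eng", "fay": "sales"}
nx.set_node_attributes(G, departments, "department")

# How strongly do emails stay within a department? (+1 = always, -1 = never)
print(nx.attribute_assortativity_coefficient(G, "department"))
```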

This is a non-trivial problem, and I particularly appreciated van den Elzen and van Wijk’s practical framing of why this is a problem:

“Multivariate networks are commonly visualized using node-link diagrams for structural analysis. However, node-link diagrams do not scale to large numbers of nodes and links and users regularly end up with hairball-like visualizations. The multivariate data associated with the nodes and links are encoded using visual variables like color, size, shape or small visualization glyphs. From the hairball-like visualizations no network exploration or analysis is possible and no insights are gained or even worse, false conclusions are drawn due to clutter and overdraw.”

YES. From my own experience, I can attest that this is a problem.

So what do we do about it?

The authors suggest a multi-pronged approach which allows non-expert users to select nodes and edges of interest, simultaneously see a detail and infographic-like overview, and to examine the aggregated attributes of a selection.

Overall, this approach looks really cool and very helpful. (The paper did win the “best paper” award at the IEEE Information Visualization 2014 Conference, so perhaps that shouldn’t be that surprising.) I was a little disappointed that I couldn’t find the GUI implementation of this approach online, though, which makes it a little hard to judge how useful the tool really is.

From their screenshots and online video, however, I find that while this is a really valiant effort to tackle a difficult problem, there is still more work to do in this area. The challenge with visualizing complex networks is indeed that they are complex, and while DOSA gives a user some control over how to filter and interact with this complexity, there is still a whole lot going on.

While I appreciate the inclusion of examples and use cases, I would have also liked to see a user design study evaluating how well their tool met their goal of providing a navigation and exploration tool for non-experts. I also think that the issues of scalability with respect to attributes and selection that they raise in the limitations section are important topics which, while reasonably beyond the scope of this paper, ought to be tackled in future work.


Representing the Structure of Data

To be perfectly honest, I had never thought much about graph layout algorithms. You hit a button in Gephi or call a networkx function, some magic happens, and you get a layout. If you don’t like the layout generated, you hit the button again or call a different function.

In one of my classes last year, we generated our own layouts using eigenvectors of the Laplacian. This gave me a better sense of what happens when you use a layout algorithm, but I still tended to think of it as a step which takes place at the end of an assignment; a presentation element which can make your research accessible and look splashy.
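For reference, here is roughly what that exercise looks like in code: a bare-bones spectral layout that uses the eigenvectors for the second- and third-smallest eigenvalues of the graph Laplacian as coordinates (networkx’s built-in spectral_layout does essentially this, with more care):

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()

# Graph Laplacian L = D - A
L = nx.laplacian_matrix(G).toarray().astype(float)
eigvals, eigvecs = np.linalg.eigh(L)

# Skip the trivial constant eigenvector (eigenvalue 0) and use the next two
# eigenvectors as x/y coordinates -- a bare-bones spectral layout.
pos = {node: (eigvecs[i, 1], eigvecs[i, 2]) for i, node in enumerate(G.nodes())}

# networkx's built-in equivalent, for comparison:
builtin = nx.spectral_layout(G)
print(pos[0], builtin[0])
```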

In my visualization class yesterday, we had a guest lecture by Daniel Weidele, PhD student at the University of Konstanz and researcher at IBM Watson. He covered the fundamentals of select network layout algorithms but also spoke more broadly about the importance of layout. A network layout is more than a visualization of a collection of data; it is the final stage of a pipeline which attempts to represent some phenomenon. The whole modeling process abstracts a phenomenon into a concept, and then represents that concept as a network layout.

When you’re developing a network model for a phenomenon, you ask questions like “who is your audience? What are the questions we hope to answer?” Daniel pointed out that you should ask similar questions when evaluating a graph layout; the question isn’t just “does this look good?” You should ask: “Is this helpful? What does it tell me?”

If there are specific questions you are asking your model, you can use a graph layout to get at the answers. You may, for example, ask: “Can I predict partitioning?”

This is what makes modern algorithms such as stress optimization so powerful – it’s not just that they produce pretty pictures, or even that the layouts appropriately disambiguate nodes, but that they actually represent the structure of the data in a meaningful way.

In his work with IBM Watson, Weidele indicated that a fundamental piece of their algorithm design process is building algorithms based on human perception. For a test layout, try to understand what a human likes about it, try to understand what a human can infer from it – and then try to understand the properties and metrics which made that human’s interpretation possible.


Adventures in Network Science

Every time someone asks me how school is going, I have the tendency to reply with an enthusiastic but nondescript, “AWESOME!” Or, as one of my classmates has taken to saying, “WHAT A TIME TO BE ALIVE!”

Truly, it is a privilege to be able to experience such awe.

As it turns out, however, these superlatives aren’t particularly informative. And while I’ve struggled to express the reasons for my raw enthusiasm in more coherent terms, I will attempt to do so here.

First, my selected field of study, network science, is uniquely interdisciplinary. I can practically feel you rolling your eyes at that tiredly clichéd turn of phrase – yes, yes, every program in higher education is uniquely interdisciplinary these days – but, please, bear with me.

I work on a floor with physicists, social scientists, and computer scientists; with people who study group dynamics, disease spreading, communication, machine learning, social structures, neuroscience, and numerous other things I haven’t even discovered yet. Every single person is doing something interesting and cool.

I like to joke that the only thing on my to-do list is to rapidly acquire all of human knowledge.

In the past year, I have taken classes in physics, mathematics, computer science, and social science. I have read books on philosophy, linguistics, social theory, and computational complexity – as well as, of course, some good fiction.

I can now trade nerdy jokes with people from any discipline.

And I’ve been glad to develop this broad and deep knowledge base. In my own work, I am interested in the role of people in their communities. More specifically, I’m looking at deliberation, opinion change, and collective action. That is – we each are a part of many communities, and our interactions with other people in those communities fundamentally shape the policies, institutions, and personalities of those communities.

These topics have been tackled in numerous disciplines, but in disparate efforts which have not sufficiently learned from each other’s progress. Deliberative theory has thought deeply about what good political dialogue looks like; behavioral economics has studied how individual choices result in larger implications and institutions; and computer science has learned how to identify startling patterns in complex datasets. But only network science brings all these elements together; only network science draws on the full richness of this knowledge base to look more deeply at interaction, connection, dynamics, and complexity.

But perhaps the most exciting thing about this program is that it truly allows me to find my own path. I’m not training to replicate some remarkable scholar who already exists – I am learning from many brilliant scholars what valuable contributions I will uniquely be able to make.

Because as much as I have to learn from everyone I meet – we all have something to learn from each other.

There are other programs in data science or network analysis, but this is the only place in the world where I can truly explore the breadth of network science and discover what kind of scholar I want to be.

 

I joke about trying to acquire all of human knowledge because, of course, I cannot learn everything – no one person can. But we can each cultivate our own rich understanding of the puzzle. And through the shared language of network science, we can share our knowledge, work together, and continue to chip away at understanding the great mysteries of the universe.


Semantic and Epistemic Networks

I am very interested in modeling a person’s network of ideas. What key concepts or values particularly motivate their thinking and how are those ideas connected?

I see this task as being particularly valuable in understanding and improving civil and political discourse. In this model, dialogue can be seen as an informal and iterative process through which people think about how their own ideas are connected, reason with each other about what ideas should be connected, and ultimately revise (or don’t) their way of thinking by adding or removing idea nodes or connections between them.

This concept of knowledge networks – epistemic networks – has been used by David Williamson Shaffer to measure the development of students’ professional knowledge – e.g., their ability to “think like an engineer” or “think like an urban planner.” More recently, Peter Levine has advanced the use of epistemic networks in “moral mapping” – modeling a person’s values and ways of thinking.

This work has made valuable progress, but a critical question remains: just what is the best way to model a person’s epistemic network? Is there an unbiased way to determine the most critical nodes? Must we rely on a given person’s active reasoning to determine the links? In the case of multi-person exchanges, what determines whether two concepts are the “same”? Is semantic similarity sufficient, or must individuals actively discuss and determine that they do each indeed mean the same thing? If you make adjustments to a visualized epistemic network following a discussion, can we distinguish genuine changes in view from corrections due to accidental omission?

Questions and challenges abound.

But these problems aren’t necessarily insurmountable.

As a starting place, it is helpful to think about semantic networks. In the 1950s, Richard H. Richens originally proposed semantic networks as a tool to aid in machine translation.

“I refer now to the construction of an interlingua in which all the structural peculiarities of the base language are removed and we are left with what I shall call a ‘semantic net’ of ‘naked ideas,'” he wrote. “The elements represent things, qualities or relations…A bond points from a thing to its qualities or relations, or from a quality or relation to a further qualification.”

Thus, from their earliest days, semantic networks were seen as somewhat synonymous with epistemic networks: words presumably represent ideas, so it logically follows that a network of words is a network of ideas.

This may well be true, but I find it helpful to separate the two ideas. A semantic network is observed; an epistemic network is inferred.

That is, through any number of advanced Natural Language Processing algorithms, it is essentially possible to feed text into a computer and have it return a network of words which are connected in that text.

You can imagine some simple algorithms for accomplishing this: perhaps two words are connected if they co-occur in the same sentence or paragraph. Removing stop words prevents your retrieved network from being over-connected by instances of “the” or “a.” Part-of-speech tagging – a relatively simple task thanks to huge databanks of tagged corpora – can bring an additional level of sophistication. Perhaps we want to know which subjects are connected to which objects. And there are even cooler techniques relying on probabilistic models or projections of the corpus into k-space, where k is the number of unique words.
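Here is a minimal sketch of the simplest variant – sentence-level co-occurrence with stop words removed. The stop word list is a tiny stand-in, and real work would add lemmatization and the part-of-speech tagging mentioned above:

```python
import itertools
import re
import networkx as nx

STOP_WORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "it", "that"}  # tiny stand-in list

def semantic_network(text):
    """Connect two words if they co-occur in the same sentence (stop words removed)."""
    G = nx.Graph()
    for sentence in re.split(r"[.!?]+", text.lower()):
        words = {w for w in re.findall(r"[a-z']+", sentence) if w not in STOP_WORDS}
        for w1, w2 in itertools.combinations(sorted(words), 2):
            weight = G.edges[w1, w2]["weight"] + 1 if G.has_edge(w1, w2) else 1
            G.add_edge(w1, w2, weight=weight)
    return G

text = "Dialogue builds community. Community shapes dialogue and identity."
G = semantic_network(text)
print(sorted(G.edges(data="weight")))
```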

These models typically assume some type of unobserved data – e.g., we observe a list of words and use that to discover the unobserved connections – but colloquially speaking, semantic networks are observed in the sense that they can be drawn out directly from a text. They exist in some indirect but concrete way.

And while it seems fair to assume that words do indeed have meaning, it still takes a bit of a leap to take a semantic network as synonymous with an epistemic network.

Consider an example: if we were to take some great novel and cleverly reduce it to a semantic network, would the resulting network illustrate exactly what the author was intending?

The fact that it’s even worth asking that question indicates to me that the two are not intrinsically one and the same.

Arguably, this is fundamentally a matter of degrees. It seems reasonable to say that, unless our algorithm was terribly off, the semantic network can tell us something interesting and worthwhile about the studied text. Yet it seems like a stretch to claim that such a simplistic representation could accurately and fully capture the depth of concepts and connections an author was seeking to convey.

If that were the case, we could study networks instead of reading books and – notably – everyone would agree on their meaning.

A semantic network, then, can be better considered as a representation of an epistemic network. It takes reason and judgement to interpret a semantic network epistemically.

Perhaps it is sufficient to be aware of the gap between these two – to know that interpreting a semantic network epistemically necessarily means introducing bias and methodological subjectivity.

But I wonder if there’s something better we can do to model this distinction – some better way to capture the complex, dynamic, and possibly conflicting essence of a more accurately epistemic network.


Networks of Connected Concepts

Yesterday, I ran across a fascinating 1993 paper by sociologist Kathleen Carley, Coding Choices for Textual Analysis: A Comparison of Content Analysis and Map Analysis.

Using the now antiquated term “map analysis” – what I would call semantic network analysis today – Carley explains:

An important class of methods that allows the researcher to address textual meaning is map analysis. Where content analysis typically focuses exclusively on concepts, map analysis focuses on concepts and the relationships between them and hence on the web of meaning contained within the text. While no term has yet to emerge as canonical, within this paper the term map analysis will be used to refer to a broad class of procedures in which the focus is on networks consisting of connected concepts rather than counts of concepts.

This idea is reminiscent of the work of Peter Levine and others (including myself) on moral mapping – representing an individual’s moral world view through a thoughtfully constructed network of ideas and values.

Of course, a range of methodological challenges are immediately raised in graphing a moral network – what do you include? What constitutes a link? Do links have strength or directionality? Trying to compare two or more people’s networks raises even more challenges.

While Carley is looking more broadly than moral networks, her work similarly aims to extract meaning, concepts, and connections from a text – and faces similar methodological challenges:

By taking a map-analytic approach, the researcher has chosen to focus on situated concepts. This choice increases the complexity of the coding and analysis process, and places the researcher in the position where a number of additional choices must be made regarding how to code the relationship between concepts.

On their face, these challenges may seem insurmountable – could complex concepts such as morality ever be coded and analyzed in such a way as to be broadly interpretable while maintaining the depth of their meaning?

This conundrum is at the heart of the philosophical work of Ludwig Wittgenstein, and is far from being resolved philosophically or empirically.

Carley is hardly alone in not having a perfect resolution to this dilemma, but she does offer an interesting insight in contemplating it:

…by focusing on the structure of relationships between concepts, the attention of the researcher is directed towards thinking about “what am I really assuming in choosing this coding scheme?” Consequently, researchers may be more aware of the role that their assumptions are playing in the analysis and the extent to which they want to, and do, rely on social knowledge.

A network approach to these abstract concepts may indeed be inextricably biased – but, then again, all tools of measurement are. The benefit, then, of undertaking the complex work of coding relationships as well as concepts is that the researcher is more acutely aware of the bias.


The Benefits of Inefficiency

Political scientist Markus Prior has long argued that inefficiency benefits democracy. In much of his work studying the effects of media on political knowledge and participation, Prior has found that an inefficient media environment – in which people have little choice over their entertainment options – is actually conducive to improving political knowledge.

In Efficient Choice, Inefficient Democracy?, Prior explains: “Yet while a sizable segment of the population watches television primarily to be entertained, and not to obtain political information, this does not necessarily imply that this segment is not also exposed to news. When only broadcast television is available, the audience is captive and, to a certain extent, watches whatever is offered on the few television channels. Audience research has confirmed a two-stage model according to which people first decide to watch television and then pick the available program they like best.”

That is, when few media choices are available, people tend to tune in for entertainment purposes. If news is the only thing that’s on, they’ll watch that over turning the TV off.

In a highly efficient media environment, however, people can navigate directly to their program of choice. Some people may choose informational sources for entertainment, but the majority of people will be able to avoid exposure to any news, seeing only the specific programming they are interested in. (I should mention here that much of Prior’s data is drawn from the U.S. context.)

As Prior further outlines in Post-Broadcast Democracy, an inefficient media environment therefore promotes what Prior calls “by-product learning”: people learn about current events whether they want to or not. Like the pop song you learn at the grocery store, inefficient environments lead to exposure to topics you wouldn’t explore yourself.

Interestingly, it seems that a similar effect may take place in the context of group problem solving.

In a problem-solving setting, efficiency can be considered as a measure of communication quality. In the most efficient setting, all members of a group would share the exact same knowledge; in an inefficient setting group members wouldn’t communicate at all.

Now imagine this group is confronted with a problem and works together to find the best solution they can.

As outlined by David Lazer and Allan Friedman, this context can be described as a trade-off between exploration and exploitation: if someone in your group has a solution that seems pretty good, your group may want to exploit that solution in order to reap the benefits it provides. If everyone’s solution seems pretty mediocre, you may want to explore and look for additional options.

Since you have neither infinite time nor infinite resources, you can’t do both. You have to choose which option will ultimately result in the best solution.

The challenge here is that the globally optimal solution is hard to identify. In a bumpy solution landscape, a good solution may simply point to a local optimum, not to the best solution you can find.
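A tiny illustration of that trap: hill-climb on a made-up one-dimensional ‘landscape’ from every possible starting point and count how many different local optima you get stuck on.

```python
import random

rng = random.Random(3)
# A small random "solution landscape": index = solution, value = how good it is.
landscape = [rng.random() for _ in range(30)]

def hill_climb(start):
    """Greedily move to a better neighboring solution until none exists."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = max(neighbors, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:
            return i                      # stuck at a local optimum
        i = best

tops = {hill_climb(s) for s in range(len(landscape))}
print("local optima found:", sorted(tops))
print("global optimum:", landscape.index(max(landscape)))
```

Most starting points end up somewhere other than the global optimum – which is exactly why it matters whether a group rushes to exploit the first good solution it hears about.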

This raises the question: is it better to have an efficient network where members of a group can easily share and disperse information, or is it better to have an inefficient network where information sharing is hard and information dispersal is slow?

Interestingly, this is an open research question which has seen mixed results.

Intuition seems to indicate that efficient information sharing would be good – allowing a group to seamlessly coordinate. But, there’s also some indication that inefficiency is better – encouraging more exploration and therefore a more diverse set of possible solutions. The risk is that a group with an efficient communications network will actually converge on a local optimum – taking the first good option available, rather than taking the time to fully explore for the global optimum.


The Nature of Technology

I recently finished reading W. Brian Arthur’s The Nature of Technology, which explores what technology is and how it evolves.

Evolves is an intentional word here; the concept is at the core of Arthur’s argument. Technology is not a passive thing which only grows in spurts of genius inspiration – it is a complex system which is continuously growing, changing, and – indeed – evolving.

Arthur writes that he means the term evolution literally – technology builds itself from itself, growing and improving through the novel combination of existing tools – but he is clear that the process of evolution does not imply that technology is alive.

“…To say that technology creates itself does not imply it has any consciousness, or that it uses humans somehow in some sinister way for its own purposes,” he writes. “The collective of technology builds itself from itself with the agency of human inventors and developers much as a coral reef builds itself from the activities of small organisms.”

Borrowing from Humberto Maturana and Francisco Varela, Arthur describes this process as autopoiesis – self-creation.

This is a bold claim.

To consider technology as self-creating changes our relationship with the phenomenon. It is not some disparate set of tools which occasionally benefits from the contributions of our best thinkers; it is a growing body of interconnected skills and knowledge which can be infinitely combined and recombined into increasingly complex approaches.

The idea may also be surprising. An iPhone 6 may clearly have evolved from an earlier model, which in turn may owe its heritage to previous computer technology – but what relationship does a modern cell phone have with our earliest tools of rocks and fire?

In Arthur’s reckoning, with a complete inventory of technological innovations one could fully reconstruct a technological evolutionary tree – showing just how each innovation emerged by connecting its predecessors.

This concept may seem odd, but Arthur makes a compelling case for it – outlining several examples of engineering problem solving which essentially boil down to applying existing solutions to novel problems.

Furthermore, Arthur explains that this technological innovation doesn’t occur in a vacuum – not only does it require the constant input of human agency, it grows from humanity’s continual “capturing” of physical phenomena.

“At the very start of technological time, we directly picked up and used phenomena: the heat of fire, the sharpness of flaked obsidian, the momentum of a stone in motion. All that we have achieved since comes from harnessing these and other phenomena, and combining the pieces that result,” Arthur argues.

Through this process of exploring our environment and iteratively using the tools we discover to further explore our environment, technology evolves and builds on itself.

Arthur concludes that “this account of the self-creation of technology should give us a different feeling about technology.” He explains:

“We begin to get a feeling of ancestry, of a vast body of things that give rise to things, of things that add to the collection and disappear from it. The process by which this happens is neither uniform nor smooth; it shows bursts of accretion and avalanches of replacement. It continually explores into the unknown, continually uncovers novel phenomena, continually creates novelty. And it is organic: the new layers form on top of the old, and creations and replacements overlap in time. In its collective sense, technology is not merely a catalog of individual parts. It is a metabolic chemistry, an almost limitless collective of entities that interact to produce new entities – and further needs. And we should not forget that needs drive the evolution of technology every bit as much as the possibilities for fresh combination and the unearthing of phenomena. Without the presence of unmet needs, nothing novel would appear in technology.”

In the end, I suppose we should not be surprised by the idea of technology’s evolution. It is a human-generated system; as complex and dynamic as any social system. It is vast, ever-changing, and at times unpredictable – but ultimately, at its core, technology is very human.
