The Nature of Technology

I recently finished reading W. Brian Arthur’s The Nature of Technology, which explores what technology is and how it evolves.

Evolves is an intentional word here; the concept is at the core of Arthur’s argument. Technology is not a passive thing which only grows in spurts of genius inspiration – it is a complex system which is continuously growing, changing, and – indeed – evolving.

Arthur writes that he means the term evolution literally – technology builds itself from itself, growing and improving through the novel combination of existing tools – but he is clear that the process of evolution does not imply that technology is alive.

“…To say that technology creates itself does not imply it has any consciousness, or that it uses humans somehow in some sinister way for its own purposes,” he writes. “The collective of technology builds itself from itself with the agency of human inventors and developers much as a coral reef builds itself from the activities of small organisms.”

Borrowing from Humberto Maturana and Francisco Varela, Arthur describes this process as autopoiesis – self-creation.

This is a bold claim.

To consider technology as self-creating changes our relationship with the phenomenon. It is not some disparate set of tools which occasionally benefits from the contributions of our best thinkers; it is a growing body of interconnected skills and knowledge which can be infinitely combined and recombined into increasingly complex approaches.

The idea may also be surprising. An iPhone 6 may clearly have evolved from an earlier model, which in turn may owe its heritage to previous computer technology – but what relationship does a modern cell phone have with our earliest tools of rocks and fire?

In Arthur’s reckoning, with a complete inventory of technological innovations one could fully reconstruct a technological evolutionary tree – showing just how each innovation emerged from the combination of its predecessors.

This concept may seem odd, but Arthur makes a compelling case for it – outlining several examples of engineering problem solving which essentially boil down to applying existing solutions to novel problems.

Furthermore, Arthur explains that this technological innovation doesn’t occur in a vacuum – not only does it require the constant input of human agency, it grows from humanity’s continual “capturing” of physical phenomena.

“At the very start of technological time, we directly picked up and used phenomena: the heat of fire, the sharpness of flaked obsidian, the momentum of a stone in motion. All that we have achieved since comes from harnessing these and other phenomena, and combining the pieces that result,” Arthur argues.

Through this process of exploring our environment and iteratively using the tools we discover to further explore our environment, technology evolves and builds on itself.

Arthur concludes that “this account of the self-creation of technology should give us a different feeling about technology.” He explains:

“We begin to get a feeling of ancestry, of a vast body of things that give rise to things, of things that add to the collection and disappear from it. The process by which this happens is neither uniform nor smooth; it shows bursts of accretion and avalanches of replacement. It continually explores into the unknown, continually uncovers novel phenomena, continually creates novelty. And it is organic: the new layers form on top of the old, and creations and replacements overlap in time. In its collective sense, technology is not merely a catalog of individual parts. It is a metabolic chemistry, an almost limitless collective of entities that interact to produce new entities – and further needs. And we should not forget that needs drive the evolution of technology every bit as much as the possibilities for fresh combination and the unearthing of phenomena. Without the presence of unmet needs, nothing novel would appear in technology.”

In the end, I suppose we should not be surprised by the idea of technology’s evolution. It is a human-generated system; as complex and dynamic as any social system. It is vast, ever-changing, and at times unpredictable – but ultimately, at its core, technology is very human.


The Downside of Matrix-Style Knowledge Imports

Sometimes it seems as though it would be most convenient to be able to Matrix-style download knowledge into my head. You know – plug yourself into a computer, run a quick program, and suddenly it’s all, “whoa, I know Kung Fu.”

Imagine for a moment that such technology were available. Imagine that instead of spending the next several years in a Ph.D. program, I could download and install everything I needed in minutes. What would that look like?

First of all, either everyone would suddenly know everything or – more likely, perhaps – inequality would be sharpened by the restriction of knowledge to those of the highest social strata.

It seems optimistic to imagine that knowledge would become free.

But, for the moment I’ll put aside musings about the social implications. What I really want to know is – what would such technology mean for learning?

I suppose it’s a bit of a philosophical question – would the ability to download knowledge obliterate learning or bring it to new heights?

I’m inclined to take such technology as the antithesis of learning. I mean that here with no value assumptions, but rather as a matter of semantics. Learning is a process, not a net measure of knowledge. Acquiring knowledge instantaneously thus virtually eliminates the process we call learning.

That seems like it may be a worthy sacrifice, though. Exchanging a little process to acquire vast quantities of human knowledge in the blink of an eye may be a fair trade.

All this, of course, assumes that knowledge is little more than a vast quantity of data. Perhaps more than a collection of facts, but still capable of being condensed down into a complex series of statistics.

There’s this funny thing, though – that is arguably not how knowledge works. At its simplest, this can be seen as the wistful claim that it’s not the destination, it’s the journey. But more profoundly –

Last week, the podcast The Infinite Monkey Cage had a show on artificial intelligence. While discussing the topic, guest Alan Winfield made a startling observation: in the early days of AI, we took for granted that a computer would find easy the same tasks that a person finds easy, and that, similarly, a computer would have difficulty with the same tasks a person finds difficult.

Playing chess, for example, takes quite a bit of human skill to do well, so it seemed like an appropriate challenge.

But for a computer, which can quickly store and analyze many possible moves and outcomes – playing chess is relatively easy. On the other hand, recognizing sarcasm in a human-produced sentence is nearly impossible. Indeed, this is one of the challenges of computer science today.

All this is relevant to the concept of learning and matrix-downloads because the groundbreaking area of artificial intelligence is machine learning – algorithms that help a computer learn from initial data to make predictions about future data.
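
To make that concrete, here is a minimal sketch of the fit-then-predict pattern described above, using scikit-learn. The library choice and the data are my own assumptions for illustration, not anything from the podcast.

```python
# A minimal sketch of the "learn from initial data, predict future data"
# pattern. The numbers are made up for illustration.
from sklearn.linear_model import LinearRegression

# "Initial data": hours studied vs. exam score (hypothetical)
hours = [[1], [2], [3], [4], [5]]
scores = [52, 58, 65, 71, 78]

model = LinearRegression()
model.fit(hours, scores)      # the computer "learns" a trend from the data

# "Future data": predict a score for 6 hours of study
print(model.predict([[6]]))   # roughly 84, extrapolated from the learned trend
```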

The idea of downloadable knowledge implies that such learning is unnecessary – we only need a massive input of all available data to make sense of it all. But a deeper understanding of knowledge and computing reveals that – not only is such technology unlikely to emerge any time soon, it is not really how computers work, either.

There is something ineffable about learning, about the process of figuring things out and trying again and again and again. To say the process is invaluable is not merely some human platitude, it is a subtle point about the nature of knowledge.


Pedagogy and Disciplines

After several years of working in academia, it’s been interesting to be back in the classroom as a student. Teaching per se was not central to my previous role, but a lot of my work focused on student development.

I’ve also had a somewhat untraditional academic path. My undergraduate studies were in physics, I went on to get a master’s in marketing communication, and then through work I had the opportunity to co-teach an undergraduate philosophy seminar course. So, I’ve been particularly struck by the different pedagogical approaches that can be found in different disciplines.

In many ways, these pedagogical approaches can be linked back to different understandings of wisdom: techne, technical knowledge; episteme, scientific knowledge; and phronesis, practical wisdom.

My undergraduate studies in physics focused on episteme – there was some techne as they taught specific mathematical approaches, but the real emphasis was on developing our theoretical understanding.

My master’s program – aimed at preparing people for careers in marketing – lay somewhere between techne and phronesis. Teaching by case studies is typically associated with phronesis – since the approach is intended to teach students how to make good decisions when confronted with new challenges. But the term is not a perfect fit for marketing – phronesis traditionally takes “good decisions” to be ethical decisions, whereas these studies took “good” to mean “good for business.” The term techne, which implies a certain art or craftsmanship, is also relevant here.

The philosophy seminar I co-taught focused on phronesis. This is by no means intrinsic to philosophy as a discipline, but my specific class focused on civic studies, an emergent field that asks, “what should we do?”

This question is inherently linked to phronesis: as urban planner Bent Flyvbjerg writes in arguing for more phronesis in the social sciences, “a central question for phronesis is: What should we do?”

Each of these types of wisdom could be tied to different pedagogical methods by exploring the tasks expected of students. To develop phronesis, students are confronted with novel contextual situations and asked to develop solutions. For techne, students have to create something – this might be a rote recreation of one’s multiplication tables, or could involve a more artistic pursuit. Episteme would be taught through problem sets – asking students to apply theoretical knowledge to answer questions with discrete answers.

From my own experience, different disciplines tend to gravitate towards different types of wisdom. But I wonder how inherent these approaches are to a discipline. Episteme may be the norm in physics, for example, but what would a physics class focused on phronesis look like?

 


Behavioral Responses to Social Dilemmas

I had the opportunity today to attend a talk by Yamir Moreno, of the University of Zaragoza in Spain. A physicist by training, Moreno has more recently been studying game theory and human behavior, particularly in a complex systems setting.

In research published in 2012, Moreno and his team had people of various age groups play a typical prisoner’s dilemma game: a common scenario where an individual’s best move is to defect, but everyone suffers if everyone defects. The best outcome is for everyone to cooperate, but that can be hard to achieve since individuals have incentives to defect.
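
For readers who haven’t seen the game before, here is a minimal sketch of a prisoner’s dilemma payoff structure in Python. The payoff values are textbook defaults, not the ones used in Moreno’s experiment.

```python
# Payoffs are (my_payoff, their_payoff); the numbers are illustrative only.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

for my_move in ("cooperate", "defect"):
    for their_move in ("cooperate", "defect"):
        mine, _ = PAYOFFS[(my_move, their_move)]
        print(f"I {my_move}, they {their_move}: I earn {mine}")

# Whatever the other player does, defecting earns me more individually,
# yet mutual defection (1, 1) leaves everyone worse off than mutual
# cooperation (3, 3) -- that is the dilemma.
```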

Playing in groups of four over several rounds, players were matched according to an underlying network structure – in one version of the game players sat on a traditional lattice, while in another they were arranged in a scale-free network.

As you might expect from a prisoner’s dilemma, when a person’s neighbors cooperated that person was more likely to cooperate in later rounds. When a person’s neighbors defected, that person was more likely to defect in later rounds.

Interestingly, in this first version of the experiment, Moreno found little difference between the lattice and scale-free structure.

Postulating that this was due to the static nature of the network, Moreno devised a different experiment: players were placed in an initial network structure, but they had the option to cut or add connections to other people. New connections were always reciprocal, with both parties having to agree to the connection.

He then ran this experiment over several different parameters, with some games allowing players to see each other’s past actions and other games having no memory.

In the setting where people could see past actions, cooperation was significantly higher – about 20-30% more than expected otherwise. People who chose to defect were cut out of these networks and ultimately weren’t able to benefit from their defecting behavior.

I found this particularly interesting because earlier in the talk I had been thinking of Habermas. As interpreted by Gordon Finlayson, Habermas thought the result of standard social theory was “a false picture of society as an aggregate of lone individual reasoners, each calculating the best way of pursuing their own ends. This picture squares with a pervasive anthropological view that human beings are essentially self-interested, a view that runs from the ancient Greeks, through early modern philosophy, and right up to the present day. Modern social theory, under the influence of Hobbes or rational choice theory, thinks of society in similar terms. In Habermas’ eyes, such approaches neglect the crucial role of communication and discourse in forming social bonds between agents, and consequently have an inadequate conception of human association.”

More plainly – it is a critical feature of the Prisoner’s Dilemma that players are not allowed to communicate.

If they could communicate, Habermas offers, they would form communities and associate very differently than in a communication-free system.

Moreno’s second experiment didn’t include communication per se – players didn’t deliberate about their actions before taking them. But in systems with memory, a person’s actions became part of the public record – information that other players could take into account before associating with them.

In Moreno’s account, the only way for cooperators to survive is to form clusters. On the other hand, in a system with memory, a defector must join those communities as a cooperative member in order to survive.


Glassy Landscapes and Community Detection

In the last of a series of talks, Cris Moore of the Santa Fe Institute spoke about the challenges of “false positives” in community detection.

He started by illustrating the psychological origin of this challenge: people evolved to detect patterns. In the wilderness, he argued, it’s better to see a tiger that isn’t there (a false positive) than to miss one that is (a false negative).

So we tend to see patterns that aren’t there.

In my complex networks class, this point was demonstrated when a community-detection algorithm found a “good” partition of communities in a random graph. This is surprising because a random network doesn’t have communities.

The algorithm found a “good” identification of communities because it was looking for a good identification of communities. That is, it randomly selected a bunch of possible ways to divide the network into communities and simply returned the best one – without any thought as to what that result might mean.
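
Here is a rough sketch of that classroom demonstration using networkx. The graph size, edge probability, and choice of algorithm are stand-ins of mine, not necessarily what we used in class.

```python
# Run a community-detection algorithm on an Erdős–Rényi random graph --
# a graph with no planted communities -- and watch it report a "good"
# partition anyway.
import networkx as nx
from networkx.algorithms import community

G = nx.gnp_random_graph(200, 0.05, seed=42)   # purely random graph

partition = community.greedy_modularity_communities(G)
Q = community.modularity(G, partition)

print(f"Found {len(partition)} 'communities' with modularity {Q:.2f}")
# The modularity comes out comfortably positive even though the graph is
# random: the algorithm finds structure because it is looking for structure.
```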

Moore illustrated a similar finding by showing a graph whose nodes were visually clustered into two communities. One side was colored blue and the other side colored red. Only 11% of the edges crossed between the red community and the blue community. Arguably, it looked like a good identification of communities.

But then he showed another identification of communities. Red nodes and blue nodes were all intermingled, but still – only 11% of the edges connected the two communities. Numerically, both identifications were equally “good.”

This is usually a sign that you’re doing something wrong.

Informally, the challenge here is competing local optima – e.g., a “glassy” surface. There are multiple “good” solutions which produce disparate results.

So if you set out to find communities, you will find communities – whether they are there or not.

Moore used a belief propagation model to consider this point further. Imagine a network where every node is trying to figure out what community it is in, and where every node tells its neighbors what community it thinks it’s in.

Interestingly, node i‘s message to node j – the probability that i is part of community r – will be based on the information i receives from all its neighbors other than j. As you may have already gathered, this is an iterative process, so factoring j‘s message to i into i‘s message to j would just create a nightmarish feedback loop of the two nodes repeating information to each other.
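
As a rough illustration of that structure – emphatically not Moore’s actual update equations – a single message update might be sketched like this:

```python
# Schematic belief-propagation message: node i's belief about its own
# community, as reported to neighbor j. 'incoming[(k, i)]' holds the last
# message node k sent to node i, and 'affinity[r][s]' is how likely an
# edge is between communities r and s.
def message(i, j, neighbors, incoming, affinity, n_communities):
    belief = [1.0] * n_communities
    for k in neighbors[i]:
        if k == j:                     # exclude j: no echo-chamber feedback
            continue
        for r in range(n_communities):
            belief[r] *= sum(affinity[r][s] * incoming[(k, i)][s]
                             for s in range(n_communities))
    total = sum(belief)
    return [b / total for b in belief]  # normalized belief, sent to j
```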

Moore proposed a game: someone gives you a network, telling you the average degree, k, and the probabilities of in-community and out-of-community connections, and asks you to label the community of each node.

Now, imagine using this belief propagation model to identify the proper labeling of your nodes.

If you graph the “goodness” of the communities you find as a function of λ – where λ = (kin – kout)/2k is the second eigenvalue (normalized by the average degree) of the matrix representing the in- and out-community connection strengths – you will find that the network undergoes a phase transition tied to the square root of the average degree.

Moore interpreted the meaning of that phase transition – if you are looking for two communities, there is a trivial, global fixed point at λ = 1/√k. His claim – an unsolved mathematical question – is that below that fixed point communities cannot be detected. It’s not just that it’s hard for an algorithm to detect communities, but rather that the information is theoretically unknowable.
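
For reference – writing c for the average degree and c_in, c_out for the in- and out-community connection strengths of the underlying block model (my notation, which may differ from Moore’s by a normalization) – that two-community threshold is usually written as:

$$
\lambda = \frac{c_{\text{in}} - c_{\text{out}}}{2c},
\qquad
\text{communities are detectable when } \lambda > \frac{1}{\sqrt{c}}
\;\Longleftrightarrow\;
c_{\text{in}} - c_{\text{out}} > 2\sqrt{c}.
$$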

If you are looking for more than two communities, the picture changes somewhat. There is still the trivial fixed point below which communities are theoretically undetectable, but there is also a “good” fixed point where you can start to identify communities.

Between the trivial fixed point and the good fixed point, finding communities is possible, but – in terms of computational complexity – is hard.

Moore added that the good fixed point has a narrow basin of attraction – so if you start with random nodes transmitting their likely community, you will most likely fall into a trivial fixed point solution.

This type of glassy landscape leads to the type of mis-identification errors discussed above – where there are multiple seemingly “good” identifications of communities within a network, each of which varies wildly in how it assigns nodes.


Phase Transitions in Random Graphs

Yesterday, I attended a great talk on Phase Transitions in Random Graphs, the second lecture by visiting scholar Cris Moore of the Santa Fe Institute.

Now, you may be wondering, “Phase Transitions in Random Graphs”? What does that even mean?

Well, I’m glad you asked.

First, “graph” is the technical math term for a network. So we’re talking about networks here, not about random bar charts or something. The most common random graph model is the Erdős–Rényi model, developed by Paul Erdős and Alfréd Rényi. (Interestingly, a similar model was developed separately and simultaneously by Edgar Gilbert, who gets none of the credit, but that is a different post.)

The Erdős–Rényi model is simple: you have a set number of vertices, and each pair of vertices is connected by an edge with probability p.

Imagine you are a really terrible executive for a new airline company: there are a set number of airports in the world, and you randomly assign direct flights between cities. If you don’t have much start-up capital, you might have a low probability of connecting two cities – resulting in a random smattering of flights. So maybe a customer could fly between Boston and New York or between San Francisco and Chicago, but not between Boston and Chicago. If your airline has plenty of capital, though, you might have a high probability of flying between two cities, resulting in a connected network of routes allowing a customer to fly from anywhere to anywhere.

The random network is a helpful baseline for understanding what network characteristics are likely to occur “by chance,” but as you may gather from the example above – real networks aren’t random. A new airline would presumably have a strategy for deciding where to fly – focusing on a region and connecting to at least a few major airports.

A phase transition in a network is similar conceptually to a phase transition in a physical system: ice undergoes a phase transition to become a liquid and can undergo another phase transition to become a gas.

A random network undergoes a phase transition when it goes from having lots of disconnected little bits to having a large component.

But when/why does this happen?

Let’s imagine a random network with nodes connected with probability p. In this network, p = k/n where k is a constant and n is the number of nodes in the network. We would then expect each node to have an average degree of k.

So if I’m a random node in this network, I can calculate the average size of the component I’m in. I am one node, connected to k nodes. Since each of those nodes is also connected to k nodes, that makes roughly k^2 nodes two steps away from me. This continues outwards as a geometric series. For small k, the geometric series formula tells us that the expected component size converges to 1 / (1 – k).
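
A quick numerical sanity check of that geometric series – the value of k and the number of terms here are arbitrary:

```python
# Partial sum of 1 + k + k^2 + ... versus the closed form 1 / (1 - k), for k < 1.
k = 0.5
partial_sum = sum(k**i for i in range(50))
print(partial_sum, 1 / (1 - k))   # both print approximately 2.0
```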

So we would expect something wild and crazy to happen when k = 1.

And it does.

This is called the “critical point” of a random network. It is at this point when a network goes from a random collection of disconnected nodes and small components to having a large component. This is the random network’s phase transition.
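
Here is a minimal simulation of that critical point using networkx; the network size and the values of k are arbitrary choices of mine:

```python
# Generate Erdős–Rényi graphs with p = k/n and watch the largest
# component jump as k crosses 1.
import networkx as nx

n = 10_000
for k in (0.5, 0.9, 1.0, 1.1, 1.5, 2.0):
    G = nx.fast_gnp_random_graph(n, k / n, seed=1)
    giant = max(nx.connected_components(G), key=len)
    print(f"k = {k}: largest component has {len(giant)} of {n} nodes")

# Below k = 1 the largest component is a vanishing fraction of the network;
# above k = 1 a giant component containing a sizable fraction of all nodes
# appears.
```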


Cris Moore on computational complexity

I am very excited that Northeastern is hosting a series of lectures by Cris Moore of the Santa Fe Institute. Moore’s background is in physics, but he has also distinguished himself in the fields of mathematics, computer science, and network science.

Basically, he knows pretty much everything.

He began his lecture series yesterday with a talk on “computational complexity and landscapes.” Exploring qualitative differences between different types of algorithms, he essentially sought to answer the question: why are some problems “hard”?

Of course, “hard” is a relative term. In computer science, the word has a very specific meaning which requires some history to explain.

In 1971, Stephen Cook posed a question which remains open in computer science today: if a solution to a problem can be (quickly) verified by a computer, can that problem also be (quickly) solved by a computer?

This question naturally sorts problems into two categories: those that can be solved in polynomial time (P) and those for which an answer can be checked in polynomial time (NP, for “nondeterministic polynomial time”).

Consider the famous Bridges of Königsberg problem. In 1736, Euler took up a puzzle about the city of Königsberg: he wanted to find a path that would cross each of the city’s seven bridges exactly once.

Now you may start to appreciate the difference between finding a solution and checking a solution. Given a map of Königsberg, you might try a path and then try another path. If you find that neither of these paths work, you have accomplished exactly nothing: you still don’t have a solution but neither have you proven that none exists.

Only if you spent all day trying each and every possibility would you know for certain if a solution existed.

Now that is hard.

If, on the other hand, you were equipped with both a map of Königsberg and a Marauder’s map indicating the perfect route, it would be relatively easy to verify that you’d been handed a perfect solution. Once you have the right route it is easy to show that it works.

Of course, if the story were that simple, “P versus NP” wouldn’t have remained an open problem in computer science for forty years. Euler famously solved the Bridges of Königsberg problem without using the exhaustive technique described above. Instead, in a proof that laid the foundations of network science, Euler reduced the problem to one of vertices and edges. Since the connected path that he was looking for requires that you both enter and leave a land mass, an odd number of bridges meeting in one place poses problems. Eventually you will get stuck.

In this way, Euler was able to prove that there was no such path without having to try every possible solution. In computer science terms, he was able to solve it in polynomial time.
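
Here is a small sketch of Euler’s degree argument as a polynomial-time check. The land masses are labeled A through D, and the seven-bridge list is the standard textbook rendering of the problem:

```python
# A connected graph has a path crossing every edge exactly once if and only
# if it has zero or two vertices of odd degree. Königsberg fails the test.
from collections import defaultdict

bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
           ("A", "D"), ("B", "D"), ("C", "D")]

degree = defaultdict(int)
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

odd = [node for node, d in degree.items() if d % 2 == 1]
print("Eulerian path exists:", len(odd) in (0, 2))   # False: all four are odd
```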

This is the fundamental challenge of the P versus NP dilemma: a problem which initially appears to be “NP” may be reducible to something that is simply “P.”

There are many problems which have not been successfully reduced, and there’s good reason to think that they never will be. But, of course, this has the flavor of a typical NP problem: you can’t prove that no NP problem can be reduced until you’ve exhausted all of them.

As Cris Moore said, “the sky would fall” if it turned out that all NP problems are reducible to P.

To get a sense of the repercussions of such a discovery: a problem is defined as NP-hard if it is “at least as hard as the hardest problems in NP.” That is, other NP problems can be reduced, in polynomial time, to NP-hard problems.

That is to say, if you found a quick solution to an NP-hard problem it could also be applied as a quick solution to every NP problem. Thousands of problems which currently require exhaustive permutations to solve could suddenly be solved in polynomial time.

The sky would fall indeed.

His lecture series will continue on Thursdays through November 12.


New Horizons

In January 2006, I’d just wrapped up a year working at the planetarium at the Museum of Science.

At the time I could have told you exactly what was visible in that night’s sky. I could have told you which planets were in retrograde and I could have told you where to point your junior telescope to see something interesting.

Also in January of 2006 – nearly 10 years ago – NASA launched a new and ambitious mission. One that would take a decade and three billion miles to complete. A flyby of Pluto by the New Horizons spacecraft.

That icy underdog was just eight months away from being reclassified as a dwarf planet.

One of the largest of the icy “Kuiper Belt objects,” Pluto – along with its largest moon, Charon – will add important knowledge to our understanding of the objects at the edge of our solar system.

Interestingly, Pluto is the only planet(like object) in our solar system whose atmosphere is escaping into space. This flyby could help us understand important things about our own atmosphere as well.

In February of 2007, I don’t even know what I was doing because it was so long ago I can’t quite remember clearly.

But at that time the New Horizons spacecraft was determinedly chugging along, passing that popular gas giant, Jupiter. Slingshotting off that planet’s gravity cut three years off the weary spacecraft’s journey, while the flyby captured over 700 separate observations of the Jovian system.

In December of 2011, New Horizons drew closer to Pluto than any spacecraft had ever been.

And on July 14, 2015, 3463 days after its mission began, New Horizons made its closest flyby of Pluto, capturing so much data that it will take us 16 months to download it all.

That’s incredible.

I don’t know what to say other than that. It’s incredible. It’s incredible what can be accomplished with a passionate team of people and a whole lot of patience.

Congratulations, New Horizons, you made it.


Okinawa

I recently heard a story that I’ve heard a few times before:

The Battle of Okinawa was one of the bloodiest battles of World War II. The 82-day battle claimed the lives of 14,000 Allied troops and 77,000 Japanese soldiers. Most tragically, somewhere between 100,000 and 150,000 Japanese civilians died.

There’s just one thing: that story is a bit of WWII era political propaganda. Or at best, a misunderstanding of Japanese geopolitics.

The horror of Okinawa was used in part to justify the dropping of the atomic bombs on Hiroshima and Nagasaki.

The two bombings claimed at least 129,000 lives – including many civilians in Hiroshima. But ultimately, we are to believe, the act was just. The Battle of Okinawa showed that the Japanese were exactly the monsters our propaganda made them out to be – cold and bloodthirsty. Willing to sacrifice themselves and their civilian population for a cause they foolishly found to be noble.

Using that logic, the bombings were a mercy, really.

Some estimates put the cost of a land war at 400,000 to 800,000 American fatalities and a shocking five to ten million Japanese fatalities.

The atomic bomb may have been a drastic assault, but ultimately it ended the war faster leading to fewer fatalities for Americans and the Japanese alike.

Now let’s back up a minute.

Let’s put aside the fact that it’s hard to be precise about the number of deaths in Nagasaki and Hiroshima, in part due to the terrible health impacts from radiation exposure.

What exactly did happen in Okinawa?

The numbers of deaths cited above are about as accurate as war fatality counts are likely to be. Many Americans died, many more in the Japanese army died, and even more civilians died.

But they weren’t Japanese civilians. They were Okinawans. Even amongst the military dead, many of those “Japanese” soldiers were Okinawan conscripts.

Why does that distinction matter?

For centuries Okinawa had been an autonomous regime with its own distinct culture. It faced increasing encroachment from Japanese forces and was officially annexed in 1879 – a mere 66 years before the Battle of Okinawa.

All of that is to say – the Okinawans were not Japanese. They were Okinawan. Culturally distinct and treated as second class citizens or worse by their Japanese oppressors. The Okinawans had no military tradition and “frustrated the Japanese with their indifference to military service.”

Those were the people who died in Okinawa.

Not rabid Japanese nationalists determined to do anything for victory. Simply civilians and civilians dressed up as soldiers. Forced into service for a repressive regime.

Casualties were so high at Okinawa because the Japanese didn’t really care whether the Okinawans lived or died.

We’d be right to judge the Japanese harshly for their disdain for Okinawan life – but we must find ourselves equally wanting. The American government has always cared more for American lives.

Perhaps that is right. And perhaps the nuclear bomb really was the moral thing to do.

But let’s always dig a little deeper, try a little harder to understand a people apart from ourselves. And let us not base our understanding on a caricature or on an outdated piece of propaganda.

And let us remember: Okinawans died here.


E=MC^2

Einstein’s famous formula is truly a work of art.

Perhaps it would be more accurate to say that nature is a work of art, but the beauty that can be contained in a seemingly simple mathematical formula is truly astounding.

I have a very distinct memory of learning the famous E=MC^2 equation in high school, at which point it was explained something like this:

Energy equals Mass times the Speed of Light squared. That means that the amount of energy in a hamburger (yes, that was the example) is equal to the mass of that hamburger times the speed of light squared. The speed of light in a vacuum is 299,792,458 m/s, so that number squared is really really big. Therefore the amount of energy in a hamburger is really really big.
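
For the curious, here is the back-of-the-envelope version of that calculation. The quarter-kilogram hamburger is my assumption, not part of the original example:

```python
# E = m * c^2 for a (hypothetical) quarter-kilogram hamburger.
c = 299_792_458            # speed of light in a vacuum, in m/s
m = 0.25                   # mass of the hamburger, in kg (assumed)

E = m * c**2
print(f"{E:.2e} joules")   # about 2.2e16 J -- really really big indeed
```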

Mind = blown.

Well, sort of.

The above description is accurate and it is, in fact, remarkable that so much energy could be contained in something of small mass. But that explanation is so flat, so uninspiring, so…uninformative.

Why should anyone care that energy equals mass times the speed of light squared? And what does the speed of light have to do with anything? Is it just some magic number that you can throw into an equation to solve all your problems and sound really smart?

The famous E=MC^2 equation is the most practical form if you’d like to calculate energy, but I personally prefer to think in terms of that mysterious constant, c:

The speed of light – in a vacuum, a critical detail – is equal to the square root of energy over mass.

That is to say, energy and mass are directly proportional, and their ratio is constant. That ratio is the square of the maximum speed at which anything – including an object with no mass – can travel:

The speed of light in a vacuum.
