Bitcoin has taken quite a beating for its libertarian design biases, price volatility due to speculation, and the questionable practices of some currency-exchange firms. But whatever the real or perceived flaws of Bitcoin, relatively little attention has been paid to its “engine,” known as “distributed ledger” or “blockchain” technology. Move beyond the superficial public discussions about Bitcoin, and you’ll discover a software breakthrough that could be of enormous importance to the future of commoning on open network platforms.
Blockchain technology is significant because it can validate the authenticity of an individual bitcoin without the need for a third-party guarantor such as a bank or government body. This solves a vexing collective-action problem in an open network context: How do you know that a given bitcoin is not a counterfeit? Or to extend this idea: How do you know that a given document, certificate or dataset – or a vote or “digital identity” asserted by an individual – is the “real thing” and not a forgery?
Blockchain technology can help solve this problem by using a searchable online “ledger” that keeps track of all transactions of all bitcoins. The ledger is updated about six times an hour, each time incorporating a new set of transactions known as a “block” into the ledger. What makes the blockchain so revolutionary is that the information on it is shared by everyone on the network using the Bitcoin software. The ledger acts as a kind of permanent record maintained by a vast distributed peer network, which makes it far more secure than data kept at a centralized location. You can trust the authenticity of a given bitcoin because it’s virtually impossible to corrupt a ledger that is spread across so many nodes in the network.
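The tamper-evidence described above comes from each block embedding a cryptographic hash of the block before it. Here is a minimal sketch in Python of that hash-chaining idea only; real Bitcoin blocks also involve proof-of-work, timestamps and Merkle trees, none of which are modeled here:

```python
import hashlib
import json

def block_hash(block):
    """Deterministically hash a block's contents (JSON with sorted keys)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, transactions):
    """Append a new block that commits to the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})
    return chain

def verify_chain(chain):
    """Recompute every link; editing an earlier block breaks all later links."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
assert verify_chain(chain)

# Tampering with an earlier transaction is immediately detectable:
chain[0]["transactions"][0]["amount"] = 500
assert not verify_chain(chain)
```

Because each block's hash depends on its predecessor's hash, rewriting history requires recomputing every subsequent block on a majority of the network's nodes at once, which is what makes a widely replicated ledger so hard to corrupt.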
What does all this have to do with the commons? you might ask. A recently released report suggests that blockchain technology could provide a critical infrastructure for building what are called “distributed collaborative organizations.” (One variation is called “decentralized autonomous organizations.”) A distributed organization is one that uses blockchain technology to give its members specified rights within the organization, which are managed and guaranteed by the blockchain. This set of rights, in turn, can be linked to the conventional legal system to make those rights legally cognizable.
In an amazingly prescient 2007 essay, “The Exit From Capitalism Has Already Begun,” journalist and social philosopher André Gorz explained how computerization and networks are causing a profound crisis in capitalism by making knowledge more shareable. He argues that shareable knowledge and culture undercut capitalist control over the global market system as the exclusive apparatus for production and consumption (and thus our “necessary” roles as wage-earners and consumers).
The essay, translated by Chris Turner, originally appeared in the journal EcoRev in Autumn 2007 and was reprinted in Gorz’s 2008 book Ecologica. It’s worth revisiting this essay because it so succinctly develops a theme that is now playing out, one that Jeremy Rifkin reprises and elaborates upon in his 2014 book The Zero Marginal Cost Society.
Let’s start with the conundrum that capital faces as computerization makes it possible to produce more with less labor. Gorz writes:
The cost of labor per unit of output is constantly diminishing and the price of products is also tending to fall. The more the quantity of labor for a given output decreases, the more the value produced per worker – productivity – has to increase if the amount of achievable profit is not to fall. We have, then, this apparent paradox: the more productivity rises, the more it has to go on rising, in order to prevent the volume of profit from diminishing. Hence the pursuit of productivity gains moves ever faster, manpower levels tend to reduce, while pressure on workers intensifies and wage levels fall, as does the overall payroll. The system is approaching an internal limit at which production and investment in production cease to be sufficiently profitable.
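One stylized way to express the arithmetic behind Gorz’s paradox (the notation is mine, not Gorz’s): let $Q$ be output, $p$ the price per unit, $w$ the wage rate and $l$ the labor hours required per unit. Total profit is then

```latex
\Pi = Q \, (p - w\,l)
```

Automation shrinks $l$, and competition pushes $p$ down toward the lower unit cost, so the margin $(p - w\,l)$ narrows. Keeping $\Pi$ from falling therefore requires productivity $1/l$ to rise ever faster, which in turn cuts manpower and wages further – the internal limit Gorz describes.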
Over time, Gorz explains, this leads investors to turn away from the “real economy” of production, where productivity gains and profits are harder to achieve, and instead seek profit through financial speculation in “fictitious” forms of value such as debt and new types of financial instruments. The value is fictitious in the sense that loans, return on investment, future economic growth, trust and goodwill are social intangibles that are quite unlike physical capital. They depend upon collective belief and social trust, and can evaporate overnight.
Still, it is generally easier and more profitable to invest in these (fictitious, speculative) forms of financial value than in actually producing goods and services at a time when productivity gains and profit are declining. No wonder speculative bubbles are so attractive: there is simply too much capital sloshing around looking for profitable investment, which the real economy is less and less capable of delivering. No wonder companies are sitting on so much cash (from profits) that they decline to invest. No wonder the amount of available finance capital dwarfs the real economy. Gorz noted that financial assets in 2007 stood at $160 trillion, which was three to four times global GDP – a ratio that has surely grown more extreme in the past eight years.
Creative Commons has just issued a report documenting usage patterns of its licenses. It’s great to learn that the number of works using CC licenses has soared since this vital (and voluntary) workaround to copyright law was introduced twelve years ago, in 2003.
According to the report, State of the Commons, the licenses were used on an estimated 50 million works in 2006 and on 400 million works in 2010. By 2014, that number had climbed to 882 million CC-licensed works. Nine million websites now use CC licenses, including major sites like YouTube, Wikipedia, Flickr, Public Library of Science, Scribd and Jamendo. The report includes a great series of infographics that illustrate key findings.
For any latecomers, CC licenses are a free set of public licenses that let copyright holders of books, films, websites, music, photography and other creative works choose to make their works legally shareable. The licenses are necessary because copyright law makes no provisions for sharing beyond a vaguely defined set of “fair use” principles. Copyright law is mostly about automatically locking up all works in a strict envelope of private property rights. This makes it complicated and costly to let others legally share and re-use works.
The CC licenses were invented as a solution, just as Web 2.0 was getting going. They have functioned as a vital element of infrastructure for building commons of knowledge and creativity, providing a sound legal basis for sharing digital content and helping to leverage the power of network-driven sharing.
Is it possible to imagine a new sort of synthesis or synergy between the emerging peer production and commons movement on the one hand, and growing, innovative elements of the co-operative and solidarity economy movements on the other?
That was the animating question behind a two-day workshop, “Toward an Open Co-operativism,” held in August 2014 and now chronicled in a new report by UK co-operative expert Pat Conaty and me. (Pat is a Fellow of the New Economics Foundation and a Research Associate of Co-operatives UK, and attended the workshop.)
The workshop was convened because the commons movement and peer production share a great deal with co-operatives... but they also differ in profound ways. Both share a deep commitment to social cooperation as a constructive social and economic force. Yet they draw upon very different histories, cultures, identities and aspirations in formulating their visions of the future. There is great promise in the two movements growing more closely together, but also significant barriers to that occurring.
The workshop, hosted by the Commons Strategies Group in Berlin, Germany, on August 27 and 28, 2014, explored this topic, as captured by the subtitle of the report: “A New Social Economy Based on Open Platforms, Co-operative Models and the Commons.” It was supported by the Heinrich Böll Foundation, with assistance from the Charles Léopold Mayer Foundation of France.
Below is the Introduction to the report, followed by the Contents page. You can download a pdf of the full report (28 pages) here. The entire report is licensed under a Creative Commons Attribution-ShareAlike (BY-SA) 3.0 license, so feel free to re-post it.
As one of the countries hardest hit by austerity politics, Greece is also in the vanguard of experimentation to find ways beyond the crisis. Now there is a documentary film about the growth of commons-based peer production in Greece, directed by Ilias Marmaras. "Knowledge as a common good: communities of production and sharing in Greece” is a low-budget, high-insight survey of innovative projects such as FabLab Athens, Greek hackerspaces, Frown, an organization that hosts all sorts of maker workshops and presentations, and other projects.
A beta-version website Common Knowledge, devoted to “communities of production and sharing in Greece,” explains the motivation behind the film:
“Greece is going through the sixth year of recession. Austerity policies imposed by IMF, ECB and the Greek political pro-memorandum regimes, foster an unprecedented crisis in economy, social life, politics and culture. In the previous two decades the enforcement of the neoliberal politics to the country resulted in the disintegration of the existed social networks, leaving society unprepared to face the upcoming situation.
During the last years, while large parts of the social fabric have been expelled from the state and private economy, through the social movements which emerge in the middle of the crisis, formations of physical and digital networks have appeared not only in official political and finance circles, but also as grassroots forms of coexistence, solidarity and innovation. People have come together, experimenting in unconventional ways of collaboration and bundling their activities in different physical and digital networks. They seek answers to problems caused by the crisis, but they are also concerned about issues due the new technical composition of the world. In doing so they produce and share knowledge.”
George Papanikolaou of the P2P Foundation in Greece describes how peer production is fundamentally altering labor practices and offering hope: “For the first time, we are witnessing groups of producers having the chance to meet up outside the traditional frameworks – like that of a corporation, or state organization. People are taking initiatives to form groups in order to produce goods that belong in the commons sphere.”
Today saw the beginning of the biennial conference on Internet, Politics and Policy, convened by the Oxford Internet Institute (University of Oxford) and OII-edited academic journal Policy and Internet. This year’s conference theme is Crowdsourcing for Politics and Policy. Skimming over some papers and abstracts, here are some of my first (and rather superficial) impressions:
Despite the focus of the conference, there are few papers looking at an essential issue of crowdsourcing, namely its potential epistemic attributes. That is, when, why and how “the many are smarter than the few” and the role that technology plays in this.
In methodological terms, it seems that very little of the research presented takes advantage of the potential offered by ICT mediated processes when it comes to i) quantitative work with “administrative” data and ii) experimental research design.
On the issue of deliberation, it is good to see that more people are starting to look at design issues, slowly moving away from the traditional fixation on the Habermasian ideal (I’ve talked about this in a presentation here).
It seems that the majority of the papers focus on European experiences or those from other developed countries. At first, this is not surprising given the location of the conference and the resources that researchers from these countries have (e.g. travel budget). Yet, it may also suggest limited integration between North/South networks of researchers.
With regard to the last point above, it appears that there is a bridge yet to be built between the community of researchers represented by those attending this conference and the emerging community from the tech4accountability space. There’s lots of potential gain for both sides in engaging in a dialogue and, as importantly, in developing a common language. The “Internet & Politics” community would benefit from the tech4accountability community’s focus – although sometimes fuzzy – on development outcomes and experiences that emerge from the “South”. Conversely, the tech4accountability community would benefit a great deal by connecting with the existing (and clearly more mature) knowledge when it comes to the intersection of ICT, politics and citizen engagement.
Needless to say, all of the above are initial impressions and broad generalizations, and as such, may be unfair. The OII biennial conference remains, without a doubt, one of the major conferences in its field. You can view the full program of the conference here. I have also listed below in a simplified manner the links to the available papers of the conference according to their respective tracks.
Michel Bauwens and Vasilis Kostakis have just published a new book that offers a rich, sophisticated critique of our current brand of capitalism, and looks to current trends in digital collaboration to propose the outlines of the next, network-based economy and society.
Bauwens is the founder of the P2P Foundation, and Kostakis is a political economist and founder of the P2P Lab. He is also a research fellow at the Ragnar Nurkse School of Innovation and Governance at Tallinn University of Technology, Estonia.
Kostakis and Bauwens write:
The aim of this book is not to provide yet another critique of capitalism but rather to contribute to the ongoing dialogue for post-capitalist construction, and to discuss how another world could be possible. We build on the idea that peer-to-peer infrastructures are gradually becoming the general conditions of work, economy, and society, considering peer production as a social advancement within capitalism but with various post-capitalistic aspects in need of protection, enforcement, stimulation and connection with progressive social movements.
The authors outline four scenarios to “explore relevant trajectories of the current techno-economic paradigm within and beyond capitalism.” They envision the rise of “netarchical capitalism,” a network-based capitalism that sanctions several types of compatible and conflicting forms of capitalism – what they call “the mixed model of neo-feudal cognitive capitalism.” Other possible variations include “distributed capitalism, resilient communities and global Commons.”
I'm happy to announce that a new collection of essays that I've co-edited with John Clippinger, executive director of ID3, has been published. It's called From Bitcoin to Burning Man and Beyond. The fifteen essays in the book explore a new generation of digital technologies that are re-imagining the foundations of digital identity, governance, trust and social organization.
ID3 is a Boston-based nonprofit affiliated with the M.I.T. Media Lab, and was co-founded by Clippinger and the social computing and data expert Professor Alex “Sandy” Pentland, who directs M.I.T.’s Human Dynamics Laboratory.
The book is focused on the huge, untapped potential for self-organized, distributed governance on open platforms. There are many aspects to this challenge, but some of the more interesting prospects include evolvable digital contracts that could supplant conventional legal agreements; smartphone currencies that could help Africans meet their economic needs more effectively; the growth of the commodity-backed Ven currency; and new types of “solar currencies” that borrow techniques from Bitcoin to enable more efficient, cost-effective solar generation and sharing by homeowners.
A chapter on the 28-year history of Burning Man, the week-long encampment in the Nevada desert, traces the arc of experimentation and innovation in large communities devising new forms of self-governance.
I co-authored an essay in the book, "The Next Great Internet Disruption: Authority and Governance," which appeared in an earlier form here.
The book is published by ID3 in association with Off the Common Books, and is available in print and ebook formats from Amazon.com and Off the Common Books. A free, downloadable pdf of the book is available at the ID3 website. (The book is licensed under a Creative Commons BY-NC-SA license.)
Among the contributors to From Bitcoin to Burning Man and Beyond are Alex “Sandy” Pentland of the M.I.T. Human Dynamics Laboratory; former FCC Chairman Reed E. Hundt; long-time IBM strategist Irving Wladawsky-Berger; Silicon Valley entrepreneur Peter Hirshberg; monetary system expert Bernard Lietaer; journalist and author Jonathan Ledgard; and H-Farm cofounder Maurizio Rossi.
In addition to explorations of self-governance, From Bitcoin to Burning Man and Beyond introduces the path-breaking software platform that ID3 has developed called “Open Mustard Seed,” or OMS. The just-released open source program enables the rise of new types of trusted, self-healing digital institutions on open networks, which in turn will make possible new sorts of privacy-friendly social ecosystems.
Rio Grande do Sul Participatory Budgeting Voting System (2014)
Within the open government debate, there is growing interest in the role of technology in citizen engagement. However, as interest in the subject grows, so does the superficiality of the conversations that follow. While the number of citizen engagement and technology events is increasing, the opportunities for in-depth conversations on the subject do not seem to be increasing at the same rate.
This is why, a few weeks ago, I was pleased to visit the University of Westminster for a kick-off talk on “Technology and Participation: Friend or Foe?”, organized by Involve and the Centre for the Study of Democracy (Westminster). It was a pleasure to start a conversation with a group that was willing to engage in a longer and more detailed conversation on the subject.
My talk covered a number of issues that have been keeping me busy recently. On the preliminary quantitative work that I presented, credit should also go to the awesome team that I am working with, which includes Fredrik Sjoberg (NYU), Jonathan Mellon (Oxford) and Paolo Spada (UBC / Harvard). For those who would like to see some of the graphs better, I have also added here [PDF] the slides of my presentation.
I have skipped the video to the beginning of my talk, but the discussion that followed is what made the event interesting. In my opinion, the contributions of Maria Nyberg (Head of Open Policy Making at the Cabinet Office) and Catherine Howe (Public-i), as well as those of the participants, were a breath of fresh air in the current citizen engagement conversation. So please bear with me and watch until the end.
I would like to thank Simon Burall (Involve) and Graham Smith (Westminster) for their invitation. Simon leads the great work being done at Involve, one of the best organizations working on citizen engagement today. And to keep it short, Graham is the leading thinker on democratic innovations.
Below is also an excellent summary by Sonia Bussu (Involve), capturing some of the main points of my talk and the discussion that ensued (originally posted here).
***
“On technology and democracy
The title of yesterday’s event, organised by Involve and Westminster University’s Centre for the Study of Democracy, posed a big question, which inevitably led to several other big questions, as the discussion among a lively audience of practitioners, academics and policymakers unfolded (offline and online).
Tiago Peixoto, from the World Bank, kicked off the debate and immediately put the enthusiasm for new technologies into perspective. Back in 1795, the very first model of the telegraph, the Napoleonic semaphore, raised hopes for – and fears of – greater citizen engagement in government. Similarly the invention of the TV sparked debates on whether technology would strengthen or weaken democracy, increasing citizen awareness or creating more opportunities for market and government manipulation of public opinion.
Throughout history, technological developments have marked societal changes, but has technological innovation translated into better democracy? What makes us excited today about technology and participation is the idea that by lowering the transaction costs we can increase people’s incentives to participate. Tiago argued that this cost-benefit rationale doesn’t explain why people continue to vote, since the odds of their vote making a difference are infinitesimal (to be fair voter turnouts are decreasing across most advanced democracies – although this is more a consequence of people’s increasing cynicism towards political elites rather than their understanding of mathematical probabilities).*
So do new technologies mobilise more people or simply normalise the participation of those that already participate? The findings on the matter are still conflicting. Tiago showed us some data on online voting in Rio Grande do Sul participatory budgeting process in Brazil, whereby e-voting would seem to bring in new voters (supporting the mobilisation hypothesis) but from the same social strata (e.g. higher income and education – as per the normalisation hypothesis).
In short, we’re still pretty much confused about the impact of technology on democracy and participation. Perhaps, as suggested by Tiago and Catherine Howe from Public-i, the problem is that we’re focusing too much on technology, tempted by the illusion it offers to simplify and make democracy easy. But the real issue lies elsewhere, in understanding people and policymakers’ incentives and the articulation (or lack thereof) between technologies and democratic institutions. As emphasised by Catherine, technology without democratic evolution is like “lipstick on a pig”.
The gap between institutions and technology is still a big obstacle. Catherine reminded us how participation often continues to translate into one-way communication in government’s engagement strategies, which constrains the potential of new technologies in facilitating greater interaction between citizens and institutions and coproduction of policies as a response to increasing complexity. As academics and practitioners pitch the benefits of meaningful participation to policy makers, Tiago asked whether a focus on instrumental incentives might help us move forward. Rather than always pointing to the normative argument of deepening democracy, we could start using data from cases of participatory budgeting to show how greater participation reduces tax evasion and corruption as well as infant mortality.
He also made a methodological point: we might need to start using more effectively the vast array of data on existing engagement platforms to understand incentives to participation and people’s motivation. We might get some surprises, as findings demystify old myths. Data from Fix My Street would seem to prove that government response to issues raised doesn’t increase the likelihood of future participation by as much as we would assume (28%).** But this is probably a more complicated story, and as pointed out by some people in the audience the nature and salience of both the issue and the response will make a crucial difference.
Catherine highlighted one key problem: when we talk about technology, we continue to get stuck on the application layer, but we really need to be looking at the architecture layer. A democratic push for government legislation over the architecture layer is crucial for preserving the Internet as a neutral space where deeper democracy can develop. Data is a big part of the architecture and there is little democratic control over it. An understanding of a virtual identity model that can help us protect and control our data is key for a genuinely democratic Internet.
Maria Nyberg, from the Cabinet Office, was very clear that technology is neither friend nor foe: like everything, it really depends on how we use it. Technology is all around us and can’t be peripheral to policy making. It offers great opportunities to civil servants as they can tap into data and resources they didn’t have access to before. There is a recognition from government that it doesn’t have the monopoly on solutions and doesn’t always know best. The call is for more open policy making, engaging in a more creative and collaborative manner. Technology can allow for better and faster engagement with people, but there is no silver bullet.
Some people in the audience felt that the drive for online democracy should be citizen-led, as the internet could become the equivalent of a “bloodless guillotine” for politicians. But without net neutrality and citizen control over our own data there might be little space for genuine participation.
*This point was edited on 12/07/2014 following a conversation with Tiago.
** This point was edited on 12/07/2014 following a conversation with Tiago.”
—————————
I am also thankful to the UK Political Studies Association (PSA), Involve and the University of Westminster for co-sponsoring my travel to the UK. I will write more later on about the Scaling and Innovation Conference organized by the PSA, where I was honored to be one of the keynote speakers along with MP Chi Onwurah (Shadow Cabinet Office Minister) and Professor Stephen Coleman (Leeds).
The ongoing Snowden revelations about NSA surveillance have all sorts of implications for the rule of law, constitutional democracy, geopolitical alignments, human rights and much else. The disclosures deserve our closest attention for these reasons alone. But what do these revelations have to do with the commons?
If we regard the act of commoning as a genre of citizenship – acts of voluntary association and action that are critical to human freedom and democracy – we can see that snooping by both the NSA and its corporate brethren is profoundly hostile to the future of the commons. It violates some fundamental notions of human rights, civil freedoms and the ability of individuals to protect their privacy and thus their sovereignty.
If the market/state apparatus can digitally monitor our reading habits and telephone calls, email correspondence and purchases, physical movements and much else, then it has effectively snuffed out the sovereignty of a free people. The barrage of the successive Snowden disclosures has been followed by a relentless government propaganda war, cable TV denunciations and even attacks on Greenwald by the liberal nomenklatura (Michael Kinsley, George Packer). It’s as if "respectable opinion" did not care to note or defend the elemental human freedoms that a functioning democracy requires.
It was such a pleasure therefore to (belatedly) encounter a series of four lectures delivered last fall by Eben Moglen, a law scholar and historian at Columbia Law School, founder of the Software Freedom Law Center, and former general counsel of the Free Software Foundation. The four talks -- "Snowden and the Future" -- offer one of the most eloquent and historically informed critiques of the Snowden revelations and their implications for freedom, democracy and – I would add – the capacity of people to common.