More than a year ago, in an article called How Internet structure affects content pluralism, I voiced some concerns that have since been confirmed and even surpassed by events. The most recent WikiLeaks action, known as “Cablegate”, demonstrates most vividly how vulnerable free speech and pluralism on the Internet actually are, and for which reasons; at the same time, it has implications for journalism and the media at large.
There is great public controversy over the merits of WikiLeaks’ latest coup, but I’ll leave that discussion to others. In this follow-up to my earlier post I will instead focus on the (infra-)structural lessons to be learned from the incident and from the responses to it, which have elicited a heated debate as well. While I am at it, I will also touch upon some other recent observations.
The WikiLeaks crisis
The WikiLeaks website was the target of several fairly large-scale Distributed Denial of Service (DDoS) attacks, as network specialist Craig Labovitz of the Internet security firm Arbor Networks has reported. DDoS attacks bring down a website basically by automatically calling it up from many places at once, at a higher rate than its server can manage. It is a bit like rush hour traffic, when too many people try to drive to their inner-city workplaces, thus effectively blocking each other’s progress to the point of standstill.
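The arithmetic behind this can be sketched in a few lines of Python. This is a purely illustrative simulation with made-up numbers, not a description of any real attack:

```python
# Illustrative simulation of why a DDoS attack works: a server can answer
# only a fixed number of requests per second, so a flood of automated
# requests crowds out legitimate visitors.

CAPACITY = 100  # requests the server can answer per second (made-up figure)

def served_fraction(legitimate, attack):
    """Fraction of legitimate requests still answered when the server
    picks requests at random from the combined incoming queue."""
    total = legitimate + attack
    if total <= CAPACITY:
        return 1.0
    # Each request, real or automated, has an equal chance of being served.
    return CAPACITY / total

# Normal day: 50 real visitors per second, no attack -- everyone is served.
print(served_fraction(50, 0))        # 1.0

# Under attack: the same 50 visitors drowned out by 100,000 bot requests.
print(served_fraction(50, 100_000))  # roughly 0.001 -- the site is effectively down
```

The point of the sketch is that the attacker never needs to break into anything; sheer volume is enough.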
It is very difficult to find out who is behind a DDoS attack, because such attacks are typically carried out indirectly, by way of a network of computers that have been hijacked with the help of computer viruses, often unbeknownst to their owners. Basically, anybody with advanced programming skills could be behind one.
The victim of a DDoS attack can hardly do anything but shut down their server and wait for it to pass. Or, as the Berkman Center’s Hal Roberts points out, they can turn to “only a couple dozen organizations … at the core of the Internet who have sufficient amounts of bandwidth, technical ability, and community connections to fight off the biggest of these attacks”. The powerful organisations Roberts refers to include companies such as Google (as well as their subsidiary YouTube), Facebook, Microsoft, Apple, Limelight Networks, Akamai Technologies, and Amazon. Indeed, WikiLeaks turned to Amazon, only to be switched off again after what Amazon says was a violation of their terms of service, and what others feel was political pressure from the US government.
In parallel, WikiLeaks was dropped by their Domain Name Service (DNS) hosting company EveryDNS.net, by PayPal, and by several credit card companies, also ostensibly for violations of the respective terms of service. A DNS service translates an address such as wikileaks.org into the numeric IP address of the server that actually holds the content. Even when DNS does not work, the server can still be reached, but only by those who know the hard-to-remember IP address.
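Conceptually, a DNS service is little more than a lookup table. The following Python sketch illustrates what dropping a domain does and does not accomplish; the addresses used are reserved documentation placeholders, not WikiLeaks’ real ones:

```python
# Conceptual sketch of what a DNS service does: map a memorable name to
# the numeric IP address where the content actually lives.
# (Addresses are illustrative; 203.0.113.0/24 is reserved for documentation.)

dns_records = {
    "wikileaks.org": "203.0.113.10",
    "wikileaks.ch":  "203.0.113.10",  # a mirror domain can point at the same server
}

def resolve(name):
    """Return the IP address for a hostname, or None if no record exists."""
    return dns_records.get(name)

# Normal case: the browser resolves the name, then connects to the IP.
print(resolve("wikileaks.org"))  # 203.0.113.10

# After the DNS provider drops the domain, the record is gone ...
del dns_records["wikileaks.org"]
print(resolve("wikileaks.org"))  # None -- the memorable name stops working

# ... but anyone who knows the raw IP can still reach the server directly.
print("http://203.0.113.10/")
```

This is why dropping a domain inconveniences visitors without actually taking the content offline.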
WikiLeaks themselves and the Internet community at large took several measures to mitigate both the DDoS attacks and the DNS switch-off. The actual “Cablegate” content is distributed independently of the WikiLeaks website by way of Peer-to-Peer (P2P) file sharing, meaning that it can be downloaded from a large number of distributed private computers all over the world rather than from one dedicated server. At the same time, WikiLeaks moved their website once again, while the corresponding IP address (http://188.8.131.52) was widely disseminated across the Web. WikiLeaks supporters also provided replacement DNS service using, inter alia, the domain wikileaks.ch, and made many copies of the WikiLeaks site available on their own web servers (which got some of them into trouble with their own providers).
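A brief sketch of why P2P distribution resists takedowns: the same file can be fetched from any of many peers, and a cryptographic hash lets the downloader verify that a copy is genuine regardless of which computer supplied it. This is the general technique, not a claim about WikiLeaks’ specific setup; the peers and content below are invented for illustration:

```python
# A file published via P2P can be fetched from any peer. The publisher
# announces a cryptographic hash of the genuine content, so every copy
# can be checked independently of its source.
import hashlib

published_sha256 = hashlib.sha256(b"cable archive contents").hexdigest()

# Hypothetical peers, each holding a copy of (what claims to be) the file.
peers = {
    "peer-in-iceland": b"cable archive contents",
    "peer-in-brazil":  b"cable archive contents",
    "malicious-peer":  b"tampered contents",
}

def download(peer):
    """Accept a peer's copy only if its hash matches the published one."""
    data = peers[peer]
    if hashlib.sha256(data).hexdigest() == published_sha256:
        return data
    return None  # reject forged or corrupted copies

print(download("peer-in-iceland") is not None)  # True  -- genuine copy
print(download("malicious-peer") is not None)   # False -- rejected
```

Shutting down any single server, or even most peers, leaves the remaining copies just as valid.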
Vulnerability by oligopoly
Anyway, and irrespective of the WikiLeaks case in particular (which is, in fact, only the most prominent example of a number of similar ones in the international arena), there are a few rather worrying points to be taken away from this example:
- Any website, unless it belongs to one of only a few dozen Internet hypergiant companies, can be brought down by DDoS or other cyber attacks without much chance of countermeasures;
- The victim of a DDoS or virus attack cannot really know who initiated it and why, since cyber attacks are extremely hard to trace and therefore often remain basically anonymous;
- While, in theory, anybody can turn to the hypergiants in order to securely host and distribute content, their service is very expensive and therefore beyond the reach of many people and organisations who publish websites;
- All hypergiants are private companies, most of them operating under US law, and they are free to decide of their own accord to whom they provide service, but they can just as well be put under pressure by government institutions, too.
Just to be clear: When I say government pressure, I am not referring to website shut-downs carried out by way of proper legal procedures. I have no objections when a website that violates the law is amended or closed, or its operators prosecuted, in due process. While there may be plenty of cases where a democratic review of the legal act concerned is advised, the rule of law must still be obeyed.
My concern is something else: There is an oligopolistic infrastructure emerging on the Web that facilitates the manipulation and exploitation of the public as well as censorship and obstruction of inconvenient content at a mere whim of a handful of private companies, or by stealthy government influence. In the words of Rebecca MacKinnon: “One politician has more power than ever to shut down controversial speech unilaterally with one phone call.”
For instance, as long as there are thousands of book stores and public libraries, it does not matter so much if a number of them decide not to carry a particular legitimately published book for whichever reason. A person looking for that book may have to put up with some inconveniences, but will still be able to obtain it. However, if there is nobody but Amazon left for a book author who needs a distributor, and Amazon removes that same book from its offerings, the book becomes effectively unavailable.
Now, it is certainly the privilege of Amazon as a private company to decide which items and services they want to sell. But are we really prepared to leave cases where pluralism and freedom of speech are at stake to the discretion of Amazon and their likes alone? Fortunately, we are still a few steps removed from a situation as dramatic as that; however, recent indicators suggest that we are drawing closer.
Utility regulation of Internet freedom?
This is the reason why Danah Boyd feels that once Internet services become indispensable and vital utilities, they should be regulated accordingly. Facebook, to quote a relatively innocuous example, decides what we get to know about our “friends” based on Facebook’s own self-interested algorithm, as Thomas E. Weber has demonstrated. Our personal preferences, or what our “friends” want us to become aware of, carry far less weight. Facebook has, of course, a business rationale, yet not one concerned with the public sphere. As Ethan Zuckerman put it, “we risk our freedoms of speech by allowing critical debates that should take place in the public sphere [to] be hosted on private platforms.”
At the same time, Facebook (as well as Google and a few others) has become so big and permeating that it is no longer just “a” social network, but “the” social network you cannot easily do without. Accordingly, companies such as Facebook should be held to public standards, as has been good practice with plenty of utilities for a long time, such as broadcasting, telephony, postal services, power supply, and so on.
I have argued before that sometimes it might actually be in the best public interest to allow one or a handful of actors to break the ground on a new technology or business model and let them grow into delivering a quasi-monopolistic universal service, and only at that point apply regulation to render them benign and competition-friendly. Such incumbents – salient examples are formerly state-owned telcos in Germany, France, and the UK – will typically remain dominant and profitable players, but they will no longer be able to stifle smaller competitors, or to prevent usage of their infrastructure for purposes they do not like. At the same time, they will have to comply with high standards of security and privacy.
But there is an alternative remedy as well – though one that is way more difficult to achieve than regulation of the big players. In Germany, a similar notion is anchored in the principles of public broadcasting. Broadly speaking, the German constitution says that it is the responsibility of the state to make sure there is a well-functioning pluralistic media sphere that allows citizens to exercise their democratic rights. If the market provides sufficient choice and variety, that is fine. If not, the state must pro-actively set up appropriate media. However – and this is the particular beauty of the approach –, such media may be neither financed nor controlled by state or government, but only directly by society.
Fortunately, on the Internet we may no longer need this kind of pretty heavy-handed action involving the passing of appropriate laws, the establishment of a complicated bureaucracy, and the implementation of representative mechanisms of control. On the web, it is possible to launch bottom-up projects, or projects that empower users and members by way of their very architecture, as Lawrence Lessig made a compelling case for in his groundbreaking book “Code and Other Laws of Cyberspace”. Existing examples are the fledgling open source social networking software Diaspora, or ProjectVRM, which aspires to hand back to consumers control over the information generated by their online purchasing activities.
What happened to WikiLeaks and others is a major and up to now widely ignored risk of the highly touted trend towards Cloud computing. The “Cloud” basically means that rather than storing and processing your personal data on your own computer, you entrust your data to service providers and access and manipulate it via broadband connections whenever necessary. In a way, this is much safer than having everything on a hard drive in your home, which may get stolen or just simply break.
On the other hand, you relinquish control over your data, since you have no way of telling where exactly it is located, what the hosting company does with it (for instance in order to sell products or target you with advertising), or whether the country where the data storage company is registered applies the same privacy and data protection rules as your home country. Apple’s App Store is a famous example where content is judged by US decency standards, which may often appear somewhat prudish to Europeans, and by whether it may be detrimental to Apple’s business interests.
Your Cloud data may be scrutinised or purged without your knowing, or even taken offline. The ongoing and highly visible controversy over Facebook is only the tip of the iceberg. Columbia law professor and free software advocate Eben Moglen has suggested that as an antidote “we need a really good web server that you can put in your pocket and plug in any place. It shouldn’t be any larger than the charger for your cellphone… You can always tell what’s happening in your server and if anybody else wants to know they can get a search warrant.”
Please see also Craig Labovitz: The Internet Goes to War (Security to the Core) for a diligent analysis of and statistics about DDoS attacks, Ethan Zuckerman: New Berkman Paper on DDoS – silencing speech is easy, protecting is hard on the wider implications of DDoS, as well as Jonathan Zittrain: The Personal Computer is Dead (Technology Review) on the subtle power shift actuated by the manufacturers of operating systems.
Permalink for this article: http://wordpress.karstens.eu/wikileaks-the-cloud-and-internet-pluralism-a-roundup-of-emerging-lessons-learned/