How Internet structure affects content pluralism

Two current Internet trends give food for thought.

First, the collateral effect of Search Engine Optimisation (SEO), i.e. the efforts of website operators to cater specifically to the way services such as Google analyse and later publicly reference the contents of their sites, in order to attract as many visitors to their offerings as possible. One rather unsurprising SEO method is to make a lot of original content available, because – obviously – a wider choice and more topics mean that a greater number of people will find (through Google and others) something useful on the website and thus visit.

This may seem a platitude, but many media websites have long adhered to the old news paradigm that only fresh, up-to-date and very recent content is worthwhile. Consequently, they often either deleted older material from their sites or moved it behind a paywall and into an archive, which effectively removed it from search engines’ field of vision. Since most websites obtain a major share of their total traffic through Google, Bing and Yahoo, a reduction in the content accessible to search engines meant fewer visitors and page views.

While this correlation had long been treated as a tenet by many webmasters (including the manager of ejc.net), quite a number of media websites have realised it only relatively recently. The result is a growing abundance of content. It is becoming ever easier to retrieve material on historical topics, or to research the history of a topic using trusted, quality sources. Incidentally, this goes to show the absurdity of slapping a pull date on – of all things – online content created by public service broadcasters, as recent German media legislation does.

The second trend: according to the US firm Arbor Networks, peer-to-peer (P2P) traffic on the Internet is on the decline. P2P technology is used to share files in a distributed fashion: rather than holding a file on a single server and making everybody interested in it download it from that one location, many computers store copies of the file (or parts thereof) and share them in an ad-hoc network. Anyone downloading the file can thus draw from many sources at the same time, and Internet traffic is spread more evenly.
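To make the principle concrete, here is a minimal Python sketch of the idea – not a real P2P protocol such as BitTorrent: the file is cut into chunks, and the downloader pulls different chunks from several peers at once before reassembling them. The peer addresses and the fetch_chunk() helper are hypothetical placeholders.

```python
# Minimal sketch of distributed, chunked downloading.
# Peer names and fetch_chunk() are hypothetical, not a real protocol.
from concurrent.futures import ThreadPoolExecutor

PEERS = ["peer-a.example", "peer-b.example", "peer-c.example"]  # invented
NUM_CHUNKS = 12

def fetch_chunk(chunk_id: int, peer: str) -> bytes:
    # Placeholder: a real client would open a connection to the peer
    # and request this particular chunk of the file.
    return f"<chunk {chunk_id} from {peer}>".encode()

def download_file() -> bytes:
    # Spread chunk requests across peers so no single server carries the load.
    with ThreadPoolExecutor(max_workers=len(PEERS)) as pool:
        futures = [
            pool.submit(fetch_chunk, i, PEERS[i % len(PEERS)])
            for i in range(NUM_CHUNKS)
        ]
        chunks = [f.result() for f in futures]
    return b"".join(chunks)  # reassemble the chunks in order

if __name__ == "__main__":
    data = download_file()
    print(len(data), "bytes reassembled from", len(PEERS), "peers")
```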

P2P has gained a bad reputation because it was one of the pillars of illegal music and movie sharing, but it is actually a pretty intelligent and cheap way to distribute any kind of file to a large audience in a decentralised fashion.

However, the web has changed. Available bandwidth has increased significantly, so saving traffic no longer has the urgency it had a few years ago. On the other hand, the structure and business approach of the web and entertainment industries have changed as well.

Music companies, TV stations and the like make their content freely available not for download, but for listening to or watching online. In this way, they retain control over their products while making money through subscriptions and advertising. If a person repeatedly listens to a piece of music on her mp3 player, the original producer does not earn anything extra. But each time that same person logs on to a website to listen to the same piece, she is forced to watch commercials in the process and hence directs some revenue to the industry. Users accept this because it is both legal and convenient – no organising your own music archive, no downloading, no pangs of a troubled conscience. Everything is instantly available.

This development coincides with a process of concentration on the web. According to the abovementioned Arbor Networks study, a mere 30 large companies now attract around 30 percent of all Internet traffic. Among them are the usual suspects such as Google, Facebook, YouTube and Microsoft, but also businesses less well known to the general public, such as Limelight Networks or Akamai Technologies, major Content Delivery Network (CDN) providers. CDN firms set up powerful server farms and high-capacity data connections around the world to distribute files (including, for instance, online video) efficiently on behalf of business customers. The basic principle is to host copies of any content in high demand as close as possible to the location of the end customer, thus avoiding costly and slow long-distance transmissions.
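As a rough illustration of that principle, the following Python sketch simply serves each user from the replica with the lowest assumed latency. The edge server names, regions and latency figures are invented for the example; real CDNs rely on DNS steering and live network measurements rather than a hard-coded table.

```python
# Toy illustration of the CDN principle: the same content is replicated on
# edge servers in several regions, and each request is served from the copy
# "closest" to the user. All names and numbers below are invented.

EDGE_SERVERS = {
    "eu-west":  "edge-ams.example.net",
    "us-east":  "edge-nyc.example.net",
    "ap-south": "edge-sin.example.net",
}

# Hypothetical round-trip times in milliseconds from a user's region
# to each edge location.
LATENCY_MS = {
    ("eu-west", "eu-west"): 8,    ("eu-west", "us-east"): 85,   ("eu-west", "ap-south"): 160,
    ("us-east", "eu-west"): 85,   ("us-east", "us-east"): 6,    ("us-east", "ap-south"): 210,
    ("ap-south", "eu-west"): 160, ("ap-south", "us-east"): 210, ("ap-south", "ap-south"): 9,
}

def pick_edge(user_region: str) -> str:
    # Serve the request from whichever replica is nearest to the user,
    # instead of shipping every byte from one distant central server.
    best_region = min(EDGE_SERVERS, key=lambda r: LATENCY_MS[(user_region, r)])
    return EDGE_SERVERS[best_region]

if __name__ == "__main__":
    for user in ("eu-west", "ap-south"):
        print(user, "->", pick_edge(user))
```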

At the same time, the CDN providers, as well as the huge Internet companies, route an increasing share of online traffic directly among themselves, or keep it within their own networks in the first place, without using any intermediary third-party communication infrastructure. In the long run, this renders independent backbone and Internet access providers more and more obsolete. While we have ever more content at our fingertips, access to this content is controlled by ever fewer “hyper giant” Internet companies.

Of course, in many cases this makes perfect sense and constitutes a welcome improvement in Internet service. Online video or security-enhancing software updates would not be nearly as easily obtainable as they are today without the CDNs and the hyper giants – Google, to name just one. These companies provide amazing tools, applications and added value to the web.

The current concentration and consolidation processes affecting the infrastructure of the Internet make it more efficient, yet at the same time more vulnerable. Most service providers are private companies, and there is no telling whether any of them might go out of business for whatever reason. The financial crisis has taught us that seemingly indestructible and highly reputable banks can go bust within days. Management failure, major legal problems, buyouts by competitors, or the bursting of a new “dot-com bubble” could impact the Internet economy in a similar way.

Another issue is the risk of intentional restriction of certain uses of the Internet. This may happen under political pressure (just think of the controversy around Google China), in order to protect disputed copyrights (as, for instance, in Viacom vs. YouTube), or simply by way of bandwidth management and traffic prioritisation. The latter two are good examples of inherently Janus-faced technologies: they are necessary and desirable to ensure that a network functions to the best of its capacity and does not break down, but they can just as easily be used to obstruct unwanted services. Imagine that you could no longer access iTunes because your Internet access provider would rather you used its own music shop, or that Skype stopped working because your provider happens to be a phone company anxious not to lose business.
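The toy Python sketch below, with invented traffic classes and weights, shows how thin that line is: the same priority table that puts voice calls ahead of bulk downloads could just as easily push a rival service to the back of the queue.

```python
# Deliberately simple sketch of traffic prioritisation. The traffic classes
# and priority values are invented; real networks use far more elaborate
# queueing, but the Janus-faced potential is the same.
import heapq

# Lower number = forwarded first. An operator tuning this table for good
# service could also quietly starve a disfavoured service (here, a
# hypothetical "rival-voip" class).
PRIORITY = {"voice": 0, "web": 1, "video": 2, "bulk": 3, "rival-voip": 9}

def schedule(packets):
    """Yield packets in the order a priority scheduler would forward them."""
    queue = []
    for order, (service, payload) in enumerate(packets):
        prio = PRIORITY.get(service, 5)
        heapq.heappush(queue, (prio, order, service, payload))
    while queue:
        _, _, service, payload = heapq.heappop(queue)
        yield service, payload

if __name__ == "__main__":
    traffic = [("bulk", "p1"), ("rival-voip", "call"), ("voice", "call"), ("web", "page")]
    for service, payload in schedule(traffic):
        print(service, payload)
```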

The recent structural developments have dramatically improved Internet performance in terms of choice, speed and quality of service, not to mention legitimacy. Nonetheless, it might be time to think about safeguards for alternative, open and resilient networks with a multitude of independent, distributed hubs and relays (which was, by the way, the very idea that originally led to the development of the Internet), and to revitalise P2P. We should also start to reassess Internet infrastructure and content repositories as a public service that must not be entrusted to the business interests of a handful of companies alone.

The ongoing debates over net neutrality and the implications of the Google Book Settlement are good starting points from which to explore a much wider range of issues in the grey area between content, technology and infrastructure – issues that deserve a bigger share of public attention.

This article was originally published on the website of the European Journalism Centre (EJC).



