Writings on technology and society from Wellington, New Zealand

Thursday, December 31, 2009

Why censoring the Internet won’t work

Governments around the world are trying to come to grips with the fact that the Internet allows unfettered communication between individuals. This is perceived as a threat in almost all societies, and it leads to “moral” arguments for controlling people’s access to, and activities on, the Internet. It’s hard to draw a firm line globally about what is moral to suppress and what is not, unless you take the view that sharing any kind of information is acceptable under any circumstances. I don’t take that view; some things are, in my view, reprehensible or harmful, and I am happy that my government tries to deal with them. The main area that comes to mind is child abuse images (CAI), a.k.a. child pornography. However, agreeing that governments have the right to control some kinds of information on the Internet does leave us open to the “slippery slope” argument, which we have already seen in operation in Australia, where the government tried to censor access to the public-information site Wikileaks because it published a list of sites already censored by the Australian government.

There are various measures available to Internet censors. China, for instance, runs the so-called “Great Firewall” – a single point of access for all Internet traffic entering and leaving the country. Centralized national firewalls offer a high level of control, but they struggle with encrypted traffic (and a lot of Internet traffic is routinely encrypted). Almost invariably, they have to block far more material than their stated purpose requires, just to be sure: you can’t allow free access to Google if you don’t want your population even to be able to search for certain concepts. Another issue is that the engineering for such a firewall is demanding, since it must pass a great deal of traffic very quickly while filtering out the “bad” stuff. Finally, there needs to be staff dedicated to maintaining the filter: adding new sites, perhaps removing old ones, and generally dealing with the issues it throws up.
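To see why encryption pushes a censor towards over-blocking, here is a minimal sketch in Python. Everything in it is invented for illustration (the blocked term, the traffic samples); no real firewall works this simply, but the trade-off is the same: traffic the filter cannot read must either all pass or all be blocked.

```python
# A toy content filter, illustrating the over-blocking trade-off.
# BLOCKED_TERMS and the traffic samples below are hypothetical.

BLOCKED_TERMS = {"forbidden-topic"}

def should_block(payload: bytes) -> bool:
    """Naive keyword filter of the kind a national firewall might apply."""
    try:
        text = payload.decode("utf-8")
    except UnicodeDecodeError:
        # Encrypted (or otherwise unreadable) traffic can't be inspected,
        # so a cautious censor is pushed towards blocking it wholesale.
        return True
    return any(term in text.lower() for term in BLOCKED_TERMS)

print(should_block(b"an article mentioning a forbidden-topic"))  # True
print(should_block(b"a harmless page about gardening"))          # False
print(should_block(b"\xff\xfe\x80 (looks like ciphertext)"))     # True
```

The last case is the important one: the bytes are simply not valid text, so the filter has no idea what they carry and blocks them anyway.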

A more limited technical measure is to control the Domain Name System (DNS) within the country. This means that people typing the address of a “bad” site into their browser would instead get a page saying “naughty naughty” or some such. But if they knew the IP number to go to – and it wouldn’t be hard for a determined person to find it – they could evade this form of censorship altogether. This technique would involve its own engineering challenges, as well as the problem of managing the list of bad sites.
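A rough sketch of why this is so easy to get around, using made-up domain names and the 192.0.2.x addresses reserved for documentation (nothing here reflects any real resolver):

```python
# Toy model of DNS-based censorship. All names and addresses are
# hypothetical; 192.0.2.x is reserved for documentation examples.

BLOCK_PAGE = "192.0.2.1"
REAL_ADDRESSES = {"bad-site.example": "192.0.2.50"}
BANNED = {"bad-site.example"}

def censored_lookup(hostname: str) -> str:
    """A national resolver that answers with the block page for banned names."""
    if hostname in BANNED:
        return BLOCK_PAGE
    return REAL_ADDRESSES.get(hostname, BLOCK_PAGE)

def connect(ip_address: str) -> str:
    """Stand-in for opening a connection; just reports where we went."""
    return f"connected to {ip_address}"

# An ordinary user types the name and lands on the "naughty naughty" page:
print(connect(censored_lookup("bad-site.example")))  # connected to 192.0.2.1

# A determined user who already knows the IP never asks the resolver at all:
print(connect("192.0.2.50"))  # connected to 192.0.2.50
```

The censorship lives entirely in the lookup step, so anyone who skips the lookup skips the censorship.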

And deciding what gets blocked is the core problem with automated, technical measures like the two described above. There’s no way for the general public to inspect the list of what gets blocked: if you publish the list, you are just publishing a directory of sites that you don’t want people to visit. If you don’t publish the list, there is no way to hold governments to account for blocking only CAI (or whatever else they have said they will block). The list can and will expand, for several reasons: incompetence, as in the case of the Queensland dentist’s site blocked by the Australian filter; a desire to protect the filter itself (Wikileaks); and an extension of what we regard as repugnant or harmful but don’t necessarily want a public debate about.

There is another technique that governments use to control what people do on the Internet. That is, simply, to watch what is going on within their country and apply real-world sanctions to people breaking the law. All countries do this to a greater or lesser extent. In New Zealand, for instance, the Department of Internal Affairs looks for images of child abuse (i.e. child pornography) and prosecutes people involved in making or trading them. The recent charges brought against a blogger for allegedly breaking a suppression order are another example. This approach seems the natural one for an open society like New Zealand to take. It relies on humans to detect and discern illegal activity rather than machines. That’s how our court system works. It’s also how law enforcement works. We don’t require people to have licences for cameras; of course not, cameras are widely used for a variety of entirely legal purposes. We prosecute people who use cameras to break the law. It should be the same for computers and the Internet.

To summarise: filtering the Internet is problematic technically, but most of all it is incompatible with a democratic open society. Prosecute the wrongdoers but leave the Internet alone.

posted by colin at 4:54 pm  

1 Comment

  1. Yes indeed, a democratic and open society. Yet one of your most vociferous correspondents has recently asked you to police your own site – by identifying me and “doing something about it”. And why is that?

    It’s simply because my name is Ivor Hughes – and so, allegedly, is his. Not only is he an expert in everything, he also demands the censorship of anyone whose parents had the temerity to give their son the same name as his.

    Mind you, he may be younger than I. Then does he expect all “Ivor Hugheses” (sorry don’t know the proper collective noun) to give way to him?

    What about the IH who is a lay minister in England? Or the tragically killed speedway rider? Or the extraordinarily generous philanthropist? Or the photographer? Or the patents lawyer? Or the fishing tutor? The schoolboy who won the 100m? And all the others on the web?

    The fact that he automatically assumes that he is the only IH in the world entitled to use his own name tells you an awful lot about him.

    Comment by Ivor Hughes — 14 March 2010 @ 10:05 am
