Colin Steer | Who should vet the Internet?

Published: Sunday | August 21, 2016 | 12:00 AM

In the mid-1990s when Internet usage was just moving past the toddler stage in Jamaica, before Facebook and Twitter and prior to our phones becoming smart, I wrote an article for the Sunday Observer titled 'Libel in cyberspace' examining the potential dangers of Internet postings, especially by anonymous writers.

Some of the articles being circulated on the Internet and in discussions on message boards at the time - blogs were a distant dot in the creative imagination - had begun to spawn more questions about the adequacy of libel laws in determining jurisdictions, the extent of the damage caused, and who should or might be held primarily responsible when something potentially libellous was published. Should it be the owners of the medium through which the comment was aired? Should it be the Internet service provider? And how far could damages be pursued when an article had circulated the globe, and where different laws govern different territories? Since then, there have been many scholarly articles published, and even specialist media courses, addressing the subject globally.

The points of reference for that article were two incidents, independent of each other, involving Sunday Gleaner columnist Dawn Ritch and talk-show host Wilmot Perkins. In the first case, Ritch had carried, as part of one of her columns, a sanitised version of a parody of Psalm 23 which made fun of then Prime Minister P.J. Patterson and which was circulating on the Internet. A few days later, a caller to Perkins attempted to read the balder Internet version on his show, but before the punch line could be delivered, it appeared that the studio technicians, or 'Motty' himself, pressed the seven-second delay/dump button, preventing the material from actually being aired. In referring to the two incidents, I noted that their own presence of mind, and perhaps the gatekeepers and systems, had guided both the Gleaner columnist and the radio station past potentially costly litigation.

It was noted, too, that with the new Internet phenomenon, there would be fewer and fewer avenues of intervention to stop stuff from being launched into cyberspace that could do much damage to the reputations of companies and individuals.

Fast-forward almost 20 years and the dangers loom larger than ever - even if the concerns, as was demonstrated this past week in the controversial LASCO employee posting about one of our Olympians, are centred not primarily on libel but on bad judgement and poor taste. In today's world, where traditional media are trying to keep pace with the social-media sprinters by allowing commentary sections under articles, there are many inconsistencies and uneven standards. The New York Times is a model of civil, yet often hard-hitting, discourse. CNN, Yahoo and others allow profanity, sometimes barely fig-leafed, and often outrageous racist comments in the name of free speech.

Locally, the newspapers routinely do the usual vetting for profanity, vulgarity and libel, yet stuff still slips through. I have seen in one of our newspapers a comment that a public figure, who was being criticised, had made it as far as she had "only by working hard on her back". There was the case of the publication of the wedding of a young couple, where an anonymous post was allowed suggesting that the woman was, essentially, a whore. That word was not used, but it was implied.

These are instances where one would have expected the gatekeepers to do some vetting. Ironically, as a friend complains to me sometimes, mild critiques of media stories are routinely held as 'pending' or not published at all.

Of course, publishing material by anonymous writers is not new. Newspapers have been doing that for decades, but the writer(s) would be known to at least one editor. In cyberspace, anything goes. Profane cussing out and unchecked or uncorroborated information comprise the new normal.

But the challenges are not just for traditional media. When a private company opts to create a Facebook page, for example, and then allows comments tangentially related to the sector in which it operates to be published, does it have any responsibility to filter what comes up on its own page? I saw a recent example.

Company A has on its Facebook page a comment from an identifiable writer complaining about Company B, which he described as behaving like "some dutty whoring gyal who tek you money and don't deliver the service". I brought this to the attention of someone I knew at Company A and questioned whether that should be there, however valid the complainer's concerns. I was told they could not censor social-media commentary. Perhaps. But it could be deleted from the page, no?

Several writers here and overseas have already pointed out that social-media platforms are often a cesspool of bile appealing to our baser instincts, where anonymity enables vulgarity and degeneracy to thrive.

Public institutions, on the other hand, can tamp down on the worst aspects of this by developing and implementing clear guidelines about what is posted on their various pages and by ensuring that, even where linked articles are embedded, there are gatekeepers to police their sites.

As for Facebook and Twitter, except for the obvious promotion of terrorism and crime such as child pornography, these are largely self-policing. It is really up to the individual, guided by his or her own sense of values and judgement, to determine what is put in the public space for wider consumption.

Public- and private-sector institutions have to be more vigilant in policing their Internet usage policies, and more of us need to heed the old Jamaican adage: not everything good fi eat good fi talk.

- Colin Steer is a communications specialist. Email feedback to and