Sunday, February 27, 2011
Who is liable for prosecution for uploading objectionable content onto social networking sites?
No one, it seems. There is not even a crisp, globally accepted definition of what one may consider "objectionable", which I believe comprises offensive, abusive and obscene content, because interpretations of these three terms vary vastly.
User-generated content, typically posts, videos and music, has grown so significantly that one large social networking site had over ten billion content items uploaded last year. Typically around 5% is considered offensive, comprising pornographic material, hate posts and pictures, and derogatory remarks against individuals, religious figures and politicians.

In most countries, ISPs and content or social networking sites are not liable if they have adequate mechanisms to educate users on what content can be uploaded, and to monitor and filter objectionable material. This is not an easy task for global sites, as the definition of objectionable content such as pornography differs by country; even if a pornographer is legally distributing pornography, the person receiving it may not be legally receiving it. The second part of the problem, and the larger social challenge, is dealing with abusive content targeting individuals or small sections of society.
Last week, near my residence, a mob of 500 protestors blocked roads and pelted vehicles with stones because an unknown user had created a defamatory page on Facebook. It is quite common to find abusive posts and defamatory fake profiles created to settle grudges. There are other forms of objectionable content, posted unwittingly, that violate an individual's data protection rights, such as a video of a child being cyberbullied in school.
Prosecution is hampered by the relative anonymity social media sites provide and by limitations in global cyber law, even in cases that involve the use and distribution of child pornography. This does not mean that such content should not be promptly removed. Yet, as I saw in the situation where riots took place outside my residence, the removal process is in no way speedy: a court order had to be obtained and faxed to Facebook to shut down the page. I believe social networking sites should have a quick method to mitigate the issue, such as temporary suspension of the offending page, though I in no way recommend the role of a cyber moderator or the shutdown of entire sites because a section of people protest, or because politicians demand it. Social networking sites should also take responsibility for the trauma of an aggrieved individual, because they allow anonymous users to create such profiles.
What can be done if a fake profile or objectionable material is posted against users on social networking sites?
Most sites have a reporting mechanism. Facebook, for example, allows users to report others who violate Facebook's Statement of Rights and Responsibilities by clicking the "Report/Block this Person" link in the bottom left column of the profile. Users can report profiles that impersonate them, use their photos, list a fake name or do not represent a real person, as well as abusive posts, improper images, nudity, illegal drug use, terrorism and cyber harassment. There are, however, no statistics on the effectiveness of these measures or on the steps followed once a report is made.
Some countries that have specific laws relating to objectionable content have blocked these sites entirely to prevent users within the country from browsing such content.
In the future, social networking sites should institute mechanisms to verify users prior to registration. It may not be a welcome idea, as it adds extra costs and slows the growth of subscribers to these sites. In the long run, however, the online world will cease to be as anonymous as it is now, and a beginning is needed.
Secondly, if objectionable content is quickly removed, it will reduce the motivation of individuals to perpetrate such acts. This removal process should be simple, quick and region-specific.