WhatsApp & other Platforms are not doing enough to Combat Fake News, Rumours

It’s society, not social networks. It’s government, not companies. That’s been the sum and substance of many commentators’ arguments as they have responded to lynchings engendered by rumours passed around in WhatsApp messages.

Such a view is not wrong, not by a long shot.

It’s true that if a group of people choose to believe rubbish and act violently on it, the ultimate responsibility for any horrors perpetrated is theirs.

It’s also true that in many lynching cases policing has been ineffective. Reportage in the national media has established that local police have (a) often been outnumbered by and felt powerless against mobs, (b) sometimes failed to gather intelligence on dangerous rumours and (c) in a few cases exhibited an appalling lack of urgency, which seems best explained by the fact that some victims were ‘outsiders’, low-income migrants, for example.

True, too, are observations by commentators that toxic social behaviour is often a close cousin of toxic politics. And that political toxicity has increased recently.

Equally valid are arguments that the Indian state is prone to censuring and hectoring carriers of information, whatever the technology employed, because that’s easier than, say, fixing law and order, and also because the private sector makes for good villains in the eyes of almost all governments.

But while all that’s true, this proposition is not true: that companies that provide communication services that allow instantaneous spread of vicious and false rumours have no responsibility at all.

WhatsApp in this case, and social networks in general, have a responsibility to make their networks less hospitable to peddlers and merchants of falsity, fakery and downright dangerous content.

No one is arguing that a messaging platform should build a system in which only “good” people use it, and only for “good” purposes. That’s impossible, and not even desirable, because pre-defining what’s “good” is a road that inevitably leads to illiberal choices.

The likes of WhatsApp, though, can and must do more, and commentators who say otherwise are being at best naive. In fact, even the companies offering these services don’t seem to agree with pundits who say they are not to blame.

As has been reported, WhatsApp is working on labelling forwarded messages so that users can at least be aware that the sender may just be passing on something he received. It’s also reportedly developing machine learning capabilities to identify false or dangerous messages.

Facebook is reportedly hiring what it calls public policy experts and employing people and technology to kill fake profiles, which are often sources of fake news/dangerous rumours. It’s also preventing pages that are habitual hosts of false information from carrying advertising.

Little wonder, then, that Facebook has reportedly promised the Election Commission of India that it will employ fact checkers to weed out fake news aimed at influencing voter decisions during elections, especially the next Lok Sabha polls. Note, too, that Facebook is voluntarily offering to run these checks and block false information.

All this is good. But it’s nowhere near enough.

That Facebook, which also owns WhatsApp, is even doing this much is of course thanks to the big scandal it got mired in over data privacy and misuse of personal information.

That WhatsApp is responding to tragedies like lynching is because it feels that another scandal, in the country that hosts its biggest user base, may put it in serious trouble with authorities.

But as technology reporters have noted, WhatsApp barely has a management team in India, even though 200 million Indians use the messaging service. Of course, technology services can be run remotely with very few local hires. But that model doesn’t work when your technology service is front and centre in several public tragedies.

WhatsApp doesn’t have a public policy team in India, media reports have said. That’s a good indicator of how far the messaging service is from behaving like a responsible corporate citizen.

Technology experts have pointed out that social networking sites created the problem of fake news not by inventing fake news, but by (a) creating transmission technology that allows super-quick spread of dangerous rubbish and (b) taking away business from mainstream media, which generally weeds out ridiculous falsities, while not investing enough in fact checking themselves.

What’s being promised now is not enough given the scale of the problem. Studies have shown that correcting and/or debunking a piece of fake news/dangerous rumour online takes an average of 12 hours.

In 12 hours, a dangerous rumour can travel online several times around the world. It’s also established that, online, the volume of fact-checking content is far less than the volume of fake news and rumour.

There are also behavioural issues: studies have demonstrated that people tend to circulate falsities far faster than facts, especially when such false information conforms to their biases.

Given all this, corrective efforts by technology companies have only begun. They need to invest a whole lot more, in human resources and technology, to combat the flood of trash that flows through their communication channels.

It’s not just the message, it’s also the medium.
