Widely-distributed hoax messages sent on the app, which Facebook owns, have been blamed for more than a dozen lynchings across India. Facebook has deployed a series of new features in recent weeks to try to fix the problem.
The latest feature, rolled out last week for testing, uses a red label to warn users that a link they’ve been sent is suspicious.
The changes to WhatsApp are part of a much larger effort by Facebook (FB) to get a grip on its role in spreading misinformation.
Facebook’s stock plunged 19% on Thursday after executives warned that revenue growth would slow as the company focuses on user privacy, vaporizing $119 billion in market value.
“Facebook is currently facing a global problem of trust,” said Nikhil Pahwa, co-founder of India’s Internet Freedom Foundation. “Regulators do not trust them at this point in time because they have allowed certain activity to fester without dealing with it.”
Facebook previously attracted the attention of the Indian government in 2016, when it was criticized for offering a free internet service that connected to only a limited number of websites (including Facebook). Called Free Basics, the program was banned by India’s telecom regulator because it violated net neutrality.
Attention has since turned to WhatsApp, which Facebook purchased in 2014.
WhatsApp has introduced at least four new features over the past month that are designed to combat the mass messaging of rumors that have fueled mob violence and killings in India, the service’s largest market with over 200 million users.
According to WhatsApp’s website, its latest test feature “automatically performs checks to determine if a link is suspicious” and advises users to exercise caution when receiving and opening links.
WhatsApp previously started labeling messages to indicate they’ve been forwarded rather than composed by the sender. It’s also testing limits on how many chats (individual or group) a message can be forwarded to at once: 20 for the rest of the world, five for India.
Newspaper ads, which contain tips like “check information that seems unbelievable” and “be thoughtful about what you share,” began appearing in national and regional newspapers across nine Indian states earlier this month.
Mob killings, sparked by rumors of child abduction, have continued. The most recent took place two weeks ago, after some of WhatsApp’s new features were rolled out.
WhatsApp declined to comment for this story, but said in a letter to the government earlier this month that it was “horrified by these terrible acts of violence” and that it would “work proactively to prevent misuse” of its platform.
The Indian government has accused Facebook and WhatsApp of not doing enough to combat the spread of misinformation.
“Social media platforms … are being abused as a vehicle for weaponization of information,” Ravi Shankar Prasad, India’s technology minister, said Thursday in parliament.
Prasad singled out Facebook’s handling of Indian user data related to the Cambridge Analytica scandal and WhatsApp’s role in lynchings.
He recognized the steps WhatsApp has taken to fight the spread of rumors, but said the measures were “not adequate to meet the challenges of the situation.”
Pahwa, who helped lead the fight against Free Basics, welcomed WhatsApp’s latest move to flag suspicious links but said it doesn’t go far enough.
“You see a manifestation of the same problem on Twitter — people only see the text and they don’t click on the link,” he said. “I don’t really see it addressing issues of misinformation.”
WhatsApp should allow users to choose whether a message can be forwarded, and label every forwarded message with an attribution to the original composer, Pahwa suggested.
Pressure from the Indian government is unlikely to ease anytime soon, particularly ahead of national elections in 2019.
WhatsApp is already working with India’s Election Commission and various political parties to “prevent misuse” of the platform, a company spokesperson said in a statement.
“In India the government seems very concerned about the elections and the impact that misinformation and disinformation are having on elections,” Pahwa said. “My sense is that they are looking to act very quickly and regulate platforms like Facebook.”