
The deadly Facebook cocktail: Hate Speech & Misinformation in Sri Lanka


Following the deadly Easter Sunday bombings, access to social media was blocked. The only way we could get around it was by using a VPN. Then, on the morning of the 1st of May 2019, this ban was lifted. Only 12 hours later, Facebook held the keynote of its annual F8 developer conference. In the days that followed, we were subjected to social media blocks again following episodes of mob violence. The justification for these blocks has been to combat hate speech and misinformation. So what do the announcements from F8 this year mean for us in light of these social media bans?

The future is private?

“We don’t exactly have the strongest reputation on privacy right now, to put it lightly. But I’m committed to doing this well and starting a new chapter for our product,” said Mark Zuckerberg – CEO of Facebook – during the opening keynote. He’s right about that. The past few years have seen the company deal with multiple scandals, ranging from data breaches to election interference.

“The future is private,” said everyone at F8 (Image credits: Digitrends)

So it’s not surprising that practically everyone taking the stage at F8 said, “The future is private.” But outside the euphoric halls of F8, few believe these words anymore. Earlier this year, we saw the departure of Chris Cox from Facebook. He served as Chief Product Officer and is widely considered the architect of the News Feed. His departure came shortly after Mark Zuckerberg unveiled a new philosophy for the company in a 3,200-word blog post.

Describing this new philosophy Mark Zuckerberg said, “I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won’t stick around forever.” In other words, the company will move away from the iconic News Feed. Instead, it will invest its efforts in private messaging.

Redesigning for groups, events, and friends: FB5

“This is the biggest change we’ve made to the Facebook app and site in five years,” is how Mark Zuckerberg described it. At F8 2019, he announced that the entire Facebook app would be redesigned. Dubbed FB5, this redesign would focus more on Groups and Events. The update immediately landed on iOS and Android. A desktop version is expected to arrive in the coming months.

The new update puts a dedicated tab for groups in the center of the app. Inside this tab, you’ll find a feed with updates from all the groups you’re in. There’s also a discovery tab for new groups you might be interested in. Over time, Facebook has stated that it aims to make groups more integral across other parts of the app.

The redesigned Facebook, focused on groups and events (Image credits: TechCrunch)

Alongside the Groups tab will be another for Events. This updated feed aims to give you a better idea of events around you and help you find ones you might like. Facebook also announced a new feature called Meet New Friends. As the name suggests, it aims to help you find new friends who share your interests, expanding on the existing People You May Know feature.

This is the future of Facebook. Groups where you would ideally make real-world connections with people. But amidst all these announcements, it was obvious that something was missing. There was hardly any mention of the iconic News Feed. Yet, its disappearance hardly surprised anyone. After all, much of Facebook’s woes have been attributed to it.

Hate Speech and Misinformation: A Sri Lankan Tale

Only a few hours after the deadly Easter Sunday bombings, the government announced its second social media ban. Following the riots in Negombo, the government once again blocked access to social media for a brief period. At the time of writing, we’re living through the fourth social media ban in Sri Lankan history, following another episode of mob violence in the North Western Province.

Following the deadly Easter Sunday bombings, Sri Lanka has seen multiple social media bans (Image credits: Asian Correspondent)

As we learned during the first social media ban, imposed during the Digana riots, the President has full authority to block social media during a state of emergency. However, while the initial social media ban following the Easter Sunday bombings was announced on the 21st of April, the state of emergency only came into effect on the 22nd of April. This, of course, raises questions as to whether the ban was legal.

However, one could argue that drastic actions are necessary when fighting terrorists. Hardly an hour passed after the tragedy before fake news started popping up across social media. It wasn’t long before this transformed into hate speech. Yet, it’s an argument that only sounds nice in theory. In practice, it doesn’t work.

There was hardly any reduction in the production of content following the social media ban (Image credits: Sanjana Hattotuwa)

Sanjana Hattotuwa – Senior Researcher at the Centre for Policy Alternatives – has the data to prove it. His data shows that during the social media ban following the Easter Sunday bombings, there was hardly any reduction in the content produced by, and engagement on, gossip, meme, and Sinhala media pages. Furthermore, even the government was producing content on Facebook during this social media ban.

Ray Serrato – Social Media Analyst at Avaaz – also analyzed the frequency of posts in 16 Sri Lankan Facebook groups. His data, too, shows no significant drop in activity following the ban after the Easter Sunday bombings. Based on this data, we can see that the social media ban wasn’t effective. People may not understand their privacy settings, but they do know how to use a VPN.

An analysis of the frequency of posts in 16 Facebook groups following the social media ban (Image credits: Ray Serrato)
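For the curious, this kind of frequency analysis is straightforward to reproduce. Below is a minimal sketch in Python, assuming you’ve already exported post timestamps from a tool like CrowdTangle; the file and column names here are hypothetical.

```python
# A minimal sketch of a post-frequency analysis, assuming a CSV export
# with one row per post. The file and column names are hypothetical.
import pandas as pd

posts = pd.read_csv("group_posts.csv", parse_dates=["created_time"])

# Count posts per day across all groups
daily = posts.set_index("created_time").resample("D").size()

# Compare average daily activity before and after the ban began
ban_start = pd.Timestamp("2019-04-21")
before = daily[daily.index < ban_start].mean()
during = daily[daily.index >= ban_start].mean()

print(f"Avg posts/day before the ban: {before:.0f}")
print(f"Avg posts/day during the ban: {during:.0f}")
```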

The block didn’t stop them from posting a fake bomb scare or a spoiler for Game of Thrones. But when we look at how fake news and hate speech spread following the deadly Easter Sunday bombings, we can’t ignore WhatsApp: another Facebook product, and one that’s infamous for spreading fake news and hate speech at an industrial scale.

WhatsApp and a systematic approach to misinformation

We’re seeing this phenomenon unfold in India. WhatsApp has become an important tool for Prime Minister Narendra Modi and his party, the BJP, which has an army of volunteers spreading fake messages that are anti-Muslim and critical of rival political parties. The same strategy was used by Brazilian President Jair Bolsonaro to come into power. All it takes for misinformation to spread is a single message in a WhatsApp group.

That message is then forwarded to other groups. That’s how it lands in one of the groups you’re in. To you, it’s a message from an old friend in the school batch group or a coworker in the office WhatsApp group. You don’t think too much. You know these people. So you forward it to your groups and people you care about.
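To see why this works so well, consider a toy model of forwarding; the numbers below are invented purely for illustration. If just a few members of every group that receives a message forward it on to their own groups, its reach grows exponentially with each hop.

```python
# A toy model of WhatsApp-style forwarding (illustrative numbers only):
# at each "hop", a few members of every group that received the message
# forward it on to their own groups.
groups_reached = 1          # the message starts in a single group
members_per_group = 50      # assumed average group size
forwarders_per_group = 3    # assumed members who forward it onward

for hop in range(1, 6):
    groups_reached *= forwarders_per_group
    print(f"Hop {hop}: ~{groups_reached * members_per_group:,} people reached")
```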

WhatsApp has become a platform that makes fake news go viral (Image credits: Time Magazine)

This is how misinformation systematically spreads across WhatsApp. It’s simple, yet blazingly effective. That’s why we’re seeing it happen everywhere, including Sri Lanka. All it takes is a few taps, and a message goes viral with the words, “Forwarded as received.” But at F8, there was no mention of anything to address this.

Instead, the announcements were focused on new features aimed at businesses. At F8 the company announced that it would now allow businesses to have product catalogs on WhatsApp. Furthermore, it would also allow businesses to accept payments directly through the app. Facebook expects these features to be widely adopted by small businesses like home bakers. Though whether we in Sri Lanka will see these any time soon is a mystery.

What Facebook has done in Sri Lanka

A Facebook spokesperson shared with ReadMe, “People rely on our services to communicate with their loved ones and offer help to those in need, and we remain committed to helping our community connect safely during difficult times. There is no place for hate, violent or extremist content on our services, and we are taking all steps to remove content that is violating our policies. This includes working with partners in the region to help identify misinformation that has the potential to contribute to imminent violence, identifying content which violates our policies, and ensuring language support for content review.”

Facebook said that it’s working with civil society organizations to identify and take down hate speech and misinformation (Image credits: Wired)

The company shared with ReadMe that it immediately designated the deadly Easter Sunday bombings as an act of terrorism. In doing so, it banned organizations and individuals involved in the attack. It also began removing any content that was found praising or supporting the attacks and those involved.

Facebook’s Community Operations team is also working with a number of civil society organizations. The purpose of this partnership is to identify misinformation that has the potential to contribute to imminent violence or physical harm. These organizations assist the Community Operations team in identifying such content in Sinhala or Tamil. Additionally, the company stated that it’s working with AFP, its fact-checking partner, to tackle misinformation.

Facebook has formed a team tackling misinformation and other harmful content in Sri Lanka (Image credits: VICE)

The Facebook spokesperson also shared with ReadMe, “Facebook is committed to helping Sri Lanka and its communities. Earlier this year, we created a dedicated team across product, engineering, and policy to work on issues specific to Sri Lanka and other countries where online content can lead to violence.”

The work this team has done covers multiple aspects. It has worked to help update and enforce Facebook’s community policies. It has also formed partnerships with many civil society organizations. The team has also conducted digital literacy workshops with local Sri Lankan non-profits. The company stated that it has trained 15,000 students on how to stay safe on the internet, and it aims to train 20,000 by the end of June 2019.

Facebook stated that its AI tools are now able to moderate content on its platforms in Sinhala (Image credits: Wired)

This team at Facebook is also tasked with helping improve Facebook’s technology to tackle misinformation and hate speech. Interestingly, the Facebook spokesperson shared with ReadMe, “The rate at which bad content is reported in Sri Lanka, whether it’s hate speech or misinformation, is low. So we’re investing heavily in artificial intelligence that can proactively flag posts that break our rules.”

We also learned that the company has expanded its automatic machine translation to Sinhala to better identify harmful content. The spokesperson also added, “To help foster more authentic engagement, teams at Facebook have reviewed and categorized hundreds of thousands of posts to inform a machine learning model that can detect different types of engagement bait. Posts that use this tactic will be shown less in News Feed. For Sri Lanka, particularly we have extended this to Sinhalese content as well.”
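Facebook hasn’t shared how this engagement bait model works internally, but the downranking mechanic the spokesperson describes is easy to sketch. Here’s a hypothetical, minimal version in Python, with the trained classifier stubbed out:

```python
# A hypothetical sketch of downranking: posts that look like engagement
# bait have their feed-ranking score discounted. The scoring model itself
# (trained on labelled posts, per Facebook's description) is stubbed out.
def engagement_bait_score(post_text: str) -> float:
    """Placeholder for a trained classifier; returns P(engagement bait)."""
    bait_phrases = ("share if", "tag a friend", "like if you agree")
    return 0.9 if any(p in post_text.lower() for p in bait_phrases) else 0.1

def rank_score(base_score: float, post_text: str, penalty: float = 0.5) -> float:
    # Discount the post's rank in proportion to how bait-like it looks
    return base_score * (1 - penalty * engagement_bait_score(post_text))

print(rank_score(100, "Tag a friend who needs to see this!"))  # ~55.0
print(rank_score(100, "Photos from our trip to Ella"))         # ~95.0
```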

Why it’s an uphill battle: Language

We can see that Facebook relies on two mechanisms to identify violations of its community guidelines, such as using the platform to spread fake news and extremist ideas. The first is the traditional reporting mechanism: content reported by users is reviewed by moderators, who take it down if it violates Facebook’s community guidelines. The second utilizes AI to proactively hunt down such content that might otherwise slip past moderators.
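Wired together, the two paths might look something like the hedged sketch below; none of the function names or thresholds come from Facebook.

```python
# A hedged sketch of the two review paths described above: user reports
# always go to human moderators, while a model proactively flags posts
# that cross a confidence threshold. All names here are hypothetical.
from collections import deque

review_queue = deque()

def classifier_score(post: dict) -> float:
    """Placeholder for the AI model; returns P(policy violation)."""
    return post.get("model_score", 0.0)

def handle_user_report(post: dict) -> None:
    review_queue.append((post, "user_report"))      # path 1: reported by users

def proactive_sweep(posts: list, threshold: float = 0.8) -> None:
    for post in posts:                              # path 2: AI flags unreported posts
        if classifier_score(post) >= threshold:
            review_queue.append((post, "ai_flagged"))

handle_user_report({"id": 1, "text": "..."})
proactive_sweep([{"id": 2, "text": "...", "model_score": 0.93}])
print([(post["id"], source) for post, source in review_queue])
```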

A glimpse inside Facebook’s moderation center in Berlin (Image credits: DPA)

These methods have stopped videos of decapitations from going viral. But they aren’t perfect, and more needs to be done. For starters, the moderators at Facebook work in horrifying and traumatic conditions. Furthermore, Zahran Hashim promoted his venomous extremist ideology in videos that were published on Facebook. Yet, these were never caught by any of Facebook’s systems.

This isn’t the first time it’s happened. When hate speech went viral during the Digana riots, the company stated that it couldn’t moderate content in Sinhala. Similarly, it failed to moderate hate speech in Myanmar that was produced in Burmese. The result of this failure was a horrific campaign of ethnic cleansing.

In Myanmar, Facebook failed to moderate hate speech, which resulted in a horrific campaign of ethnic cleansing (Image credits: Social Songbird)

But these failures were in the past. As mentioned above, the company has stated that it has expanded its efforts to monitor content in Sinhala. This includes forming partnerships with civil society organizations. The company is also investing in its AI tools to better moderate content in Sinhala. However, researchers question the effectiveness of these efforts by Facebook.

Yudhanjaya Wijeratne – Data Scientist at LIRNEAsia – shared on Twitter why AI systems fail at monitoring languages other than English. In heavily simplified terms, these AI systems rely on natural language processing to translate and identify hate speech. But natural language processing is designed to work primarily with English. With languages like Sinhala and Tamil, you get weird results.

Researchers doubt that Facebook’s AI systems are up to the challenge of moderating content in Sri Lanka, given how poorly such systems process languages like Sinhala & Tamil (Image credits: Luc Devroye, School of Computer Science, McGill University)

Yudhanjaya elaborates on this saying, “Languages such as Sinhala and Tamil are what practitioners call “resource-poor”—the ones that just don’t have the statistical resources required for ready analysis. Years of work are required before firms can do for these languages what can be done for English with a few lines of code. Until the fundamental data is gathered, these are difficult nuts to crack, even for Facebook.”
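A toy demonstration makes the problem concrete. This is not Facebook’s pipeline, just a standard text-classification vectorizer fitted on English data; to such a system, every Sinhala post looks like the same empty bag of unknown words.

```python
# A toy demonstration of the "resource-poor" problem: a vectorizer fitted
# on English training data has no vocabulary entries for Sinhala words,
# so Sinhala posts all map to the same empty feature vector.
from sklearn.feature_extraction.text import TfidfVectorizer

vectorizer = TfidfVectorizer()
vectorizer.fit(["you are welcome here", "get out of our country"])  # English only

sinhala_post = "මෙය සිංහල වාක්‍යයක්"  # "this is a Sinhala sentence"
features = vectorizer.transform([sinhala_post])

# Prints 0 -- not a single known token, so any classifier downstream can
# only fall back on its prior, however hateful (or harmless) the post is
print(features.nnz)
```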

What Facebook can do tomorrow & today

It’s clear that Facebook’s army of moderators can only go so far without being traumatized. As such, much of the moderation in the future has to be carried out by automated systems. But to help these systems overcome the language barrier, Facebook has to invest in research. That means not only pumping in money but also closely working with researchers.

This is necessary not only in Sri Lanka but also in many other countries. Take India, which has 22 official languages, where the challenge is compounded further by multiple regional dialects. Unless companies like Facebook work closely with researchers in these countries, they’ll never overcome this barrier. Such collaboration is not alien to the company either.

If Facebook’s AI tools are to effectively moderate content in Sinhala and Tamil, the company will have to invest more in research (Image credit: Axios)

Facebook, like many other tech companies, built its AI systems on work done by academics at various universities. Of course, it could take years before we see tangible results from this research. So what can Facebook do right now? Well, it could start by expanding its misinformation efforts to the rest of the world.

For the European Parliamentary elections, the company created an operations room. Located inside Facebook’s Ireland HQ, this room was staffed with a team of 40, covering all 24 official EU languages. Its purpose? To monitor misinformation, fake accounts, and election interference. Previously, the company enacted such measures for the US midterm elections and the Brazilian presidential election.

Inside the Facebook war room for the US mid-term elections (Image credits: Facebook / The Verge)

One could argue that the effectiveness of these efforts is questionable. After all, Brazil’s President Jair Bolsonaro was accused of running a misinformation campaign. But we live in Sri Lanka, where social media is blocked at the first sign of a crisis. As such, one could further argue that Facebook should deploy such teams whenever a crisis occurs. Why? Because the company could better support the authorities in their efforts to fight misinformation and hate speech online.

Working with the authorities to respond

Once a crisis occurs, it won’t take long before panicked rumors start spreading. This is a problem that’s been around long before social media. However, social media allows such rumors to spread like wildfire. To help mitigate this, Facebook should ideally have a team monitoring its platforms and combating misinformation.

To contain misinformation & hate speech, Facebook should actively work with the authorities. Especially during a crisis (Image credits: Ishara S. Kodikara/AFP/Getty Images)

Such teams shouldn’t work in isolation either. They should be working in collaboration with the local authorities. For example, let’s assume that the Police find a viral video promoting hate speech and more violence. In such instances, the Police should be able to immediately contact this team from Facebook and have that video taken down.

Yet, this is only Facebook. There are many more instances of misinformation spreading across WhatsApp. These are harder to intercept because messages on the app are encrypted. In such cases, this team should be able to take extreme measures, such as restricting forwarding or other features.

Hate speech & misinformation go viral on WhatsApp due to forwarding. If needed, Facebook should be willing to restrict such features during a crisis (Image credits: The Next Web)
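Notably, WhatsApp has already experimented with this: in early 2019, it capped message forwarding to five chats globally. A stricter, crisis-time version of that cap is easy to imagine; the sketch below is purely hypothetical.

```python
# A hypothetical sketch of a crisis-time forwarding cap. WhatsApp can't
# read encrypted message content, so a limit like this acts on metadata:
# how many chats a message is being forwarded to.
FORWARD_LIMIT_NORMAL = 5   # WhatsApp's actual global limit as of 2019
FORWARD_LIMIT_CRISIS = 1   # hypothetical stricter cap during a crisis

def can_forward(target_chats: int, crisis_mode: bool) -> bool:
    limit = FORWARD_LIMIT_CRISIS if crisis_mode else FORWARD_LIMIT_NORMAL
    return target_chats <= limit

print(can_forward(3, crisis_mode=False))  # True: within the normal cap
print(can_forward(3, crisis_mode=True))   # False: over the crisis cap
```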

Such actions may not eliminate misinformation. But they can severely slow it down and hamper its impact. In an ideal world, we’d already have a team from Facebook here: one that can proactively fight or react to misinformation at a moment’s notice. This would be far more effective than a blanket social media ban. After all, the data has shown us that such bans are ineffective.

Facebook and its Future with Groups

If we’re attempting to look at the future of misinformation, we can’t ignore Facebook’s redesign. This grand redesign, announced at F8 2019, emphasizes groups. One could argue that by doing so, it takes prominence away from the News Feed. This, in turn, would make it harder for pages to spread hate speech and misinformation. Even if they throw millions at ad campaigns, their false content won’t reach groups.

The future of Facebook is groups. But does that mean the same tactics on WhatsApp can be used to spread malicious content on Facebook? (Image credits: Fast Company)

Yet, one could also argue that it makes the process easier. Why? Because by focusing on groups, Facebook allows the same structure of communities found on WhatsApp to be recreated on its platform. Therefore, the same tactics used to spread misinformation across WhatsApp groups could now be applied to Facebook.

The Facebook app on Android and iOS already allows you to post content in any group with a quick tap when publishing a status. At the time of writing, this feature is yet to be added on desktop and other forms of Facebook. But with the company focusing on groups, it likely won’t be long before we see such features rolling out.

By putting too much power in the hands of group moderators, Facebook risks facing the same problem Reddit faces with toxic communities (Image credits: Eshwar Chandrasekharan)

Yet, by focusing on groups, one could argue that Facebook gains an army of new moderators. But this again is an argument that only sounds good in theory. In practice, this is the exact problem Reddit has to deal with: a social network where the responsibility of moderating content is left to individual moderators. As a result, we’ve seen communities like r/beatingwomen, r/CringeAnarchy, r/deepfakes, r/Incels, and many more, which were only taken down after intense media backlash.

The failure of good governance

At the end of the day, Facebook can only go so far. The company can invest millions if not billions into research. Hopefully, one day it’ll have AI systems that can detect and remove hate speech, misinformation, and other harmful content. But it will be years before we see that day.

The company can hire more moderators and form more war rooms. But increasingly, we are seeing how this approach is ineffective. Invariably, some of this malicious content will slip through the cracks. Hence the need for a strong relationship with the authorities to conduct more detailed investigations when required. But a chain is only as strong as its weakest link.

The government had advance warning about the deadly Easter Sunday bombings. Yet, it didn’t act on it. Prime Minister Ranil Wickremesinghe’s excuse was that there was a communication breakdown (Image credits: CNN)

As we’ve seen over the past three weeks, the weakest link in the chain has been our government. Prior to the deadly Easter Sunday bombings, Indian intelligence agencies had shared multiple warnings (three in April alone) that were ignored. Even as far back as March 2017, the Muslim community had held protests against these extremists and warned the authorities about them spreading their venomous ideology.

But these warnings were also ignored. It’s not like we don’t have laws covering hate speech either. Under Sections 291A & 291B of the Penal Code, you can be arrested for hate speech. Therefore, the authorities could’ve easily arrested these extremists in 2017. Had they done so, we wouldn’t be having this conversation after the tragic deaths of over 250 people and terrifying episodes of mob violence.

Sadly, the reality is often disappointing. Even now, the Ministry of Defence has stated that those spreading fake news will be prosecuted under the Emergency Regulations. The Police have also established a special unit to identify and arrest those spreading hate speech on social media. However, only a handful of people have been arrested under these laws to date: two from Colombo 15, a cleric from Vavuniya, and one person from Chilaw.

When left unchecked, misinformation & hate speech can easily transform into mob violence (Image credits: AFP / Strait Times)

Meanwhile, hate speech and misinformation continue to run rampant across social media, now morphing into mob violence. This raises the question: how effective can efforts by companies like Facebook truly be? They can invest in advanced AI systems or an army of moderators to identify and take down hate speech and misinformation.

However, that won’t stop terrorists and other criminals from simply moving to another platform. Hence the need for a strong relationship with the authorities to conduct further investigations and take action where necessary. But if the authorities simply ignore these warnings, then the entire system fails, and the cycle of terror continues as we live in fear, praying for peace.

By Mazin Hussain

