When TikTok failed them, Kenyans began policing their own feeds

From the moment Bereket Tsegay began working as a moderator for TikTok in Kenya, a hub for social media moderation in Africa, the job felt impossible. Each shift, he was tasked with reviewing several hundred videos that had been reported for violating the platform’s guidelines. As images – many of them deeply disturbing – flashed across his screen, he had to make a split-second decision: leave it up or take it down.

Mr. Bereket was hired because he spoke Ethiopia’s lingua franca, Amharic, but the videos in his queue came in dozens of African languages, most of which he didn’t know.

If he didn’t understand the audio and the visuals weren’t suspect, Mr. Bereket says he usually just left the video on the site. That is, unless many users had reported it. Then he took the video down.

Why We Wrote This

Social media moderation is always an imperfect science. But it’s especially challenging when machines and human moderators are asked to judge content in languages they don’t understand.

It wasn’t a very accurate way to judge, admits Mr. Bereket, who no longer works in the field. But “it is bound to happen … because there are never enough moderators.”

Everywhere in the world, social media moderation is an imperfect science. Machines and humans trawl through vast seas of content, making rapid subjective judgments. But the challenges are even bigger in places where both human and AI filters struggle to understand what’s being said.

“We’re talking about an algorithm, trained predominantly in English, being trusted to take down … harmful content, while a huge percentage of TikTok users in Kenya are using TikTok in their mother tongue,” says Mercy Mutemi, director of the Oversight Lab, a Kenyan legal advocacy group focused on technology.
