Social media grapples with live audio moderation

Washington, Feb 25 (Reuters):
The explosive growth of Clubhouse, an audio-based social network buoyed by appearances from tech celebrities like Elon Musk and Mark Zuckerberg, has drawn scrutiny over how the app will handle problematic content, from hate speech to harassment and misinformation.
Moderating real-time discussion is a challenge for a crop of platforms using live voice chat, from video game-centric services like Discord to Twitter Inc’s new live-audio feature Spaces. Facebook is also reportedly dabbling with an offering.
‘Audio presents a fundamentally different set of challenges for moderation than text-based communication. It’s more ephemeral and it’s harder to research and action,’ said Discord’s chief legal officer, Clint Smith, in an interview.

Tools to detect problematic audio content lag behind those used to identify text, and transcribing and examining recorded voice chats is a more cumbersome process for people and machines. A lack of extra clues, like the visual signals of video or accompanying text comments, can also make moderation more challenging. ‘Most of what you have in terms of the tools of content moderation are really built around text,’ said Daniel Kelley, associate director of the Anti-Defamation League’s Center for Technology and Society.

Not all companies make or keep voice recordings to investigate reports of rule violations. While Twitter keeps Spaces audio for 30 days, or longer if there is an incident, Clubhouse says it deletes its recordings if a live session ends without an immediate user report, and Discord does not record at all.
Instead, Discord, which has faced pressure to curb toxic content like harassment and white supremacist material in text and voice chats, gives users controls to mute or block people and relies on them to flag problematic audio. Such community models can be empowering for users but may be easily abused and subject to biases.
