The Challenge of Moderating Audio-First Platforms

Published on May 28, 2021
Last Updated on August 25, 2022

We all believed VR and AR experiences would be the next big thing in social, but it turns out that good old-fashioned audio is where the real buzz is! 

Audio chat rooms have exploded in popularity over the last year, quickly becoming one of the hottest new ways to share and consume content. The pandemic has fundamentally altered our lives, so it’s understandable why audio platforms are so popular: people are tired of visual stimulation, endless Zoom calls, watching Netflix, reading, and responding to emails. Audio is easy to consume and places less of a burden on the listener to engage fully at all times, which means you can keep it in the background while you’re doing something else.

Live audio chat rooms such as those on Clubhouse and Discord are especially popular since they allow users to hear the live, unfiltered thoughts of celebrities and entrepreneurs, talk to friends or strangers, or just sit back and listen. The format feels like a combination of a live podcast and a group phone call, and you can make it as personal or as distant as you like. Discussions start spontaneously and disappear as soon as they are over. In a way, the experience evokes the fluidity and impermanence of an IRL conversation, which makes it all the more compelling.

Furthermore, major platforms are making inroads and developing their own audio products: Twitter launched Spaces, Spotify acquired Locker Room, and Facebook and LinkedIn are now working on their own versions. All of these platforms are laser-focused on attracting creators by providing avenues for monetization and robust content-creation toolkits. This indicates the appeal is here to stay and is influencing product development across the tech ecosystem.

Industry Challenges

This newly rediscovered medium has sparked incredible creativity and important discussions about society, remote work, parenting, climate change, mental health, and more. Listeners could hear Elon Musk speculating on whether humans will be able to live on Mars and asking the CEO of a major share-trading company why their app prevented users from buying GameStop stock. Facebook CEO Mark Zuckerberg was interviewed by Casey Newton on Discord, where he announced major audio product launches planned for the next 3-6 months.

As more users flock to these communities, there will inevitably be bad actors creating and sharing content that violates community guidelines.

Some of social media's problems have already reached audio platforms: antisemitism, bullying, misogyny, disinformation, and, in some cases, coordinated harmful behavior that encourages real-world violence.

Balancing Speech with Safety

In light of how tough it has been to get content moderation “right”, it’s not surprising people are concerned about the challenges that live audio presents from a moderation and community safety perspective. Balancing issues of free speech, safety, and censorship is no easy task, but it is crucial in ensuring the long-term success of these platforms. 

Interpreting context and intent in an audio environment will be challenging for AI moderation technologies, which typically work in one of two ways: matching content against a database of already-known harmful material, or predicting whether new content violates policy based on patterns learned from past violations.
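
To make the distinction concrete, here is a minimal, hypothetical sketch of those two strategies in Python. The hashes, keyword patterns, and threshold are illustrative placeholders rather than any platform's actual moderation logic, and a production system would operate on transcripts produced by speech-to-text rather than raw strings.

```python
# Illustrative sketch of the two moderation strategies described above.
# All data here (hashes, patterns, threshold) is hypothetical.
import hashlib

# Strategy 1: match against a database of already-known harmful content.
KNOWN_HARMFUL_HASHES = {
    hashlib.sha256(b"example of previously actioned content").hexdigest(),
}

def matches_known_content(transcript: str) -> bool:
    """Exact-match lookup against hashes of previously removed content."""
    digest = hashlib.sha256(transcript.encode("utf-8")).hexdigest()
    return digest in KNOWN_HARMFUL_HASHES

# Strategy 2: predict violations from learned patterns. A real system
# would use a trained classifier; a keyword score stands in for it here.
HARMFUL_PATTERNS = {"threat", "attack the", "they deserve"}

def violation_score(transcript: str) -> float:
    """Crude stand-in for a learned model: fraction of patterns matched."""
    text = transcript.lower()
    hits = sum(1 for pattern in HARMFUL_PATTERNS if pattern in text)
    return hits / len(HARMFUL_PATTERNS)

def moderate(transcript: str, threshold: float = 0.3) -> str:
    if matches_known_content(transcript):
        return "remove: matches known harmful content"
    if violation_score(transcript) >= threshold:
        return "escalate: predicted violation, needs human review"
    return "allow"

if __name__ == "__main__":
    print(moderate("example of previously actioned content"))
    print(moderate("we should attack the venue, they deserve it"))
    print(moderate("discussing climate change and remote work"))
```

Notice that neither strategy captures context or intent: the first only recognizes what has already been seen, and the second only extrapolates from past patterns. Sarcasm, reclaimed slurs, or coded language in a live conversation can defeat both.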

Audio can be a significant source of misinformation and coordinated harmful behavior, as was evident during the U.S. Capitol riot of January 6, 2021. However, automated moderation solutions have traditionally focused on text and visual content. We've also seen podcasts used to spread misinformation and thinly veiled incitement to violence. Some bad actors have become so expert at walking the line to avoid sanctioning by platforms that the phenomenon even has a name: “lawful but awful.”

And how do you even start moderating live, ephemeral conversations at scale?

The Flow of Harmful Content

Usually, groups that engage in coordinated harmful behavior first congregate on more obscure platforms such as 8kun or 4chan. They create meme content, establish “battle plans,” and organize themselves around the goal of sowing as much disruption and harm as possible. Once topics or targets are selected, the content is shared and amplified on major social media platforms and, more recently, on audio-first platforms. These tried and tested tactics are designed to spread misinformation, attack individuals or groups online, and, in the most egregious cases, inspire real-world violence.

Potential Solutions

Moderating these communities requires a different approach. In most situations there will be no recorded content to remove, and violations will be difficult to document after the fact.
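
One direction worth exploring is sketched below: keep only a short rolling window of transcript segments in memory, and persist a snapshot solely when a report or automated flag comes in. That preserves the medium's ephemerality while still producing evidence for review. The class, parameters, and timestamps are hypothetical illustrations, not a description of any existing platform's tooling.

```python
# A minimal sketch, assuming live speech-to-text is available, of how a
# platform might document violations in ephemeral audio without recording
# whole conversations. Everything here is a hypothetical illustration.
from __future__ import annotations

import time
from collections import deque

class RollingEvidenceBuffer:
    def __init__(self, window_seconds: float = 60.0):
        self.window_seconds = window_seconds
        self.segments: deque[tuple[float, str]] = deque()

    def add_segment(self, text: str, timestamp: float | None = None) -> None:
        """Append a transcript segment and evict anything outside the window."""
        now = timestamp if timestamp is not None else time.time()
        self.segments.append((now, text))
        while self.segments and now - self.segments[0][0] > self.window_seconds:
            self.segments.popleft()  # ephemerality: old segments are dropped

    def snapshot(self) -> list[tuple[float, str]]:
        """Capture the current window as evidence when a report comes in."""
        return list(self.segments)

if __name__ == "__main__":
    buffer = RollingEvidenceBuffer(window_seconds=60.0)
    buffer.add_segment("host: welcome everyone", timestamp=0.0)
    buffer.add_segment("speaker: targeted harassment happens here", timestamp=30.0)
    buffer.add_segment("listener: reporting this room now", timestamp=70.0)
    # The first segment falls outside the 60-second window and has been
    # evicted; only recent context is captured as evidence.
    print(buffer.snapshot())
```

The design trade-off is the window size: long enough to give human reviewers the context around a reported moment, short enough that the platform is not quietly recording entire rooms.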

Phil Tomlinson
VP, Trust + Safety
Leading the Trust + Safety service line at TaskUs, Phil spearheads strategic partnerships and works at the intersection of content moderation and platform policy. His team helps define online safety's purpose in our broader society.