Sound Check! Addressing Challenges and Safety in Audio Moderation

Published on December 7, 2022
Last Updated on December 7, 2022

Audio streaming has increased significantly in recent years across the globe. A record 73% of the U.S. population ages 12 and up (an estimated 209 million people) listened to online audio in the past month, up from 68% in 2021, according to Edison Research's Infinite Dial 2022. With more people voicing their opinions through podcasts, audio rooms, and songs, platforms must act quickly to curb the abuse and misinformation that come with user-generated audio content.

Audio platforms ranging from music and podcasts to live audio now contain more violations than ever, such as hate speech, self-harm, sexual services and nudity, sale of illegal goods, and misinformation. Racist content can be found on these platforms; users on live audio platforms host rooms that contain hate speech against Jews, the LGBTQIA+ community, and Muslims. Playlists encouraging suicide and self-harm, podcasts spreading vaccine hoaxes and misinformation, and accounts offering escort services are increasingly common. Users also exploit live audio conversations to sell drugs and other illegal goods and services.

Live audio services face a persistent challenge in striking the right balance between content regulation and privacy concerns. Although the medium is built on the premise that live audio is short-lived, monitoring these conversations in real time is challenging.

How Can TaskUs Help?

  • We review internal guidelines and support the implementation of explicit public-facing policies, analyzing and filling policy gaps.
  • We institute calibration for policy enforcement.
  • We analyze error and escalation rates to spot loopholes and collaborate on improving training documentation.
  • We work side-by-side with operations teams to highlight tooling opportunities that improve SLAs.
  • We deploy moderators in 30+ languages across 24 sites in 12 countries, with 24x7x365 coverage and a fully functional Work-at-Home (W@H) solution.
  • We can provide a list of harmful keywords, slurs, hashtags, and sexual emojis to train ML algorithms.
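A keyword list like the one described above is typically used both as ML training data and as a first-pass filter on transcribed audio. The sketch below is a minimal, hypothetical illustration of that pre-filtering step; the term list, function name, and matching strategy are assumptions for demonstration, not TaskUs's actual tooling, and real lists contain thousands of terms, hashtags, and emojis.

```python
import re

# Hypothetical placeholder list; a production list would contain
# actual slurs, hashtags, and sexual emojis curated by policy teams.
HARMFUL_TERMS = {"badword", "slur1", "hoaxclaim"}

def flag_transcript(transcript: str, terms=HARMFUL_TERMS) -> list:
    """Return the harmful terms found in a transcript.

    Uses case-insensitive, word-boundary matching so that a term
    only fires on whole words, reducing partial-word false positives.
    """
    found = []
    for term in sorted(terms):
        if re.search(rf"\b{re.escape(term)}\b", transcript, re.IGNORECASE):
            found.append(term)
    return found
```

Flagged transcripts would then be routed to human moderators or an ML classifier for a policy decision, rather than actioned automatically, since keyword matches alone lack context.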
Want to know more about our studies and significant findings?


Siva Raghava
Senior Director, Trust + Safety
As a Senior Director, he and his team focus primarily on scaling Trust & Safety practices globally. He developed his expertise helping organizations with Product Operations, Content Moderation and Management, Project Management, Global Solutioning & Vendor Management, Digital Marketing Operations, Content Policy Creation, and Content Policy Enforcement. Siva is a "truly diversified" Trust and Safety professional who has championed platform safety for online communities for over 17 years, working with some of the premier brands in the space and building deep global domain expertise.