Protecting Innocence: Strengthening Online Safety For Underage Users

Understand the importance of online safety and how the TaskUs Policy Research Lab provides consultative and value-added services in Trust & Safety and Content Moderation.

Published on September 28, 2022
Last Updated on November 6, 2023

Increase in CSAM

According to multiple reports, COVID-19 pandemic lockdowns significantly increased the spread of Child Sexual Abuse Material (CSAM) compared to pre-pandemic levels, with some platforms recording an almost 200% rise in CSAM removals.

Regulations Ahead

As a result, stricter regulations are being imposed to protect children on the Internet and to fight child abuse and the sexual solicitation of minors. In the coming years, platforms will have to remove illegal content more efficiently; we have described the regulatory strategy being introduced in the European Union here.

Grooming Threats

Underage users have become easier to contact online, develop relationships with, and groom as potential victims of sexual exploitation. Predators' online grooming tactics mainly consist of:

  1. Pretending to be a minor (through the use of emojis and manga, for example)
  2. Contacting underage users (using flattery in comments or a bribery proposal in a direct message, for instance)
  3. Obtaining private information (more images of underage users, their account names, their handles on other platforms, and, most dangerous of all, their addresses)

The most dangerous online spaces and content for underage users appear to be online chat rooms; unsupervised accounts (a preferred target: videos of underage users dancing); and accounts or websites depicting sexualized underage models, usually wearing lingerie, bikinis, or gymnastic outfits.

CSAM Trades

Despite moderation efforts, CSAM trading and coordination are rising, with organized discussions used to promote material and recruit other predators. Content often circulates through encrypted links across various platforms and servers, protected by passwords that expire after a short time. While the phenomenon is global, we have observed coordination around one type of CSAM in specific regions, with coded language used in group names and online chats. In addition, fan pages that gather collections of pictures (underage users in school uniforms, minors' feet, underage models in bikinis) facilitate the connection between abusers and their trade materials.

An alarming trend has emerged: the creation of self-generated CSAM by underage users who sell their own sexual content online. We have also observed disguised predators using the emojis and hashtags of the “MAP” (Minor Attracted People) phenomenon, an umbrella term for self-identifying pedophiles who have admitted to facing their disorders.

CPAM Online Risks

Online safety for children has always been a primary concern of tech and social media platforms. Unfortunately, bad actors constantly develop new tactics to bypass moderation. For example, we came across Child Physical Abuse Material (CPAM), i.e., content depicting and glorifying the physical abuse of minors, mainly on video-sharing platforms, where users and dedicated accounts encourage others to share this type of imagery (student punishment videos, dangerous challenges, school fights between minors). Furthermore, in most of the content we analyzed, the underage victims appear to be close to their abusers (parents, teachers, schoolmates, and friends).

Increasingly, underage users engage in dangerous behaviors online, mirroring some of the risks they face in the real world. As a result, there is a growing need for human and technological efforts to protect underage users' safety.

How Can TaskUs Help?

Policy Research Lab

  • We review internal guidelines and support the implementation of explicit public-facing policies (analyzing and filling policy gaps).
  • We propose calibrations for policy enforcement.
  • We analyze error and escalation rates to spot loopholes and collaborate on improving training documentation.
  • We help platforms anticipate upcoming regulations, aligning their policies to comply with new legal drafts (e.g., European Union legislation).
  • We work side by side with operations teams to highlight tooling opportunities that improve SLAs.
  • We build lists of harmful coded language (keywords, hashtags, and sexualized emojis) to train ML algorithms; a sketch of how such a list might be used follows this list.
  • We provide round-the-clock human support in 30+ languages, across 24 sites in 12 countries, with 24x7x365 coverage and a fully functional Work-at-Home (W@H) solution.
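
To make the coded-language idea concrete, here is a minimal sketch of how such a list might be used to pre-filter text for human review or to weakly label classifier training data. This is an illustration only, not TaskUs production code; the lexicon entries, names, and tokenizer below are hypothetical placeholders.

    import re

    # Hypothetical lexicon of coded keywords, hashtags, and emojis compiled
    # by policy researchers. The entries below are harmless placeholders.
    LEXICON = {"#examplecodedtag", "examplekeyword", "\U0001F512"}

    # Tokenize into hashtags, words, and emoji-range characters.
    TOKEN_RE = re.compile(r"#\w+|\w+|[\U0001F300-\U0001FAFF]")

    def flag_for_review(text: str) -> bool:
        """Return True if the text contains any lexicon entry."""
        tokens = {t.lower() for t in TOKEN_RE.findall(text)}
        return bool(tokens & LEXICON)

    # Usage: route flagged posts to a human moderation queue, or treat the
    # flags as weak labels when assembling an ML training set.
    print(flag_for_review("look at this #exampleCodedTag post"))  # True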

Recommendations & Conclusion

We recommend that platforms do any of the following:

  • Disable auto-complete features for CSAM-specific searches.
  • Set up automatic blocking of hashtags and keywords related to CSAM (a sketch of both of these measures follows this list).
  • Provide resources that educate parents on keeping their children safe on online platforms.
  • Make accounts private by default for users under 18.
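
As an illustration of the first two recommendations, here is a minimal sketch of blocklist-driven hashtag rejection and auto-complete suppression. The blocklist entries and function names are hypothetical; a real deployment would combine this with hash-matching (e.g., PhotoDNA), ML classifiers, and human review.

    # Hypothetical blocklist of CSAM-related hashtags and search terms
    # (placeholders only).
    BLOCKLIST = {"#blockedtag", "blockedterm"}

    def is_post_blocked(hashtags: list[str]) -> bool:
        """Reject a post that carries any blocklisted hashtag."""
        return any(tag.lower() in BLOCKLIST for tag in hashtags)

    def filter_autocomplete(query: str, suggestions: list[str]) -> list[str]:
        """Return no suggestions for a blocklisted query, and drop any
        suggestion containing a blocklisted term."""
        if query.lower() in BLOCKLIST:
            return []
        return [s for s in suggestions
                if not any(term in s.lower() for term in BLOCKLIST)]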

Note that every violation spotted through our investigation has been reported on the respective platforms, which are legally obliged to collaborate with law enforcement and the National Center for Missing & Exploited Children (NCMEC) on any content they assess as illegal.

Want to know more about our studies and significant findings?
