Online Self-Harm in the Pandemic

TaskUs Policy Research Lab studies the rise in online self-harm during the pandemic and identifies an urgent need for online platforms to address policy, product, and content moderation gaps.

In 2020, a surge in depression and other mental health issues was observed as an effect of the global pandemic, especially among teens aged 13–18. There has been an alarming increase in behavior linked to these mental health issues among the youth. Based on insurance claims in the USA, there has been a reported 84% increase in anxiety, a 94% increase in depression, a 119% increase in overdoses, and a 91% increase in self-harm, with some more isolated areas of the USA experiencing increases of up to 300%.[1]

Many teens in this age group spend a significant amount of their time online, and unfortunately, many engage in unsafe or self-harming behavior on social media or become increasingly exposed to triggering or enabling content. Teens create private groups to discuss eating disorders and promote unhealthy views of their bodies, using slang and code words to bypass content moderators. “Challenges” or “games” that involve dangerous and even violent behavior quickly go viral among teens on social media, especially with the recent rise of short-form content.

These problems are rampant and have serious repercussions, and they therefore require more stringent, proactive moderation and decisive action from online platforms. However, most messaging services and search engines today lack clear self-harm policies. Many platforms have similar gaps in their policies on sensitive issues, including viral content or behavior that poses risks to users. To make matters worse, these platforms often lack disclaimers and help resources to assist affected users and their loved ones, as well as efficient, easy-to-access reporting flows for those who encounter egregious content.

Call for Action: 

After doing a deep dive into these trends, TaskUs’ Policy Research Lab has developed several recommendations for online platforms in an effort to mitigate harm among users, especially younger demographics:

  • To monitor self-harm keywords on desktop and in-app, whether correctly spelled or not, and to suppress word-completion suggestions related to self-harm.
  • To monitor posts containing “Ana y Mia Princessa + Emojis” (a trend related to eating disorders observed in Spanish-speaking markets) on popular instant messaging platforms, and to flag specific trends and keywords such as these in internal guidelines as ones to watch for.
  • To include policy content and help resources related to dangerous games and challenges; and to provide a disclaimer on what and who to consider reaching out to should a user come across this content.
  • To provide the same level of assistance to users (including minors) searching for self-harm content in languages other than English.
  • To increase the linguistic scope of disclaimers and provide them more globally (Hindi- and Spanish-language content is particularly at risk on certain platforms; note also that India has one of the highest suicide rates in South Asia).
  • To provide disclaimers for self-harm keywords irrespective of these language settings.
  • To detect keyword results that are not spelled correctly (e.g. “suicide” typed as “su*cide” or “suic1d3”) and provide disclaimers and support resources to the users searching for them; and to change the search-matching condition from “exact match” to “contains”.
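As an illustration of the last two recommendations, a keyword on an internal watchlist can be compiled into a lenient pattern that tolerates common character substitutions (such as “1” for “i” or “3” for “e”) and matched with “contains” semantics rather than exact match. The sketch below is a minimal assumption-laden example; the substitution table, watchlist, and function names are illustrative, not any platform's actual implementation.

```python
import re

# Illustrative substitution classes for common obfuscations; a real
# moderation system would use a maintained, regularly updated lexicon.
SUBSTITUTES = {
    "a": "[a4@*]", "e": "[e3*]", "i": "[i1!*]",
    "o": "[o0*]", "s": "[s5$*]",
}

def build_pattern(keyword: str) -> re.Pattern:
    """Turn a watchlist keyword into a 'contains'-style regex that
    tolerates common character substitutions."""
    parts = [SUBSTITUTES.get(ch, re.escape(ch)) for ch in keyword.lower()]
    return re.compile("".join(parts), re.IGNORECASE)

# Hypothetical watchlist; real lists are larger and multilingual.
WATCHLIST = [build_pattern(k) for k in ("suicide", "self-harm")]

def needs_support_resources(query: str) -> bool:
    """Return True if the query contains a watchlist term, so the
    platform can surface a disclaimer and help resources."""
    return any(pattern.search(query) for pattern in WATCHLIST)
```

With this approach, queries such as “su*cide” or “suic1d3” still trigger the same disclaimer and support resources as a correctly spelled search, because matching is done on a normalized pattern anywhere in the query rather than on an exact string.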

The TaskUs Policy Research Lab provides consultative and value-added services in Trust & Safety and Content Moderation. We cover a wide range of policy areas and content types, with the end goal of helping create a safer internet for all.

Want to know more about our studies and significant findings?

Get in touch with Us >


References

  1. FAIR Health, “The Impact of COVID-19 on Pediatric Mental Health: A Study of Private Healthcare Claims,” White Paper.