Minor Safety on the Internet: Creating Safe Online Spaces for Young Users

It takes a partnership between humans and technology to make online environments safe, especially for minors.

Published on January 27, 2022
Last Updated on August 24, 2022

One in three internet users is a minor, according to UNICEF1. The majority are aged 9-17 and connect using a personal mobile phone. They pursue a range of online activities—watching video clips, playing games, making new connections, creating content, and learning new information—to cultivate their interests, develop their social skills, and participate in relevant causes.

Increased exposure and the falling age at which children start using social media, however, pose significant risks to minor safety on the internet. In a report by the University of Michigan's C.S. Mott Children's Hospital National Poll on Children's Health, children as young as seven years old2 already use social media apps. Moreover, a study in the Journal of the American Medical Association found that the average screen time of adolescents aged 10-14 is already eight hours a day, excluding time spent on online classes.

How Are Tech and Social Media Companies Making the Internet Safe?

Online safety for children has always been a primary concern of tech and social media platforms. According to the National Center for Missing and Exploited Children3, companies such as Google, Dropbox, Microsoft, Snapchat, TikTok, and Verizon Media have reported over 900,000 cases related to online child exploitation. Research from the Internet Watch Foundation, a UK-based online safety organization, revealed that more than 250,000 websites4 contain child sexual abuse material (CSAM)—a number that has doubled since 2020.

From policies to controls, tech and social media platforms are grappling with how to protect children online.


TikTok offers a version of the app for younger users that lets those under 13 years old create content but not post it. It has also published a Guardian's Guide5 that covers Family Pairing, a feature that lets parents link their account to their child's and manage content and privacy settings.


Apple has expanded features in its OS6 to keep children safe. For example, warnings appear when a child receives or sends photos that contain nudity, and a parent or guardian can use Apple's Content & Privacy Restrictions as a parental control. Siri, Spotlight, and Safari Search provide resources on online safety and what to do in unsafe situations. Moreover, they have been updated to intervene when users attempt searches related to child exploitation.


Instagram announced that it will soon launch tools for parents and guardians to guide and support minors on the platform. The upcoming features relate to consumption management, report notifications, and additional educational resources.

Teenage users can also make use of the "Take a Break"7 feature, which encourages healthy social media use and supports their mental health.


Google takes a slightly different approach. Rather than enumerating the "don'ts," the internet giant inspires minors to "Be Internet Awesome"8. Through positive language, Google teaches kids to be discerning about what they see on the internet and encourages them to create a positive impact for others.

Project Protect by Technology Coalition

Project Protect9 is a strategic plan to combat child sexual exploitation and abuse through technology innovation, collective action, independent research, information and knowledge sharing, and transparency and accountability. The project aims to support "actionable research that will lead to real, lasting change for children's digital safety." The Technology Coalition brings together leading technology firms such as Google, Microsoft, Facebook, Apple, and Twitter to "build tools and advance programs that protect children from online sexual exploitation and abuse."


Safety tech solutions builder Thorn has come up with Safer10, a powerful and holistic tool that uses AI and machine learning to identify, remove, and report CSAM in real time.
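Safer's internals are not public, but known-image detection tools in this space are often described in terms of perceptual hashing: a compact fingerprint of an image is compared against fingerprints of previously verified harmful material, so near-duplicates can be flagged without the system storing or redistributing the images themselves. The minimal sketch below illustrates the general idea with a simple "average hash"; the hash scheme, grid size, and match threshold are illustrative assumptions, not Thorn's actual algorithm.

```python
# Illustrative sketch of perceptual "average hash" matching — the general
# technique behind known-image detection. This is NOT Thorn's actual
# algorithm; it only shows how near-duplicate images can be matched
# using compact fingerprints instead of the images themselves.

def average_hash(pixels):
    """Compute a 64-bit hash from an 8x8 grid of grayscale values (0-255):
    each bit records whether a pixel is above the grid's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# A tiny synthetic 8x8 "image" and a slightly brightened copy of it,
# standing in for an original file and a lightly edited re-upload.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
variant = [[min(255, p + 10) for p in row] for row in original]

h_known = average_hash(original)  # fingerprint stored in a block list
h_seen = average_hash(variant)    # fingerprint of newly uploaded content

# A small Hamming distance means the images are likely the same underlying
# picture, even though the pixel values are not identical.
is_match = hamming_distance(h_known, h_seen) <= 5
print(is_match)  # → True
```

Because only fingerprints are compared, a platform can check uploads against a shared list of known-bad hashes in milliseconds, which is what makes real-time identification and reporting at scale feasible.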

How Parents and Guardians Can Protect Minors Online

With all these policies, controls, and guides in place, parents and guardians must also help form smart digital habits for the family. Here are some suggestions:

Be informed. Learn about devices and social media apps together with your child, and treat it as a bonding experience. Familiarize yourself with the security features that can help you monitor your child's online activities.

Limit screen time. Set up parental controls that limit when and for how long children can use their devices. Model the rules yourself by putting down devices during family activities and before bedtime.

Foster open communication. Talk about what online safety looks like. Make sure your children feel comfortable coming to you when they encounter inappropriate or questionable material, and teach them what to do if they come across something like that again.

A Reliable Partner in Keeping Online Spaces Safe

To combat online threats to minors, companies enforce their policies through content moderation services like TaskUs. At TaskUs, we understand the growing need to safeguard online spaces, which is why our Trust & Safety services are committed to creating the most secure online environment possible while still allowing users to express themselves safely and with digital civility.

We work across a wide range of policy areas and content types, including User Safety. We create safer and more trustworthy experiences on platforms by identifying and removing potentially harmful user-generated content.

Content moderation can be extremely rigorous. Our solution-based approach for handling sensitive content combines tech-enabled interventions with evidence-based psychology and comprehensive wellness initiatives to protect our human moderators. Our Teammates go through specialized training and receive 24/7 research-led support, backed by a strong clinical team. In addition, we train their immediate supervisors in how to help and care for their teams.

We also have a unique and dedicated Policy Research Lab, which pairs TaskUs domain experts with cutting-edge technology in the abuse and misinformation detection space, enabling us to bring deep insights and off-platform analysis to our clients. Its recommendations for online platforms cover areas such as online self-harm and hate speech.

For a leading social media platform, we act as digital first responders in cases involving child safety, trafficking, and endangerment. Thousands of reports are filed with law enforcement each year, with some cases leading to the arrest of offenders and predators. Our Teammates' monitoring of the suicide and self-injury queues has also led to life-saving intervention and support.

Together, we can keep online spaces safe.


Phil Tomlinson
VP, Trust + Safety
Leading the Trust + Safety service line at TaskUs, Phil spearheads strategic partnerships and works at the intersection of content moderation and platform policy. His team helps define online safety's purpose in our broader society.