Social media platforms have come under criticism in recent years for their harmful effects on users, especially adolescents. Research has consistently linked social media use to a range of mental health issues among teenagers, including depression, anxiety, eating disorders, sleep disturbances, and suicide. These findings highlight the need for greater awareness of the risks associated with social media use, particularly among young people, and for effective strategies to mitigate those risks.
The Protecting Kids on Social Media Act is the US Congress’ latest attempt to find a solution to this problem. Like the European Union’s Digital Services Act (EU DSA), this bipartisan bill aims to create a safer online environment for minors.
Under the proposed legislation, social media platforms must verify users’ ages, bar users under 13, and “prohibit the use of algorithmic recommendation systems on individuals under 18.” Moreover, parental or guardian consent will be required for social media users aged 13 to 18.
The bill has sparked considerable debate on several fronts:
Should the permissible age be higher or lower? Most would agree that children under 13 are too young for social media accounts. Even without an account, however, users under 13 can still view certain online content that does not require logging in. Parents and guardians should weigh this carefully, depending on the child's maturity, understanding of online safety, and ability to handle the risks associated with social media.
The proposed legislation faces opposition over its requirement of parental or guardian consent for minors under 18 to create social media accounts, which has raised concerns about privacy and free speech rights. Contrary to these apprehensions, however, the bill does not give parents control over their kids’ online activities. It allows minors to view content on platforms; they simply cannot interact or log in without an adult’s approval. Kids could, for example, still view posts on Facebook and watch videos on YouTube without an account. These provisions balance parental consent with minors’ personal freedoms while keeping them safer online.
Critics of the bill also worry that teens with unsupportive families could be isolated if left without access to social media. These include vulnerable LGBTQ+ children whose parents do not support them, who could be cut off from necessary resources and communities. In addition, a tumultuous family situation could lead children to pressure their parents into granting consent to create a social media account.
There is little argument against banning social media platforms from using teens’ data to run algorithmic recommendation systems. Scholars have documented the damaging, mind-warping effects of content amplified by social media algorithms, and minors lack the critical thinking skills to separate truth from falsehood among the volumes of micro-targeted content designed to persuade and manipulate them.
To prevent illegal underage social media accounts, the US government and platform companies must create a system that verifies documents confirming users’ ages. Entrusting personal data to external entities, however, remains contentious for many.
Under this social media child protection act, the Department of Commerce will oversee a new government-run age-verification pilot program that allows children and parents to prove their ages through identification documents. Platforms could adopt the pilot voluntarily for their own verification work.
This system could significantly expand the government’s role in the online ecosystem. It also puts personally identifiable information at risk of data breaches, even though the bill specifies that such information may not be used for purposes beyond age verification or forwarded to any law enforcement agency.
The problems inherent in age verification systems are well known. In particular, they can double as identity verification and surveillance systems, subjecting every user to privacy-invasive identity checks as a condition of using social media.
While the debate continues, the Protecting Kids on Social Media Act appears to be a thoughtfully crafted piece of legislation, representing a workable compromise between freedoms and restrictions to safeguard young people’s mental health. The two legislative initiatives on either side of the Atlantic, the Protecting Kids on Social Media Act and the Digital Services Act, reflect a growing consensus on the need to address the adverse effects of social media on young users. While their specific provisions vary, they share a common goal: balancing the benefits of online platforms with the protection of minors. The debate over appropriate age limits, the role of parental consent, and the trade-off between personal freedoms and online safety remains a crucial part of the conversation in both the EU and the United States.