TikTok’s Algorithm Pushing Minors Toward Explicit Content

A recent investigation by the UK-based watchdog Global Witness has found that TikTok’s search suggestions direct underage users toward sexually explicit content. The nonprofit’s study examined how the app’s algorithms interact with minors, adding to ongoing concerns about children’s online safety.

Search Suggestions Promote Explicit Content

Global Witness created seven TikTok accounts in the UK, each registered as a 13-year-old (the platform’s minimum age) and accessed from a factory-reset phone with no prior search history. Even with “restricted mode” activated, a setting intended to limit exposure to inappropriate content, the suggested search terms were heavily sexualized. For some accounts, explicit content appeared after the first interaction with the search bar, demonstrating how quickly the algorithm could steer minors toward pornography.

The findings raise serious questions about how TikTok’s AI-driven recommendations may influence young users. According to the report, the platform’s algorithm doesn’t just display explicit material—it actively guides children toward it, increasing the risk of early exposure to sexual content. The investigation comes amid ongoing debates in both the UK and the US over the need for stronger online child protection and stricter age verification measures.

TikTok’s Response and Safety Measures

In response to concerns, TikTok stated that it prioritizes safety and age-appropriate experiences. The company highlighted that it removes 9 in 10 policy-violating videos before they are ever seen, and that it offers more than 50 features specifically designed to protect teens, including guided meditations and restrictions on late-night notifications. TikTok also said users can control their ad preferences and content visibility through platform settings.

The platform maintains policies prohibiting content that includes nudity, sexual activity, or suggestive acts involving minors. TikTok’s transparency report covering January to March 2025 noted that 30% of removed content was due to sensitive or mature themes. Additionally, the platform deletes roughly six million underage accounts each month using AI detection tools and trained moderators.

However, critics argue that these measures may not be enough. The UK’s Online Safety Act, whose child protection rules came into force in late July 2025, imposes stricter requirements on platforms like TikTok to prevent children from accessing harmful content. The act applies not only to UK-based platforms but also to international services with significant UK audiences. Critics, including privacy advocates, have warned that some compliance measures, such as age verification systems, could undermine user privacy.

Global Witness conducted its study both before and after the Online Safety Act rules took effect, noting that TikTok’s algorithm continued to recommend sexually explicit content to simulated child accounts. While TikTok emphasizes its regulatory compliance and safety initiatives, the report underscores persistent challenges in ensuring that social media platforms protect minors from exposure to inappropriate material.

As social media platforms face growing scrutiny, other tech companies have implemented similar protective features. YouTube now uses AI to estimate user age and adjust content visibility, and Instagram automatically sets teen accounts to private. TikTok’s case illustrates the difficulties in balancing algorithm-driven engagement with safeguarding the well-being of young users online.
