Social Media Giants to Implement Facial Recognition to Expel Underage Users

LONDON – In a groundbreaking move to protect children online, social media companies will soon be required to use facial recognition technology to identify and remove underage users from their platforms. This mandate is part of a new regulatory push by Ofcom, the UK’s communications regulator, set to be announced next month under the auspices of the Online Safety Act.

John Higham, Ofcom’s head of online safety policy, disclosed in an interview with The Telegraph that the initiative aims to ensure platforms employ “highly accurate and effective” age verification methods. This follows an estimate that as many as 60% of children aged 8 to 11 in the UK already have social media profiles, despite platforms like Facebook, Instagram, TikTok, and Snapchat setting the minimum age requirement at 13.

Under the proposed regulations, social media firms could face fines amounting to 10% of their global turnover if they fail to comply, with penalties potentially reaching billions for companies like Meta. Additionally, executives could face up to two years in prison for persistent non-compliance. Higham emphasized, “We’re going to be looking to drive out the use of that sort of content, so platforms can determine who’s a child and who isn’t, and then put in place extra protections for kids to stop them seeing toxic content.”

The use of facial recognition for age verification involves scanning users’ faces to estimate their age from biological markers. Some tech companies have used this approach for years, particularly in industries such as online gambling where age verification is crucial. Social media platforms are now expected to expand these checks, for instance when users attempt to change their registered age, or through automatic detection of behaviours and connections on the platform that suggest a user is underage.
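To make the mechanism concrete, the sketch below shows roughly how a face-based age gate might work. It uses the open-source DeepFace library purely as an illustrative stand-in; the models, thresholds, and integration points used by the platforms themselves are proprietary, and the file name, threshold, and helper functions here are assumptions for illustration only.

```python
# Illustrative sketch of a facial age gate, assuming the open-source
# DeepFace library as a stand-in for platforms' proprietary age models.
from deepface import DeepFace

MINIMUM_AGE = 13  # minimum age set by most major platforms


def estimate_age(image_path: str) -> float:
    """Estimate the apparent age of the face in the supplied image."""
    # DeepFace.analyze returns one result dictionary per detected face.
    results = DeepFace.analyze(img_path=image_path, actions=["age"])
    return float(results[0]["age"])


def passes_age_gate(image_path: str) -> bool:
    """Return True only if the estimated age meets the platform minimum."""
    try:
        return estimate_age(image_path) >= MINIMUM_AGE
    except ValueError:
        # No face detected: treat as a failed check rather than a pass.
        return False


if __name__ == "__main__":
    print(passes_age_gate("selfie.jpg"))  # hypothetical input image
```

In a real deployment the estimate would feed into a wider decision, for example prompting a further ID check rather than blocking an account outright, since age-estimation models carry a margin of error of several years.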

This regulatory shift has sparked a range of reactions. Privacy advocates have expressed concerns over the implications of collecting biometric data, while child safety groups applaud the move as a significant step towards safeguarding young internet users. The accuracy and privacy implications of facial recognition technology remain under scrutiny, however, with ongoing debate over how to balance privacy intrusion against child protection.

Technology companies have already taken steps towards more stringent age verification, including ID checks, facial age estimation, and parental consent mechanisms. Peter Kyle, the UK’s Technology Secretary, has indicated that the government might consider even stricter measures, like an outright ban on social media use for those under 16, if current efforts prove insufficient.

The announcement comes amidst growing awareness and action around children’s online safety globally, with various jurisdictions considering similar regulations to protect minors from harmful content, cyberbullying, and data exploitation. The UK’s approach could set a precedent for how other nations might tackle the pervasive issue of underage social media use.
