Social media executives from Meta, Snap, YouTube, TikTok and X are being summoned to Downing Street on Thursday for a crucial meeting with Prime Minister Sir Keir Starmer and Technology Secretary Liz Kendall over children’s safety online. The tech bosses will be questioned about what measures they are taking to safeguard young people and address parental concerns, as the government pursues its consultation on whether to implement a complete prohibition on social media for under-16s, in line with Australia’s approach. Sir Keir has stressed that the meeting will centre on ensuring “social media companies step up and take responsibility”, warning that “the consequences of failing to act are stark” and that the government owes it to parents and the next generation to prioritise children’s safety.
The Number 10 Face-off
Thursday’s gathering constitutes a critical moment in the government’s drive to hold tech giants to account for their role in safeguarding vulnerable young users. It comes at a pivotal juncture, with Parliament having rejected calls for an outright ban on social media for under-16s just hours earlier, despite backing from the House of Lords. Instead of implementing a blanket prohibition, MPs chose to give ministers authority to introduce their own limitations, signalling the government’s preference for a more tailored regulatory approach over a comprehensive legislative ban.
The timing of the Downing Street summit highlights the government’s determination to appear decisive on online safety whilst navigating multifaceted commercial and political pressures. Professor Gina Neff from the University of Cambridge’s Minderoo Centre for Technology and Democracy indicated the summit enables the government to demonstrate it is taking action on digital harms. Downing Street has already acknowledged that some platforms have made progress, implementing steps such as deactivating autoplay for children by default and giving parents improved controls over device usage, though critics argue considerably more must be done.
- Tech executives to be questioned about protections for children and responses to parental concerns
- The government is consulting on restricting social media for under-16s, following Australia’s example
- MPs rejected an outright ban but gave ministers the authority to establish limitations
- Some platforms have already introduced protections such as turning off autoplay for younger users
Parliament’s Rejection and the Wider Discussion
Wednesday evening’s Commons vote dealt a significant blow to supporters of a comprehensive social media ban for those under 16, marking the second occasion MPs have dismissed such proposals despite considerable backing from the upper chamber. The government’s choice to favour ministerial discretion over formal legislation reflects a more cautious approach, with ministers arguing that an outright ban would be premature given ongoing policy considerations. This approach gives the government room for manoeuvre in crafting bespoke restrictions rather than implementing a blanket prohibition that some worry could prove difficult to enforce and oversee effectively across various platforms.
The rejection has intensified debate over whether the UK is properly shielding its youth from internet-based threats. Whilst the government maintains that giving ministers powers to implement bespoke guidelines represents a more practical solution, critics contend this approach falls short of the decisive measures the situation demands. Recent research from Australia, where a social media restriction for under-16s was introduced in December 2025, reveals that more than 60 per cent of minors continue using platforms regardless, raising serious doubts about the effectiveness of legislative restrictions and suggesting the challenge stretches well beyond simple prohibition.
Criticism Across Parties
The parliamentary vote has attracted sharp criticism from opposition benches. Conservative shadow education secretary Laura Trott accused Labour MPs of letting down parents and children by rejecting the ban, arguing that other nations are acknowledging social media’s harms whilst the UK falls behind under the current government. Liberal Democrat education spokeswoman Munira Wilson reinforced these concerns, declaring that “the time for incremental steps is over” and insisting on immediate measures to restrict the most harmful platforms for young users rather than piecemeal regulatory adjustments.
Australia’s Cautionary Tale
Australia’s experience with online platform restrictions offers a sobering case study for policymakers evaluating comparable approaches in the UK. When the country implemented a prohibition on social media for under-16s in December 2025, it was hailed as a landmark step in protecting young people from digital risks. However, emerging research from the Molly Rose Foundation has revealed a concerning picture: more than 60 per cent of underage Australians continue to use online platforms in spite of the legislative prohibition. This substantial non-compliance rate suggests that legal prohibitions alone may be inadequate to stop determined young users from accessing the services they want.
The Australian findings carry significant implications for the UK’s continuing policy discussions. If a comparable ban were introduced in Britain, the evidence indicates enforcement would pose formidable challenges, with young people likely finding ways to bypass age-verification systems and restrictions through various technical means. The data challenges arguments that a straightforward legal ban represents a silver-bullet solution to online safety concerns, instead highlighting the need for a more holistic approach integrating regulatory frameworks, platform responsibility, parental oversight tools, and digital literacy training to meaningfully address the risks young people face online.
| Key Finding | Implication |
|---|---|
| Over 60% of underage Australians still access social media despite ban | Legislative prohibitions alone cannot effectively prevent determined young users from accessing platforms |
| Ban introduced in December 2025 has failed to achieve widespread compliance | Enforcement mechanisms remain weak and young people find workarounds to restrictions |
| Blanket bans do not address underlying appeal of social media to young people | Multi-faceted approach combining regulation, platform accountability, and education is necessary |
Leading Specialists Urge Real Change
Child safety advocates and digital rights experts have stepped up demands for tech companies to take meaningful action beyond voluntary measures. The Molly Rose Foundation, established in memory of 14-year-old Molly Russell, who took her own life after viewing harmful content online, has been particularly vocal in demanding systemic change. Rather than pursuing sweeping prohibitions that prove difficult to enforce, campaigners argue the priority should shift to holding platforms accountable for the systems driving harmful content to vulnerable users.
Andy Burrows, chief executive of the Molly Rose Foundation, has emphasised that Thursday’s Downing Street meeting represents a critical moment for government action. The charity has repeatedly maintained that platforms possess the technical capability to implement strong protections, yet frequently prioritise engagement metrics over user welfare. Experts stress that genuine protection requires platforms to redesign their algorithmic recommendations, improve content moderation, and provide parents with meaningful tools to monitor their children’s online activity effectively.
The Algorithmic Challenge
At the centre of these concerns lie the algorithmic systems that determine what content young users see. These algorithms are engineered to maximise engagement, often pushing sensational, harmful, or addictive content to at-risk groups. Overhauling these mechanisms represents one of the most pressing challenges in online safety, demanding transparency from platforms about how their recommendation engines operate and what safeguards exist.
- Algorithms favour user engagement over user wellbeing and safety
- Platforms must increase transparency about algorithmic recommendation processes
- External reviews of algorithmic harm are essential for maintaining accountability
What’s Coming Next
Thursday’s summit at Downing Street will set the tone for the government’s approach to online child safety in the coming months. Following the meeting, Sir Keir Starmer and Liz Kendall are expected to outline their conclusions and determine whether current voluntary measures from tech companies are sufficient or whether more robust legal measures become necessary. The government remains midway through its consultation on whether to implement an Australia-style ban on social media for under-16s, with the conclusions from this week’s talks likely to influence the final policy direction.
Ministers have expressed a preference for giving themselves powers to impose restrictions rather than implementing an outright ban, citing concerns over practical implementation and effectiveness. However, growing pressure from opposition parties, child safety advocates, and parents suggests the government may face continued demands for more decisive action. The coming weeks will be pivotal in establishing whether digital platforms can demonstrate genuine commitment to safeguarding young people or whether Parliament will introduce new laws to compel compliance with more stringent safety standards.