Are Tech Companies Really Accountable?
by Sabrina Steele on 18 Feb 2026
In the US, a growing number of lawsuits against social media platforms are being brought forward by young people and bereaved parents. The claimants argue social media giants like YouTube, TikTok, Snapchat and Instagram have contributed to the deterioration of young people’s mental health and wellbeing — even leading, in some cases, to death.
Many of these cases focus on the design of the platforms themselves: addictive features, algorithms that promote harmful content and viral challenges that increasingly drive engagement. In extreme cases, social media use is presented as a contributing factor in fatal incidents, particularly where dangerous viral trends have led to physical accidents.
This blog explores the key areas under scrutiny, the legislative options being proposed and whether platforms can, or should, be held legally accountable.
The Issue
Two primary strands of litigation are relevant.
The first involves parents seeking access to their children’s social media activity following their deaths, often in cases linked to dangerous online challenges. The claimants argue these deaths resulted from “programming decisions which aim to maximise children’s engagement by any means necessary,” particularly on platforms such as TikTok. Some of the parents in question are UK families unable to access their children’s online accounts following their deaths.
The second strand focuses on the allegedly addictive design of platforms including Facebook and Instagram (both owned by Meta), Snapchat, YouTube and TikTok. The claimants argue that features like infinite scroll, algorithmic reinforcement and social validation loops are deliberately engineered to encourage compulsive use, contributing to depression, anxiety, eating disorders, self-harm and other mental health issues among young people.
These cases could have far-reaching consequences for how social media platforms operate. They are also among the largest actions brought against tech companies in the US, with over 1,600 plaintiffs, including parents, children and school districts across multiple states.
In both types of case, social media CEOs and senior executives will likely be summoned to testify as to how their respective platforms operate.
The Legislative Response
In the US, these lawsuits are not only seeking financial compensation but could also result in demands for structural changes to platform design. For example, courts could require the removal or modification of features promoting excessive engagement, like endless scrolling, or could mandate the implementation of algorithmic transparency.
In the UK, bereaved parents are campaigning for “Jools’ Law”, which would allow families access to their child’s social media accounts after death. Currently, platforms often refuse access on privacy grounds, citing the risk of revealing third-party data, such as information about other individuals in private messages. The UK government recently added a clarification to the Data (Use and Access) Act aimed at facilitating such data-sharing, reflecting growing political pressure to support bereaved families and to help children manage their online activity.
More broadly, governments across the UK and EU are strengthening online safety frameworks. In the UK, the Online Safety Act has expanded the list of priority offences (including cyberflashing), and Ofcom has begun enforcement activity. In the EU, the Digital Services Act (DSA) is now in force, with new investigations underway and additional platforms, such as WhatsApp, recently designated as Very Large Online Platforms (VLOPs), subjecting them to stricter obligations. The Irish Data Protection Commission has recently launched an investigation into Grok.
Policymakers around the world are considering age-based restrictions. In 2025, Australia introduced a ban on social media use for under-16s. The UK, France, Spain and Greece have all indicated they will explore similar measures this year. The UK has also confirmed further work to address “digital wellbeing” as part of the Children and Wellbeing Bill.
Implications for Social Media Platforms
Any combination of financial penalties, reputational damage and regulatory intervention could significantly reshape the social media industry. And yet the issues are complex, and achieving proportionate regulation will take time.
In the US, these cases signal a shift in judicial scrutiny. Historically, platforms have relied on Section 230 of the Communications Decency Act, which protects them from liability for user-generated content. If courts start holding platforms liable for harms linked to algorithmic design or product features, it will mark a fundamental shift in digital liability. Such a precedent could open the door to wider claims not only around child safety but in relation to physical injury during content creation or financial loss linked to algorithmic changes.
Some social media platforms have already opted to settle. For example, Snap Inc. and TikTok reached settlement agreements in a case over addictive design, meaning they will no longer be required to attend trial alongside Meta and YouTube. While these settlements establish no legal precedent, they suggest a growing recognition of corporate responsibility and increase the likelihood of further claims.
The range of cases, and the growing willingness of claimants and regulators to bring them, suggests tech platforms will face increasing pressure from governments and the public to make significant changes.
The scope of any future regulations will need careful consideration. For example, the Australian social media ban focuses on platforms that allow direct messaging, which would currently exclude YouTube. Any framework, therefore, must be adaptable to evolving business models and new platforms that may emerge. Policymakers may also consider excluding lower-risk platforms, such as YouTube in this instance, depending on the desired outcomes.
What Social Media Platforms Can Do
Most major platforms already have a range of safety measures in place, including:
- Enhanced protections for under-18 accounts;
- Screen time limits and usage reminders;
- Content moderation policies targeting dangerous behaviour.
TikTok, for example, claims to remove 99% of dangerous content before it is reported. And yet critics argue these measures are reactive rather than preventive, and inadequately address systemic design risks.
Platforms have an opportunity to go beyond compliance by considering the following:
- Improving transparency around how algorithms curate content;
- Offering users greater control over recommendation systems;
- Expanding education on how algorithms work and how users can manage their online experiences;
- Establishing safer, limited accounts by default rather than as an additional setting.
While algorithms remain commercially sensitive, transparency at a systems level, rather than full disclosure of code, alongside an educational programme, could help rebuild public trust.
Responsibility, Freedom of Expression and Censorship
A central tension remains the question of how best to balance freedom of expression with harm prevention. The UK’s Online Safety Act and the EU’s DSA extend obligations beyond illegal content to “harmful” content, a framework that has raised questions, from the initial legislative deliberations onwards, over who defines harm and how platforms can and should intervene without veering into censorship.
Expecting platforms to monitor and moderate every piece of hosted content is neither technically nor ethically straightforward. Yet the argument increasingly focuses not on individual posts but on product design: specifically, whether the engagement-driven architecture of these platforms creates foreseeable and preventable risks, particularly for children.
Age Assurance and Access Controls
Another area of focus is the minimum age of access. While most platforms require users to be at least 13, enforcement is inconsistent. Governments and regulators are now considering stronger age-assurance mechanisms, including biometric verification and digital ID systems.
But raising the age of access or introducing bans carries its own risks. Research suggests removing access to mainstream platforms may in fact push young people toward less regulated and more dangerous online spaces, undermining safety objectives. In the UK, concerns are growing that readily available VPNs allow children to circumvent Online Safety Act restrictions.
Effective age assurance must therefore not come in the form of a blanket restriction, but rather should be balanced, privacy-preserving and accompanied by education.
Jurisdiction and Geopolitics
Legal accountability is further complicated by jurisdiction. None of the largest platforms is headquartered in the UK or EU, and many legal challenges face obstacles related to cross-border enforcement. TikTok has already argued that US courts lack jurisdiction over entities primarily based in the UK; similar arguments are likely in European cases.
These disputes are unfolding, moreover, against a backdrop of strained transatlantic relations, raising broader political and trade considerations around the regulation of US-based technology firms, given concerns about escalatory economic pressure or retaliation.
Conclusion
Cross-party political pressure means we are likely to see legislation aimed at reducing children’s social media use going forward. It will be up to governments and social media platforms to determine how accountable a platform should be, how it can build on existing safety measures and what role parents and society should play. Any regulations will need to be proportionate and future-proof, and to acknowledge the critical role online platforms play in day-to-day life.
It is a consequential moment for social media platforms. We look forward to continuing to work with government, industry and thought leaders as these discussions continue.
Topics: Online Platforms, Regulation, Technology, Digital Economy, Digital Policy, Innovation