TL;DR
- Ad Removal: Meta pulled over a dozen attorney ads recruiting plaintiffs for social media addiction lawsuits on April 9.
- Legal Context: A California jury awarded $6 million in damages after finding Meta negligent in a child addiction case.
- Conflict of Interest: The move raises questions about whether Meta can suppress lawful ads to limit its own litigation exposure.
- Section 230 Risk: Legal experts warn that selective ad removal could undermine Meta’s Section 230 defense in pending lawsuits.
Meta is pulling attorney recruitment ads from Facebook, Instagram, Threads, and Messenger that seek plaintiffs for social media addiction lawsuits against the company. The removal, first reported by Axios, comes two weeks after a California jury found Meta negligent in a landmark child addiction case, awarding $6 million in damages.
Blocking these ads puts Meta's dual role on display: as both advertising gatekeeper and litigation defendant, the company controlling the world's two largest social media apps is now deciding which lawsuits against it can be advertised on those same apps. With more than 10,000 individual lawsuits and 800 school district claims filed nationwide, according to court records, the action raises immediate questions about whether a company can use its own platform to suppress lawful advertising that threatens its legal position – a move critics say could paradoxically strengthen the case against it.
Meta’s Ad Removal and Its Justification
According to Axios, over a dozen addiction lawsuit ads were deactivated on April 9, including ads from large national firms Morgan & Morgan and Sokolove Law. The ads ran across Facebook, Instagram, Threads, and Messenger, as well as Meta’s Audience Network, which distributes ads to thousands of third-party sites. A few ads remained active after the sweep, including some posted earlier that day.
It is unclear whether any of the plaintiff-seeking law firms are backed by private equity, as the original California lawsuit appears to have been. Meta appears to be relying on a terms-of-service clause allowing it to remove content deemed necessary to avoid or mitigate misuse of its services or adverse legal or regulatory impacts. No similar restriction appears in Meta’s advertising standards, though those standards include a separate clause reserving the right to reject ads contrary to the company’s interests, including its “competitive position” and “advertising philosophy.”
“We’re actively defending ourselves against these lawsuits and are removing ads that attempt to recruit plaintiffs for them. We will not allow trial lawyers to profit from our platforms while simultaneously claiming they are harmful.”
Meta spokesperson (via Axios)
Meta frames the removal as defensive litigation strategy, but it also demonstrates the company exercising editorial judgment over which ads can run on its platforms. Attorney recruitment advertisements are legal in all 50 states, and Meta’s own advertising standards contain no explicit prohibition against them. As both the advertising platform and the defendant in these lawsuits, Meta’s decision to suppress legal advertising creates a conflict of interest – and the kind of curatorial decision-making that courts examine when assessing Section 230 protections. Traditional media companies have long accepted attorney recruitment advertising without regard to whether the advertised lawsuits target their own interests.
The Verdict That Changed the Calculus
The ad removal follows two negligence verdicts in late March 2026. A New Mexico jury ordered Meta to pay $375 million for endangering children by enabling sexual predators on Instagram, the largest social media child safety verdict to date. A Los Angeles jury then found Meta and YouTube negligent in a bellwether addiction case, awarding $6 million in damages, with Meta assigned 70% of the liability.
The California plaintiff, a now 20-year-old identified as K.G.M., accused Meta of making deliberate design choices – including infinite scrolling and face-altering filters – that caused her addiction to Instagram from an early age, resulting in depression and thoughts of self-harm. Trial testimony from a former Facebook engineer revealed that safety features were stripped through internal review processes, and that only roughly 20 of Meta’s 30,000 engineers were focused on teen safety issues such as suicide and mental health, according to Scientific American. As a bellwether case, the outcome sets the template for thousands of pending lawsuits against Meta and other platforms.
The combined $381 million in damages from the two verdicts signals a fundamental shift in how courts treat social media companies. For years, the industry operated under the assumption that Section 230 of the Communications Decency Act provided broad immunity from liability for user-generated content. The California verdict reframed the question: it is not the content that causes harm, but the delivery mechanism itself.
That distinction produced a platform design liability precedent treating features like infinite scroll, autoplay, and notification systems as product defects rather than protected editorial choices. Plaintiffs successfully argued that these design decisions are separate from user content and therefore fall outside Section 230’s shield. If the theory survives appeal, it could expose every social media company to product liability claims based on how their apps are engineered.
“If plaintiffs can focus on how a service is designed, rather than the content that’s delivered via that design, they will always do so, and according to this court, that means that they will always get around Section 230 – and as a result, Section 230 is essentially eviscerated.”
Eric Goldman, professor at Santa Clara University School of Law (via Scientific American)
Lawyers across the country are now seeking new plaintiffs to bring class action lawsuits following the verdict – which is why attorney recruitment ads proliferated on Meta’s own platforms, and why Meta moved to shut them down.
A Free Speech Contradiction
The ad removal sits uncomfortably with Meta's stated commitment to free expression. In a 2019 Georgetown University address, Mark Zuckerberg pledged to uphold "as wide a definition of freedom of expression as possible." In January 2025, he announced Community Notes as a replacement for third-party fact-checking, saying Meta's systems had led to too much censorship. That same month, Meta's official blog declared the company would "allow more speech by lifting restrictions on some topics that are part of mainstream discourse."
Removing attorney ads while professing openness exposes a tension at the core of Meta’s public positioning. Selectively exercising editorial control over its advertising platform could undermine Meta’s own Section 230 defense. If the company can choose which legal advertisements run based on their implications for its litigation exposure, it becomes harder to argue the platform is merely a neutral conduit for third-party content.
The financial stakes help explain the motivation. In its January 2026 earnings report, Meta conceded material loss exposure due to "scrutiny on youth-related issues." Unsealed filings in November 2025 revealed Meta had buried an internal study linking Facebook usage to addiction and depression. A separate October 2025 ruling found Meta's lawyers had advised blocking teen harm research to avoid litigation.
From suppressing internal research to suppressing external advertising, the trajectory points to a company focused on controlling the narrative rather than addressing the design decisions at issue. Roughly 1,600 of the cases are pending in California alone, according to court filings. A federal multidistrict litigation bellwether is scheduled for this summer, testing whether the California design-liability theory holds in federal court.
The regulatory pressure also extends beyond American courtrooms. Australia's social media ban for minors took effect in December 2025, and Greece and Indonesia have since adopted similar measures. Meta was also recently embroiled in a dispute with the Motion Picture Association over advertising its Teen Accounts using PG-13 movie ratings, a conflict it quietly conceded in early April 2026. Whether blocking plaintiff recruitment on its own platforms can meaningfully slow the tide of lawsuits remains doubtful when the underlying design choices that courts have now deemed defective remain unchanged.