The same principles that make Meta great at selling you running shoes are equally effective at connecting predators with children. That’s not an activist’s claim; it’s what Meta’s former director of engineering told a New Mexico jury, and the jury believed him.
After a bruising seven-week trial, jurors found this week that Meta violated the state’s consumer protection law by hiding what it knew about child sexual abuse and mental health harms on Facebook and Instagram. They ruled that the company made false and misleading statements and engaged in “unfair” business practices that exploited the vulnerabilities of children.
The verdict, which came with a $US375 million ($536 million) penalty, is an important moment for the social media giant, and for every technology platform that has treated child safety as a reputational problem to be managed rather than an engineering challenge to be taken seriously.
Just one day later, in Los Angeles on Thursday morning (AEDT), Meta and YouTube were found liable in a trial over whether the tech giants intentionally designed addictive features that harmed a young woman’s mental health. Compensatory damages were set at $US3 million, with Meta on the hook for 70 percent and YouTube the remaining 30 percent.
Together, they represent Big Tech’s “Big Tobacco” era.
The New Mexico case began when state Attorney General Raúl Torrez conducted an undercover operation in 2023, creating a fake profile posing as a 13-year-old girl. According to Torrez, the account was quickly inundated with sexual solicitations from predators. Three arrests followed, and the trial ultimately exposed a corporate culture in which safety concerns were subordinated to growth.
The evidence was harrowing: Meta’s former engineering director Arturo Bejar told the court he had panicked after his own 14-year-old daughter received sexual solicitations on Instagram. Former Meta vice president Brian Boland, who left the company in 2020, testified that he “didn’t really believe safety was a priority” under CEO Mark Zuckerberg and then-COO Sheryl Sandberg.
Then there was Zuckerberg himself. In a pre-recorded deposition played to the jury, Meta’s boss was confronted with 15 years of internal communications and consumer complaints describing his products as addictive.
When the prosecutor asked whether users “have repeatedly told your company and you personally that they find the products addictive”, Zuckerberg bristled.
“People use the word conversationally,” he said. “That’s not what we’re trying to do with the product, and it’s not how I think it works.”
He also acknowledged that Meta had previously set staff targets to increase the time young people spent on its platforms, before shifting to other metrics from 2017.
He also leaned on the free speech line, saying he was concerned about “not taking into account the ways people can express themselves” and arguing that the evidence of harm was insufficient.
Meta’s defense was that prosecutors cherry-picked internal documents to paint an unfair picture, and that about 40,000 employees work on safety and security across its platforms. The company’s lawyer, Kevin Huff, argued that Meta had been clear its protections were not perfect.
“What we’re doing is showing the world what they knew behind closed doors and weren’t willing to tell their users,” Torrez said in response.
The decisions land amid what is shaping up as a broader reckoning for social media, both in the United States and internationally. Hundreds of other lawsuits from individuals, school districts and U.S. attorneys general are pending.
Legal experts have drawn comparisons to the Big Tobacco litigation of the 1990s, and it’s an apt analogy. Like the tobacco companies, Meta is accused not only of selling dangerous products, but of concealing what its own research showed about the harm.
Perhaps the most revealing thing to emerge during the trial was Meta’s sudden decision to kill end-to-end encryption on Instagram direct messages. That feature, which Zuckerberg championed in a 2019 announcement about “a privacy-focused vision for social networks”, will be discontinued on May 8.
Internal documents that emerged during the trial showed Meta’s head of content policy, Monika Bickert, had warned at the time: “We are about to do something bad as a company. This is irresponsible.” She argued that encryption would make it impossible to detect child abuse or terrorist plots and refer cases to law enforcement. Meta pressed ahead anyway. Now, seven years later, it has quietly reversed course, citing low adoption rates for a feature that was buried behind multiple menus and never promoted.
The encryption reversal captures the fundamental contradiction at the core of Meta’s approach to child safety. The company has shifted between privacy-first and safety-first rhetoric depending on which narrative suits its business interests at any given time.
When encryption was fashionable, Zuckerberg was its champion. When it became a legal liability (internal documents revealed it would affect about 7.5 million reports of child sexual abuse material to law enforcement), the feature was casually dropped via a two-line notice on a support page.
Meta treated child safety the way it has treated other multibillion-dollar efforts: abandoning it as soon as doing so became convenient.
Unfortunately, it has taken multiple whistleblowers, waves of lawsuits and years of documented harm to establish that this company cannot be trusted to protect children and young people.
So what does a real solution look like? There are emerging models, though none is perfect. Roblox, the gaming platform popular with kids, says it’s trying. It recently launched a group known as the “World Council of Parents”, involving 80 parents from 32 countries, including Australia, who will meet every three months and gain direct access to product teams. It has also created a “Parent Champion” program to widen the feedback loop.
Roblox has faced plenty of child safety controversies of its own, and the program is advisory rather than binding, but this is the kind of participation that could work if Roblox is serious about it. Treating parents as partners rather than obstacles is a marked departure from Meta’s approach.
Australia, meanwhile, is running its own globally watched experiment. Its ban on social media for under-16s threatens giants like Meta with fines of up to $49.5 million for non-compliance. The Australian approach has significant weaknesses, particularly its disproportionate impact on rural, neurodiverse and LGBTQ youth who rely on online communities.
eSafety Commissioner Julie Inman Grant’s assessment is designed to measure these impacts, but the results will take years, not months.
But the New Mexico verdict makes a forceful case that the status quo, trusting companies to police themselves while children are abused on their products, is untenable.
The second phase of the case, most likely in May, will be heard by a judge who could order Meta to implement specific changes: proper age verification, removal of predators from the platform, and protections for children in encrypted communications. Meta says it will appeal. It can afford to: the $US375 million penalty is roughly what the company earns in a day.
The question now is whether this decision, along with a wave of lawsuits and industry-wide enforcement, will finally deliver the kind of structural change that voluntary commitments never have.
Australia’s youth social media ban, the Roblox parent council and the New Mexico court decision all represent different approaches to forcing the issue. Perhaps none is sufficient on its own but, together, they form a body of evidence that makes the industry’s favorite defense (“we’re doing our best!”) increasingly difficult to maintain.
For Meta, the most disturbing moment from Santa Fe may not have been the verdict itself, but the point in the trial when the prosecutor played Zuckerberg’s deposition to the jury.
Here was the CEO of a trillion-dollar company, faced with the warnings of his own employees, the findings of his own researchers, the failure of his own platform – and asked to explain why, knowing all this, so little had changed.
The jury listened to him, then gave their answer.