Meta and YouTube lost landmark social media trials. That's bad for free speech.


This week, juries in California and New Mexico delivered a pair of historic verdicts against the giants of American social media.

In Los Angeles, a jury awarded $6 million to a girl who claimed that Instagram and YouTube damaged her mental health. A day earlier, a Santa Fe jury ruled that Meta had designed its social media platforms in a way that harmed children — and ordered the company to pay $375 million in damages.

These verdicts mark a breakthrough for the legal movement that casts social media companies as the new "Big Tobacco" – an industry that knowingly sells dangerous, addictive products. They were also a victory for advocates of "children's online safety," who believe that social media is harming children's psychological well-being. With thousands of similar cases pending, the California and New Mexico verdicts could serve as models for change.

But the decisions have also set off alarms among many defenders of free speech. For organizations like FIRE – and for civil liberties writers like Reason's Elizabeth Nolan Brown – these verdicts will do more to undermine free speech online than to protect the mental well-being of young people.

To better understand – and push back on – this view, I spoke with Nolan Brown. We discussed how the recent verdicts could open the door to more regulation, the evidence on social media's psychological effects, and whether parents can adequately protect their children from the internet's harms without government help. Our conversation has been edited for clarity and brevity.

You've written that these decisions bode very badly for an open internet and freedom of expression. How so?

One important safeguard for online speech is Section 230 of the federal Communications Decency Act, which shields online platforms from liability for speech they host but did not create.

What we see in these cases is an attempt to circumvent Section 230 by recasting speech issues as “product liability” issues. Instead of saying, “We’re going after platforms for hosting bad speech,” the plaintiffs are saying, “We’re going after them for reckless product design.”

In other words, the "product" is the choices social media companies make about how to curate their feeds and encourage engagement.

All right. Some of the things they complained about were “endless scrolling” (where you keep scrolling down and the feed doesn’t stop at the end of the page), algorithms that promote content that the user is more likely to engage with, and beauty filters.

But in the end, if you look at what they’re after, it comes down to speech. When you talk about TikTok or YouTube being so engaging that it’s “addictive,” you’re talking about content: No matter how TikTok’s algorithm is designed, it wouldn’t be compelling if the content wasn’t compelling.

Similarly, in the California case, the plaintiff claimed that Meta's decision to allow beauty filters on photos was negligent product design, because the filters promote unrealistic beauty standards, which she says caused her to develop body image issues.

But that goes back to speech: The choice to use a filter is something that individual users do to express themselves. Providing those tools to users is a form of speech.

But aren't many of these product design choices content-neutral? A defender of these verdicts could say: Social media companies are hooking children on their platforms in a way that is bad for their mental health. And they do this, in part, through push notifications, autoplay videos, and endless feed scrolling. So why can't we legally restrict their use of those features – without dictating the kinds of speech they are allowed to host?

Some people will say, "Why don't we show a notification – or log people off after an hour – if they're children?" But in order to implement any set of rules or product design choices just for teenagers, these platforms would need some foolproof way of knowing who is a child and who is an adult.

And that means age verification procedures, where they either check everyone's government-issued ID, or use biometric data – or something else that requires everyone to identify themselves before they can speak anywhere on the internet.

And that brings many problems. It makes people's data more vulnerable to identity theft, hackers, and fraudsters. It also means your identity is linked to everything you do online. And that can be dangerous, especially for people who talk about sensitive issues or who oppose the government. The ability to speak and organize online anonymously is very important.

What if product design restrictions apply to adults and children alike? If we stopped social media companies from sending push notifications to everyone, that would avoid the age verification issue, right?

Many networks provide people with the tools to do these things already. You can turn off the autoplay feature. You can have a custom feed. You can adjust your settings so you don’t have these features.

If we say, "Why can't the government mandate these choices?" I think that is a very slippery slope. You might think, "Well, who cares about push notifications? Why can't the government just mandate no push notifications?" But the logic of that takes us somewhere much broader.

The argument basically says: because some people will have a problem with this, the government should have some control over how the product is made. But people can use all kinds of products in problematic ways – gaming systems, streaming services, food – and we don't say, okay, the government should come in and tell those companies exactly how to do business so that fewer people are harmed. That attitude is especially dangerous when we're talking about products that involve speech.

A skeptic might say that the slope here is not that slippery. After all, the government has already shown that it can enact targeted, content-neutral restrictions on speech without setting off a cascade of censorship.

For example, since 1990 there have been limits on the amount of advertising that can be shown during children's programming in certain hours – along with a requirement that advertising and programming be clearly separated. Those measures are arguably more intrusive on speech than, say, banning autoplay videos on a social media platform. And yet the Children's Television Act of 1990 did not lead to ever-broader restrictions on First Amendment rights.

I think it makes a big difference whether you're talking about restricting speech for children or restricting it for adults. And what you were just describing were restrictions that would apply to everyone.

Beyond the First Amendment issues, you have expressed skepticism about the specific causation claims made by the plaintiffs in these cases – namely, that social media caused their mental health problems. Still, many social psychologists – most famously Jonathan Haidt – have argued that these platforms are damaging children's psychological well-being. So why do you think the claims here are overstated?

In the California case in particular, this young woman claims that, because she was on social media from a very young age, she developed mental health problems. But there was a lot of testimony showing that many other things were going wrong in her life. She experienced domestic violence. She had problems with her parents, problems at school.

So the idea that social media directly caused her problems – rather than these life stresses that are well known to cause harm – I think that’s kind of suspect.

And I think you see this problem in the broader research on the mental health effects of social media. There is often a correlation between depressive symptoms and heavy social media use because people who are having a hard time at home and at school – people who are socially isolated – tend to use social media more than people who are better off.

To what extent are your views on social media regulation based on doubting the real harms of these platforms? If we found evidence that there really was a significant effect here – that autoplay and beauty filters were damaging children's mental health – would you support legal restrictions on those features? Or would First Amendment concerns override public health concerns, regardless of the evidence?

The strength of the evidence is important for guiding the decisions of individuals, parents, families, communities, and school districts. But even if we knew that beauty filters caused a lot of harm, the government still would not have the right to ban them, since they are forms of expression. Most people are not harmed by them.

There are many things that are harmful to some people, but beneficial to others. And I don’t think the existence of problematic use justifies banning those things for everyone.

I think comparing social media to drugs may be misleading in this respect. That language suggests something that will automatically harm everyone. And that's not the case. Most people use social media in a healthy way, in the same way that most people can drink alcohol without harming themselves, or eat a bag of chips without overdoing it.

I think it’s the same way with social media. This is a technology that can harm some people, especially those who already have psychological problems.

But it is not some addictive substance or poison that you cannot be exposed to at all without being harmed. I think that view endows smartphones with an almost mystical power.

There are many situations, though, where we choose to heavily regulate a substance or practice – not because it harms everyone who uses it, but because it severely harms a minority of problematic users. Gambling and alcohol are two examples. Even with opioids, many people can take the pills and never develop a dependency. But others end up addicted and die of overdoses. And because of that, we severely restrict access to opioids.

So I feel like the question here might be less about whether social media is bad for everyone than about whether it has serious consequences for problematic users.

I think there are people who talk about it the way you do. But others describe social media as something people are powerless against. And no, I don't think we have strong evidence that it is dangerous in the way that addictive substances are. Actually, I think the evidence is mixed. Some studies show that moderate smartphone use is actually associated with better mental health outcomes.

You argue that, instead of seeking government restrictions on social media, parents should take more responsibility for their children's use of smartphones and apps.

Many parents argue that their ability to monitor their children’s social media usage is really limited and that they do not have the tools to protect their children from the harmful effects of these platforms. What would you tell them?

I think this is straightforward with very young children. Like, why does a 6-year-old have unfettered alone time on a digital device? In the California case, the plaintiff was using social media when she was still very young. And at that age, parents definitely have control over what their kids do and see online; you can control whether your child can access a smartphone at all. With teenagers, there are areas where technology companies are working with parents. We have seen more parental controls introduced in recent years. We have seen Meta release versions of its apps specifically for children, with certain restrictions built in. We have seen things like phones that allow basic texting but not particular apps. So I think individual solutions are possible here. I think we can address people's legitimate concerns without the government infringing on free speech.


