
Supreme Court overturns injunction against Murthy v Missouri

This article about the recent Supreme Court decision in Murthy v Missouri was written by Dorit Rubinstein Reiss, Professor of Law at the University of California Law San Francisco, a frequent contributor to this and many other websites who provides in-depth, intellectually stimulating articles about vaccines, medical issues, social policy, and the law.

In law journals, Professor Reiss writes extensively about vaccination’s social and legal policies. Reiss is also a member of the Parent Advisory Board of Voices for Vaccines. This parent-led organization supports and advocates for on-time vaccination and the reduction of vaccine-preventable diseases. She is also a member of the Vaccines Working Group on Ethics and Policy.

On 26 June 2024, the Supreme Court overturned the Fifth Circuit’s decision in Murthy v Missouri, which had upheld an injunction forbidding the government from coercing or significantly encouraging social media companies to take down misinformation. The Supreme Court concluded that the plaintiffs lacked standing to bring the case.

Essentially, the Supreme Court found that the plaintiffs did not show that it was, in fact, the government’s actions that led to their posts being removed or their accounts being deplatformed, or that the government is likely to cause such actions going forward. In other words, they did not make a case that it was government action, rather than platform decisions, that interfered with their ability to post misinformation on the platforms.

This post explains and summarizes the Supreme Court decision on Murthy v Missouri.

This article by Professor Dorit Rubinstein Reiss discusses the Supreme Court’s decision in Murthy v Missouri, where the Court overturned an injunction that prevented government influence over social media's content moderation.

Murthy v Missouri, the case

This case started in May 2022 as Missouri v. Biden. At that time, two states – Missouri and Louisiana – alleged that the Biden administration engaged in censorship against residents of those states.

Later in the litigation, five individual plaintiffs joined: Drs. Jay Bhattacharya and Martin Kulldorff, two of the signatories of the highly problematic Great Barrington Declaration; Dr. Aaron Kheriaty, a psychiatrist who took up arms for the anti-vaccine cause after losing his job at UC Irvine for refusing to be vaccinated against COVID-19; Jim Hoft, creator of The Gateway Pundit, a fake news website; and Jill Hines, head of Louisiana’s anti-vaccine organization, Health Freedom Louisiana. In short, the five people who joined the two states were people who likely had good cause to worry about social media acting against misinformation.

As mentioned in an article by a sympathetic lawyer, multiple claims like these have been brought in a variety of courts, and the courts – rightly – consistently rejected them on the ground that the choice to remove content and deplatform anti-vaccine misinformers was the platforms’ choice, not the government’s. Since social media platforms are private actors, the First Amendment does not apply to them, and they can make their own moderation decisions as they wish (something a lot of people run into and complain about, of course). For that reason, most such claims lost in court.

This case, however, ended up before Judge Terry A. Doughty, a judge with a clearly partisan approach to judging. I do not often say that; but in this case, the judge is fairly blatant about it. For example, in a preliminary injunction decision in a case challenging a vaccine mandate for the Head Start program, Judge Doughty, in the second paragraph of the decision, said:

In the immortal words of President Ronald Reagan, the nine most terrifying words in the English language are, “I’m from the government and I’m here to help.”

That is not a legal statement. That is a political, partisan statement. 

Where previous judges kicked out cases that tried to blame social media action on the government, Judge Doughty allowed the case to go forward and issued a preliminary injunction that ordered a range of government agencies not to talk – at all – to social media.

This was unusual, and the government, unsurprisingly, appealed. The Fifth Circuit panel that heard the appeal narrowed the preliminary injunction. Instead of forbidding any contact, it now forbade agencies to “coerce” or “significantly influence” social media. The problem is that in its lengthy decision, the court characterized as coercion activities that did not include any direct threats, and the decision is fairly seen as accepting, in essence, a conspiracy theory.

Let me be clear: the government ordering, coercing, or threatening social media to remove (or deemphasize) content that the government dislikes, or to promote content the government likes, is something nobody reasonably wants. If you don’t think so, think back for a moment to the last five or six presidents of the United States.

Whatever your political views, I bet there are at least two or three there that you do not want to have the ability to decide what is or is not misinformation and what social media should let you see. This is why, I think, the Supreme Court was unanimous in its ruling in NRA v Vullo, where the Court allowed a case in which plaintiffs – NRA – convincingly made a case that the New York Department of Financial Services coerced companies not to do business with them. Government should not have the power to silence others’ speech directly, because that’s dangerous – and sooner or later it would come back to bite you, whichever side of the political spectrum you’re on. 

On the other hand, government interaction with social media, pressure, etc, is normal. Business as usual. And social media companies have a long history of ignoring or shrugging off government pressures when they don’t fit their interests, including by refusing to take down misinformation. If you go back to articles in 2019, the main complaint was that the government was not doing enough to regulate social media, and social media was causing harm – though the government was asking social media and pressuring the companies then, too. 

The question is where to draw the line. The problem with the Fifth Circuit’s line, in my view, was twofold. First, it drew heavily on conspiracy theories, ignoring the fact that social media companies – as the Supreme Court pointed out – were already acting on misinformation before the government stepped in, which makes it tricky to attribute the platforms’ actions to the government. Second, it included the interactions with CDC, which the court itself acknowledged were not coercive.

But that is not what the Supreme Court decision overturning the injunction focused on.


The Supreme Court majority decision on Murthy v Missouri

In a 6:3 decision, the Supreme Court overturned the injunction. The majority justices included the three justices appointed by Democratic Presidents – Justices Kagan, Sotomayor, and Jackson – and three justices appointed by Republican Presidents, Justices Barrett (who wrote the opinion), Chief Justice Roberts, and Justice Kavanaugh. Justices Alito, Thomas, and Gorsuch dissented.

The Court did not set or provide us with a test for identifying when government interaction with social media becomes impermissible coercion, as pointed out in this post. Instead, the Court found that the plaintiffs in Murthy v Missouri did not have the right to obtain relief from the federal courts, because they lacked standing.

The idea of the standing doctrine is that in the United States, federal courts have the authority to resolve actual disputes, but not abstract questions, and people cannot bring cases against the government just because they are unhappy with something the government did: they have to have an actual stake.

Taking down your posts is a personal stake, but the issue here is that plaintiffs need to connect government action to the post taken down. If the government took down posts the users made on government websites, they would have standing: the government directly harmed them.

But in this case, the plaintiffs are claiming that the government pressured social media to take posts down, and the danger is that they will do so again, chilling their free speech. What they want the court to do is to tell the government not to pressure or encourage social media platforms to take down their posts or accounts in the future.

As the Court pointed out, that creates challenges for plaintiffs, who, the Court explained, “must show a substantial risk that, in the near future, at least one platform will restrict the speech of at least one plaintiff in response to the actions of at least one Government defendant.” (p. 10 of the opinion). The plaintiffs’ main argument was that their past harms from the platforms – taking down posts or accounts – were caused by the government defendants, and that these past actions show a real risk of future action.

Remember: the plaintiffs are not complaining directly against the platforms. The platforms, as private companies, are not subject to the First Amendment (that, of course, creates its own issues, given the power of social media in the modern world, but that is how our law works: constitutional rights only protect you against government, not private actors). Their claim is against the government action that they say silenced them. So they need to tie everything to the government. 

The Court found that they did not do that. The Court rejected the Fifth Circuit’s general approach, which reasoned that because the platforms censored “certain viewpoints on key issues” while “the government has engaged in a years-long pressure campaign” to ensure that the platforms suppress those viewpoints, the platforms’ “censorship decisions” – including those affecting the plaintiffs – were “likely attributable at least in part to the platforms’ reluctance to risk” the consequences of refusing to “adhere to the government’s directives.” 83 F. 4th, at 370.

The Court found this “overly broad” and required specific instances of content moderation by a platform against a plaintiff. The Court was especially skeptical since “the platforms, acting independently, had strengthened their pre-existing content-moderation policies before the Government defendants got involved,” and since “the platforms continued to exercise their independent judgment even after communications with the defendants began.”

Since the platforms were already doing this, the Court did not find it convincing that it was the government’s approach that led to the content moderation plaintiffs were complaining about. And since the platforms refused to follow the government’s lead in some cases, it’s impossible to say that the plaintiffs here were affected by the government without pointing to such cases – which plaintiffs did not.

The rest of the majority’s decision on Murthy v Missouri reviews each plaintiff. In essence, the state plaintiffs did not point to any specific instances in which the government’s role had anything to do with content moderation for anyone in their states (the one instance raised – flagging and deboosting, not removing, a state representative’s post about children and COVID-19 vaccines – did not establish whether the moderation came before or after the government’s contacts with the platform).

Drs. Bhattacharya, Kulldorff, and Kheriaty’s complaints are all about social media restrictions that predated the discussions between the government and social media that the plaintiffs are highlighting. Absent a time machine – and plaintiffs did not make a case that the government had or used one – these restrictions cannot be attributed to the communications between the government and social media that are the basis of the case.

Jim Hoft, the Gateway Pundit, is mostly complaining about restrictions on his election-related posts, but for him, too, the posts in question were restricted before the communications discussed in the decision. 

Jill Hines’ case was closer because she could point to decisions that may have been connected (though the connection, the Court points out, is “tenuous”). The restrictions against her started in October 2020, before the communications in question, and one of her groups was deplatformed in July 2021.

Even there, the evidence is mixed, because it seems that Facebook said it had already acted – and CDC and Facebook were discussing reducing the reach of the group, not total removal (see pp. 17-18 of the opinion). There were also later sanctions, though it’s not clear if they were connected to anything the government – especially the CDC, in this case – did.

But even if, on this relatively weak evidence, Jill Hines could show some link between her Facebook sanctions and government action, said the Court, she would have to show the behavior is going to continue – but the evidence is that the government efforts “slowed to a trickle” by August 2022, when Hines joined the case. (p. 23). 

So she, too, has no case. Further, as phrased concisely by Prof. Eugene Volokh in his summary of the decision:

…the plaintiffs couldn’t show that an injunction would protect their speech, since there’s no reason to think that even an injunction would lead the platforms to stop enforcing their policies (whether or not the policies were prompted by the government). [T]he available evidence indicates that the platforms have continued to enforce their policies against COVID–19 misinformation even as the Federal Government has wound down its own pandemic response measures. Enjoining the Government defendants, therefore, is unlikely to affect the platforms’ content-moderation decisions.

In addition to showing a direct injury, to show standing, plaintiffs need to show the court action will “redress” their issue – solve their problem. Here, plaintiffs could not, since an injunction against the government would not prevent the platforms from content moderation.


What about the dissent?

Three justices – Justices Alito, Thomas, and Gorsuch – filed a strong dissent in Murthy v Missouri that essentially followed the Fifth Circuit’s lead. The dissent focused on Jill Hines, finding that she had standing because she showed enough to make a connection between the government’s actions and the sanctions against her by Facebook. It also found, as the Fifth Circuit did, that there was coercion by the government – using arguments similar to the Fifth Circuit’s.


As Prof. Volokh pointed out in his post, the Supreme Court decision does not directly address when government speech becomes coercive – though as he noted, there are other cases on this, including the very recent NRA v. Vullo. We did not need Murthy v Missouri for that, and at the end of the day, it is still a case in which people unhappy with social media’s content moderation tried to bring a claim by alleging a government conspiracy to impose restrictions. Had it not ended up before a partisan judge, this case would likely have died early, as other such cases did.

But although, as the Supreme Court highlights at the start of the majority’s decision, the platforms were moderating the plaintiffs’ activities before the alleged communications took place – in fact, before the Biden administration, which the lawsuit targets, came into power – the lawsuit was allowed to advance on what can fairly be seen as a conspiracy theory. The Supreme Court was not convinced, and I think it was right to point to the lack of factual connections between the alleged behavior and the content moderation. 

One potential drawback of the case is that it could have a chilling effect on even non-coercive communications between the government and social media, undermining efforts by the government to use influence and guidance to combat misinformation. How you see that depends on how you see the government’s role here, and that is a tricky question. 

One additional and often ignored harm of the litigation effort is not the lawsuit against the government, but the companion lawsuit against misinformation researchers at Stanford, which Judge Doughty also allowed to proceed to discovery, and in which researchers were targeted by misinformers who would rather not be called out. This strategy could – and likely did – have a direct chilling effect on misinformation research: Stanford recently closed its disinformation research group, and that was directly related to the targeting of the group, including, though not only, by litigation.

I hope, of course, that social media platforms will continue to combat misinformation – even though they do so in imperfect, often flawed ways. I do not support the government controlling that effort; I see real dangers if the government does control what is and is not misinformation.

But this lawsuit was not aimed at preventing government coercion or protecting free speech from the government, as far as I can tell. It was part of an effort to reduce content moderation of misinformation, one of several lawsuits based on conspiracy theories that tried to use the courts to prevent such moderation.

Shutting it down is a good thing. 

Dorit Rubinstein Reiss