The Federal Trade Commission (FTC) recently launched a public inquiry into technology platform censorship. Digital platform censorship clearly raises serious policy concerns. Before filing lawsuits, however, the FTC (and its fellow enforcement agency, the Department of Justice (DOJ)) will need to factor in the First Amendment protections enjoyed by platforms and the limitations on agency statutory authority. Bringing platform censorship cases may not be the best use of limited agency resources.
The FTC’s Concerns
In soliciting public comments in support of its inquiry, the FTC stated that it seeks “to better understand how technology platforms deny or degrade users’ access to services based on the content of their speech or affiliations, and how this conduct may have violated the law.” The FTC stressed that such actions by platforms may run afoul of antitrust or consumer protection laws:
“Censorship by technology platforms is not just un-American, it is potentially illegal. Tech firms can employ confusing or unpredictable internal procedures that cut users off, sometimes with no ability to appeal the decision. Such actions taken by tech platforms may harm consumers, affect competition, may have resulted from a lack of competition, or may have been the product of anti-competitive conduct.”
The FTC could attempt to argue that such harmful behavior violates the FTC Act’s prohibitions against “unfair methods of competition” (UMC) and “unfair or deceptive acts or practices” (UDAP). Proving UMC or UDAP violations, however, would be quite difficult.
FTC and DOJ Legal Challenges
First Amendment Protections
FTC and DOJ enforcement powers are constrained by the U.S. Supreme Court’s decision in Moody v. NetChoice (2024), holding that social media platforms’ content moderation decisions are protected speech under the First Amendment of the U.S. Constitution.
The Court’s Moody decision makes clear that the First Amendment applies to online platforms and protects their content moderation decisions. Platforms may thus freely decide whether to include or exclude particular opinions; the government cannot regulate the platforms’ choices “just by asserting an interest in better balancing the marketplace of ideas.”
Unfair Methods of Competition (UMC)
The FTC’s UMC authority covers unilateral and joint anticompetitive conduct that would violate the Sherman Antitrust Act. The DOJ enforces the Sherman Act directly.
Section 2 of the Sherman Act forbids unilateral acts of monopolization or attempted monopolization by dominant firms. The FTC would first have to show that a platform held monopoly power over an avenue for public digital expression. That would prove almost impossible, given the large number of social media platforms (for example, Facebook, X (formerly Twitter), LinkedIn, YouTube, and TikTok).
Even if a platform were found to hold monopoly power in some social media market, monopolization also requires a showing of “anticompetitive conduct,” such as disfavoring rivals without a legitimate business justification. Supreme Court case law, however, sharply limits a monopolist’s liability for refusals to deal, meaning that such a showing would probably fail.
And even if a court found both monopoly power and anticompetitive conduct, the Moody decision would preclude it from striking down the monopolist platform’s discriminatory content moderation decisions.
The FTC potentially could argue that its UMC authority extends to “standalone” violations: unilateral unfair platform business conduct that falls short of Sherman Act monopolization. Once again, however, the First Amendment would bar using this theory to restrict a platform’s moderation policies.
Section 1 of the Sherman Act declares illegal anticompetitive concerted conduct: contracts, combinations, or conspiracies that unreasonably restrain trade.
Public comments suggest that the FTC and the DOJ are considering three types of potential Section 1 violations:
- Agreements among platforms to suppress particular content (for example, articles on the Hunter Biden laptop). The argument would be that joint suppression of particular content harms consumers and competition by reducing a particular type of output. Each platform acting alone could suppress content as protected First Amendment expression. A joint decision, in contrast, could reflect an anticompetitive business aim (avoiding losing business to competitors) that the First Amendment does not protect. The platforms could counter, however, that their interest was purely expressive and unrelated to business. The outcome of any “agreement suppression” prosecution would be highly uncertain.
- Agreements among advertisers to boycott particular content or platforms. Advertisers could argue that such agreements primarily reflected freedom of expression and were therefore entitled to First Amendment protection. In NAACP v. Claiborne Hardware Co. (1982), the Supreme Court held that a boycott that is primarily expressive is shielded by the First Amendment. As such, prosecutions of these agreements would be highly risky.
- Agreements between technology platforms and government officials. The Supreme Court has stated that “[g]overnment officials cannot attempt to coerce private parties in order to punish or suppress views that the government disfavors.” It is the government, however, that is liable for this coercion, not private parties. Platforms could also argue that conversations with government officials are part of normal business and should not be chilled. It appears very doubtful that alleged “platform-government agreements” could be prosecuted as an antitrust violation.
Unfair or Deceptive Acts or Practices (UDAP)
Platform censorship could constitute “deception” or “unfairness” under the FTC Act only in very narrow circumstances that are seldom met, as explained in a commentary by George Mason Law School economics and privacy scholars. Any FTC platform UDAP cases likely would be “legal longshots,” representing a low probability of success at a high resource cost.
Deception
As the economics and privacy scholars point out, “a representation is deceptive under the FTC Act if it is material and likely to mislead a significant minority of reasonable consumers.” Under this test, a deception claim for platform censorship would fly only if a platform could be shown to have violated very specific representations about how it would deal with particular types of content.
Platforms are unlikely to have made such specific promises. Broad statements about a platform’s moderation policy are inherently subjective and would likely be seen by courts as mere “puffery.” Furthermore, claims that a platform “fooled” consumers in how it applied labels such as “hate speech” or “misinformation” turn on equally subjective judgments and are unlikely to pass muster in court.
Unfairness
The FTC Act provides that “[a]n act or practice is unfair if it (1) causes or is likely to cause substantial consumer injury; (2) that is not outweighed by countervailing benefits to consumers or competition; and (3) is not reasonably avoidable.”
To win a platform censorship case, the FTC would have to show: (1) a direct link between breaches of platform terms of service and harm imposed on many consumers; (2) that the harm suffered by disfavored consumers outweighed the benefits to other platform users (such as a reduction in content the second group disliked); and (3) that disfavored consumers could not have reasonably avoided their injury (unlikely, since those consumers could readily have found other platforms on which to post or review comments). Any one of those showings would be hard to prove in court, let alone all three.
Next Step for the Agencies
The FTC’s inquiry into technology platform censorship may well bring to light abuses of government power and actions by digital platforms that systematically favor particular viewpoints. Shining a spotlight on such conduct plainly serves the public interest, particularly when public malfeasance is revealed. As Supreme Court Justice Louis Brandeis famously stated, “sunlight is said to be the best of disinfectants.”
The sunshine generated by the FTC’s inquiry could lead platforms to revisit and perhaps reform their content moderation policies. It might also discourage government officials from giving platforms troublesome, non-public content moderation “advice.”
Both consumers and many commercial businesses that deal with platforms would benefit. What’s more, these benefits could be achieved without the costs and uncertainties of lawsuits, which would face major First Amendment challenges and a low probability of success.
This reality, particularly in a time of tight agency budgets, would seem to counsel against bringing platform censorship cases. By allocating all (or virtually all) of their litigation resources to more traditional antitrust and consumer protection matters, the FTC and the DOJ could get the “greatest bang for the enforcement buck.” This would benefit American consumers and competition.