Later this term, the Supreme Court will hear two key cases about online speech that could significantly shape the future of social media, the court announced Monday.
One case, Gonzalez v. Google, will consider whether tech platforms’ recommendation algorithms are protected from lawsuits under a commonly invoked legal shield that tech companies have used to defeat many other types of content-related claims.
The other case, Twitter v. Taamneh, will determine whether social media companies can be sued for allegedly aiding and abetting an act of terrorism when the platforms have hosted unrelated user content that generally expresses support for the group behind the violence.
Both cases have significant implications for the tech industry, which has come under increasing pressure over content moderation in recent years amid calls from lawmakers and President Joe Biden for the corporate liability shield, Section 230 of the Communications Decency Act, to be trimmed back.
The court’s orders on Monday set the stage for a possible judicial narrowing of the law, which members of both parties have heavily criticized over platforms’ handling of content, but which industry defenders say is essential to keeping online services free of spam, hate speech and other legal but offensive content.
Google did not immediately respond to a request for comment. Twitter declined to comment.
In recent years, some justices, including conservatives Clarence Thomas and Samuel Alito, have expressed interest in hearing cases about online content moderation, which could allow the court to weigh in on an increasingly influential sphere of public life.
By taking up Gonzalez, the court opens up new risks for platforms including Google, Meta and Twitter. In that case, the court is expected to decide whether Google can invoke Section 230 to avoid liability over YouTube’s algorithms recommending videos made by supporters of the terrorist group ISIS. A ruling against Google could expose large parts of the tech giant’s business, not to mention other tech companies that use automated recommendation engines, to new lawsuits.
In the Twitter case, the justices will review whether hosting general pro-ISIS content — unrelated to a specific terrorist attack by the organization — can constitute “knowing” and “substantial assistance” to the group in violation of a federal anti-terrorism law, especially in light of company policies and efforts to block this material.
A ruling against Twitter could mean tech platforms can’t cite Section 230 to avoid lawsuits alleging violations of the US anti-terrorism law, effectively rewriting the liability shield.
Conversely, a ruling in Twitter’s favor could uphold Section 230’s broad reach by overturning a lower court decision that found technology platforms could be held liable under the Anti-Terrorism Act.
This story has been updated with additional reaction and background information.