When we talk about our right to speak freely, most of us know intuitively that it isn’t just limited to the words that come out of our mouths. When we say that our “speech” is protected by the First Amendment, we’re also talking about books, movies, TV shows, video games, music, virtual reality simulations, art — every way that human beings express themselves. Last week someone posed the following question: What if the expression isn’t from a human being at all? Does the First Amendment protect speech made by artificial intelligence?
Personally, I love questions like this. Some of the most exciting and unsettled First Amendment issues center on whether the amendment covers new forms of technology, like computer code, algorithms or blockchain. It might seem odd to consider these things to be in the same category as a fiery political speech or work of art, but they share an important commonality: they’re all vehicles people can use to communicate with one another and express ideas.
The history of technology and the First Amendment essentially involves our legal system slowly and reluctantly expanding the definition of speech to include new forms of communication. My favorite example is the Supreme Court’s initial take on movies. In 1915, the court decided movies weren’t protected by the First Amendment because they were a business rather than a legitimate form of expression and noted that movies were “vivid, useful and entertaining, no doubt, but ... capable of evil, having power for it, the greater because of their attractiveness and manner of exhibition.”
This language seems to reflect a fear that the technology behind movies was a little too entertaining and immersive and therefore needed to be controlled. But by 1952, the Supreme Court had changed its mind completely, stating that, “It cannot be doubted that motion pictures are a significant medium for the communication of ideas. ... The importance of motion pictures as an organ of public opinion is not lessened by the fact that they are designed to entertain as well as to inform.” By then, the once alarming technology had become ingrained in daily life and so the court could recognize its value in facilitating expression. Many believe the same will eventually be true for things like computer code (while some lower courts have recognized code as speech, the Supreme Court has never weighed in on this).
Of course, just deciding that something “counts” as speech doesn’t mean that it’s protected by the First Amendment. Not every type of speech is. For example, if you threaten someone’s life, or hire a hitman, you are certainly engaging in the act of speech, but the First Amendment won’t protect it. There are times when speech becomes conduct — when it’s more than just an expression of an idea, but constitutes an action — and that’s usually when the government can regulate it. For example, using code to create a video game could be considered an act of expression protected by the First Amendment, but using code to launch denial of service attacks probably wouldn’t be. There are very few bright-line rules for determining whether or not something is protected by the First Amendment. Courts have to examine the context surrounding an expression and, sometimes, the intent of the speaker when making these decisions.
Artificial intelligence (AI) adds a whole new dimension to this debate, because it’s not always clear who the speaker is. Right now, most code can be considered the expression of the programmers behind it. But as AI grows more sophisticated and more able to think for itself, there will come a point where the things it says and does can’t be attributed to any human being. (Maybe that point has already arrived. In 2016, Microsoft created an AI system named “Tay,” which operated a Twitter account, tweeting in the persona of a teenage girl and learning from the Twitter accounts that interacted with it. Within 24 hours, Tay became racist and antisemitic, and Microsoft was forced to shut it down.)
When the day comes that Siri and Alexa are able to think for themselves, will the First Amendment protect their right to express those thoughts? As crazy as that might seem, there’s nothing in the text of the First Amendment that requires the speaker to be human. Furthermore, the First Amendment doesn’t just exist so that speakers can express themselves, but to protect listeners and viewers and their right to receive information. As John Frank Weaver wrote in his article, “Why Robots Deserve Free Speech Rights,” “The First Amendment protects the speaker, but more importantly it protects the rest of us, who are guaranteed the right to determine whether the speaker is right, wrong or badly programmed. We are owed that right regardless of who is doing the speaking.”
Of course, there are plenty of reasons why we wouldn’t want the First Amendment to apply to AI. It would make it just as difficult for the government to regulate computer speech as it is for the government to regulate our speech — which might be a problem considering that computers are much, much better at speaking than we are. As the wonderfully named law review article, “Siri-ously? Free Speech Rights and Artificial Intelligence,” points out, “a number of thoughtful commentators have already extensively documented the harms caused by the speech products of existing technologies due to computers’ phenomenal speed and often global interconnectivity, harms that include deception, manipulation, coercion, inaccuracy and discrimination. We can expect such harms only to mount with the growing communicative capacities of increasingly sophisticated computers.”
But the article goes on to point out that failing to protect AI speech risks the government suppressing a valuable source of information for human beings and that we don’t need to take an all-or-nothing approach here. These are still the early days of the so-called AI revolution.
— Lata Nott is executive director of the First Amendment Center of the Freedom Forum Institute. Contact her via email at email@example.com, or follow her on Twitter at @LataNott.