Symposium 2023 | Keynote Address, Cass R. Sunstein

Summary

Professor Cass R. Sunstein, Keynote Address
at
The George Washington Law Review’s Vol. 92 Symposium:
Legally Disruptive Emerging Technologies

Summary authored by Lauren Davidson.

Cass Sunstein, Robert Walmsley University Professor at Harvard Law School, delivered the keynote address at The George Washington Law Review’s Fall 2023 Symposium: Legally Disruptive Emerging Technologies. Professor Sunstein’s address advanced three propositions: (1) speech that can be regulated when its author is human can also be regulated when its author is artificial intelligence (“A.I.”); (2) First Amendment issues are nonetheless triggered by the rights of listeners or viewers of A.I.-generated speech; and (3) the traditional distinctions among categories of content discrimination, including viewpoint-based discrimination, content-based discrimination, and content-neutral discrimination, are clarifying rather than obfuscating.

Professor Sunstein began his presentation with his first proposition: speech that can be regulated when authored by a human can also be regulated when authored by A.I. He framed the discussion by contrasting the outputs from two different ChatGPT prompts. In the first, he asked ChatGPT to libel someone; ChatGPT refused. In the second, he prompted the A.I. to create an advertisement connecting aspirin to cancer, and the A.I. promptly responded with a false, and potentially dangerous, output connecting aspirin use to cancer. These examples illustrated two First Amendment issues: first, how do we identify the speaker of artificially generated language, and second, can a person be held liable for disseminating false, A.I.-generated text? Professor Sunstein argued that, just as a journalist can be liable for disseminating a source’s falsehoods, falsehoods are unprotected whether created by an A.I. or by a human.

Next, posing a rhetorical question to his audience, Professor Sunstein asked whether an A.I. disseminating falsehoods without a human proxy has First Amendment protection. He answered in the negative, arguing that even when an A.I. generates otherwise protected speech without a human proxy, the A.I. model itself has no First Amendment rights, just as cats and dogs have none. That conclusion, however, does not negate a recipient’s First Amendment interest in receiving the A.I.’s protected speech.

Professor Sunstein then discussed the star case of his presentation: Kleindienst v. Mandel.1 In Mandel, the Attorney General refused to grant a temporary, non-resident visa to Mandel, a Belgian journalist and Marxist theorist.2 Mandel, who alleged discrimination on the basis of his communist beliefs, lacked First Amendment rights as a foreign litigant.3 The Supreme Court held, instead, that First Amendment rights were vested in the potential listeners of Mandel’s, and other foreign professors’, speech.4 Professor Sunstein postulated that this case suggests that any restriction on speech, even speech by an entity that lacks First Amendment rights, must be adequately justified when users who hold First Amendment rights allege that they want to receive the communication.

Moving to his third proposition, Professor Sunstein argued that the traditional distinctions among forms of content discrimination are ultimately clarifying. Dividing content discrimination into three broad categories (viewpoint restrictions, content restrictions, and content-neutral restrictions), Professor Sunstein asserted that these categories provide helpful guidance for understanding the scope of First Amendment rights in relation to A.I.-generated text.

In conclusion, Professor Sunstein left his audience with two major points. First, he reaffirmed that treating unprotected speech as a proxy for A.I. restrictions is unhelpful and would create a constitutional disaster under Mandel. Second, he argued that policymakers must ramp up safeguards to prevent harm to people, asserting that these safeguards “[aren’t] just arguable, [they are] right.”