Section 230 of the Communications Decency Act says online services, from Facebook and Google to movie review aggregators and mom blogs with comment sections, shouldn't be held liable for most content contributed by third parties. In those cases, it is easy to distinguish between the platform and the speaker. Not so with chatbots and AI assistants, and few have ever grappled with whether Section 230 protects them.
Consider ChatGPT. Type in a question, and it types back an answer. It doesn't merely display existing content, such as a tweet, video or website, contributed by another person; it creates its own contribution in real time. The law holds that a person or entity becomes liable if it helps "develop" content even "in part." Does turning, say, a list of search results into a summary qualify as development? What's more, the output of every AI offering is heavily shaped by its creators, who have set the rules for their models and steered the results by reinforcing the behaviors they like and discouraging those they don't.
At the same time, though, every ChatGPT answer is, as one commentator put it, a "remix" of third-party material. The tool builds its replies by predicting which word should come next in a sentence, based on which words come next in sentences across the web. And just as the creators behind the machine inform the output, so do the users who ask the questions or otherwise participate in the conversation. All of this suggests that the degree of protection an AI model enjoys could vary depending on how much a given product merely repeats material versus synthesizing something new, and on how deliberately a user has prodded the model into producing a given answer.
So far, there is no legal clarity. Supreme Court Justice Neil M. Gorsuch suggested during oral arguments in a recent case involving Section 230 that today's AI creates content that "goes beyond picking, choosing, analyzing or digesting content," and that such content, in his view, may not be protected. Last week, the law's co-authors agreed with his analysis. But companies operating on the Internet's next frontier deserve a firmer answer from lawmakers. And to figure out what that answer should be, it is worth looking, once again, to the history of the Internet.
Scholars credit Section 230 with the explosive growth of the web in its formative years; without it, endless lawsuits could have kept any fledgling service from becoming a network as important as Google or Facebook. That's why many call Section 230 "the 26 words that created the Internet." The trouble is that many now believe, in retrospect, that this lack of consequences encouraged the Internet not only to grow but also to grow out of control. With AI, the world has the opportunity to act on the lessons learned.
That lesson should not be to unceremoniously strip large language models of Section 230's immunity. After all, it was for the best that the Internet was able to grow, even if its ills grew along with it. Just as websites could not have hoped to expand without Section 230's protections, these products cannot hope to provide a wide range of answers on a wide range of subjects, for a wide range of uses, which is exactly what we should want them to do, without some legal protection. At the same time, however, the United States cannot afford to repeat its biggest mistake in Internet governance, which was not governing at all.
Lawmakers should grant new AI models temporary breathing room under Section 230 while watching what happens as the industry takes off. They will have to work through the thorny questions these tools raise, such as who is liable, say, in a defamation case if the developer is not. They should study complaints, including lawsuits, and judge whether they could have been avoided by adjusting the scope of immunity. They should, in short, let the Internet of the future grow the way the Internet of the past did. But this time, they should pay attention.