Business & Tech

State Senator drafts legislation to regulate tools like ChatGPT—with the help of ChatGPT

Introducing ChatGPT. Photo courtesy of openai.com.

By Jasmine Li

Boston University News Service

Massachusetts State Senator Barry R. Finegold drafted legislation earlier this month to regulate the use of artificial intelligence tools like ChatGPT. He used ChatGPT to help draft the bill to “illustrate its power,” a spokesperson for the Senator said.

The bill would require companies with generative AI technology to register their tools with the Attorney General’s office for regular risk assessments. It would also require these companies to produce text with a distinctive watermark or authentication process to allow viewers to easily identify AI-generated text.

Sen. Finegold’s spokesperson said these regulations would not restrict the growth and development of AI technology.

“Setting guardrails for all of these new companies as they grow and develop can be helpful in shaping what the landscape looks like and giving everyone in this emerging technology space some rules to play by,” according to Finegold’s office.

Artificial intelligence generally refers to computer systems able to perform tasks that require human intelligence. Sen. Finegold’s bill specifically targets generative AI tools like ChatGPT and DALL-E that generate original text, images, code, or synthetic data from user prompts.

The AI-generated bill required light editing before it was appropriate to introduce to the State Legislature.

“It was about 70% of the way there after some poking and prodding,” the spokesperson said. “It still required some work, but even so, 70% of the way there is still very impressive.”

Sen. Finegold’s office noted that ChatGPT added several important details.

“It defined a key term, expanded on what the ‘core operating standards’ of generative AI models should be, and clarified the process for registering with the Attorney General’s Office.”

Experts are still skeptical about generative AI’s potential to deliver meaningful content on a consistent basis.

According to Andrew Sellars, a Boston University School of Law professor specializing in technology and intellectual property, generative AI is not as intelligent as people believe.

“If you look with a critical eye at what gets put out of ChatGPT, I think you’ll find very often that the results are a bit vapid and a bit empty,” he said. “That’s exactly what I see in this bill as well.”

The Senator’s use of ChatGPT to draft the bill was fun and provocative, Sellars said, but its provisions on discrimination and plagiarism need more definition. As for watermarking, “I don’t see a technical solution for what they’re proposing there,” he said.

Data privacy is a significant concern, as most AI tools are trained on publicly available content from across the web, including content that is copyright-protected. AI companies argue that their use of this content is protected by the fair use doctrine, which permits the use of copyrighted material for a limited and “transformative” purpose.

A Supreme Court ruling in 2022 “seemed to indicate that it largely would think that web scraping is not a violation of the Computer Fraud and Abuse Act,” Sellars said, referring to the federal government’s anti-hacking law.

But other laws can apply to the collection of information online. 

“When you gather a person’s biometric information, including their face, for purposes of building facial analysis technology, there are laws about biometric privacy that could come into play,” Sellars explained. 

Authorship of content produced by tools like ChatGPT is another complex debate. 

“Some have made the argument that no one is the author because no human actually was the one who captured that expression in that way,” Sellars said. “Therefore, the work might be thought of as in the public domain.”

However, in circumstances like the drafting of Sen. Finegold’s bill, where human edits were made to an AI draft, the person doing the editorial work has a solid argument that they can be considered the author, Sellars said.

As these AI tools become available to the general public, artists and writers are paying close attention to their development.

Angelina Pei, an illustration and furniture design student at the Rhode Island School of Design, said she had used an AI generator to inspire imagery for her artwork. Pei said she understands the argument both for and against regulating AI tools. 

“I don’t think it matters to the extent of federal regulation. However, some structure could alleviate some strife between artists and AI,” she said. “I think there could be a dialogue.”

“There are some AI programs which use pre-existing artwork that rip off artists, but it’s not only the images or art that people search for and hold on to,” Pei said. “We gravitate and connect to humanity, and that is something AI can manipulate but not recreate.”
