BOSTON – Artificial intelligence is quickly becoming part of everyday life. Software like ChatGPT can write student essays, draft cover letters, and even offer advice from just a small amount of information supplied by the user. Now, some Massachusetts lawmakers want to regulate it.
“I think we’re at the beginning of a transformative technology that will have a huge impact on the lives of so many people,” said State Senator Barry Finegold.
Finegold wants to get a head start, drawing a parallel to Facebook’s debut nearly 20 years ago.
“We thought it was kind of cute, and college kids were using it, but we never knew how powerful Facebook was going to be,” Finegold said.
Artificial intelligence, or AI, is commonly broken down into four categories: Reactive Machines, Limited Memory, Theory of Mind, and Self-Awareness.
Some of those categories don’t exist yet, but reactive AI is the most common, and it is where ChatGPT fits in: the software reacts to whatever you type into its prompt box.
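As an illustration of that reactive loop, here is a minimal sketch in Python that sends a single prompt to OpenAI’s public chat-completions endpoint and prints whatever comes back; the model name and the sample prompt are placeholders chosen for this example, not details from the story.

```python
import os
import requests

# Minimal sketch of a "reactive" exchange: the model only responds to the
# prompt it is given; it takes no action on its own.
API_URL = "https://api.openai.com/v1/chat/completions"

def ask(prompt: str) -> str:
    """Send a single user prompt and return the model's reply."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Example of the kind of request the article describes.
    print(ask("Draft a short cover letter for an entry-level marketing job."))
```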
Finegold wants the attorney general involved in understanding how these algorithms are being used, and he wants the output watermarked so students can’t use it to cheat.
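How such a watermark might work is still an open research question. One widely discussed approach is statistical: the generator is nudged toward a pseudo-random “green list” of words, and a detector later checks whether a suspect text uses green words more often than chance would predict. The toy sketch below shows only that detection arithmetic, with a hash-based green-list rule invented purely for illustration.

```python
import hashlib
import math

def is_green(prev_word: str, word: str) -> bool:
    """Toy rule: hash each word pair and call roughly half of them 'green'."""
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] % 2 == 0  # about 50% of pairs land on the green list

def watermark_z_score(text: str, green_fraction: float = 0.5) -> float:
    """How far the observed green-word count sits above what chance predicts."""
    words = text.lower().split()
    pairs = list(zip(words, words[1:]))
    if not pairs:
        return 0.0
    greens = sum(is_green(a, b) for a, b in pairs)
    expected = green_fraction * len(pairs)
    std_dev = math.sqrt(len(pairs) * green_fraction * (1 - green_fraction))
    return (greens - expected) / std_dev

# A watermarked generator would pick green words far more often than 50%,
# so its output would score several standard deviations above zero here;
# ordinary human writing should hover near zero.
print(round(watermark_z_score("the quick brown fox jumps over the lazy dog"), 2))
```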
Tech advocates like Taylor Puckett say software like ChatGPT can dramatically increase productivity.
“You spend less time digging through Google search results, it’s able to give you some coherent answers, you might have to make some small adjustments, but for the most part, the information is there,” Puckett said.
Cybersecurity expert Peter Tran said he is concerned about the risks of an unregulated space that is vulnerable to those intent on spreading misinformation. “I’m the opposite of Taylor,” Tran said.
Even with a built-in moral compass, artificial intelligence lets those who want to do harm move much more quickly.
“As an attacker, you can say, ‘I’m looking for this kind of functionality,’ and ChatGPT will generate that functionality for you very quickly, so you can make very small modifications to your intent and you’re off to the races,” Tran said.