Leaders in the field of artificial intelligence shatter misconceptions

Left to Right: Alfonso Carrillo Montiel, Nathan Johnson, PhD, Rich Palmer, and Sara Saperstein at the “Artificial Intelligence for Good” discussion. General Assembly, Boston, Mass., Sept. 24. Photo by Hannah Rogers/BU News Service

By Hannah Rogers
BU News Service

“Higher education needs to skill people in dealing with the AI revolution, ‘cause it’s gonna happen,” said Rich Palmer, co-founder and CTO of Gravyty, an artificial intelligence company that creates AI tools for fundraiser enablement.

Palmer and three other local professionals who use artificial intelligence in their daily life cleared up public misconceptions at a panel discussion titled “Artificial Intelligence for Good” in front of an audience of about forty people at Boston’s General Assembly Tuesday night. 

Palmer was joined by Sara Saperstein, the lead data scientist at MassMutual; Nathan Johnson, an associate research scientist in bioinformatics and data analysis at Harvard Medical School; and Alfonso Carrillo Montiel, a lawyer and program manager at Global Alumni Education.

Throughout the discussion, the speakers touched on misconceptions people have about artificial intelligence, their own definitions of the technology and how it can be used to improve human lives.

“Artificial intelligence is using computer programs to mimic or replicate human intelligence,” Saperstein said.

Saperstein said she uses artificial intelligence at MassMutual to develop machine learning models that can predict risks to a person’s health. 

Palmer and Saperstein argued against the public’s fear of strong AI. The two said AI is what we make of it and how we choose to utilize it.

Rich Palmer answers a question about AI at the General Assembly, Boston, Mass., Sept. 24. Photo by Hannah Rogers/BU News Service

“[People believe] it’s inherently evil,” said Palmer. “Is a machine learning algorithm inherently evil? It’s us. We decide what data we use.”

Johnson cited a Tesla accident in which someone was killed by a self-driving vehicle and asked the audience who was at fault: the makers of the vehicle or the user of the AI? Johnson argued that because the AI was both created and used by humans, the fault, wherever it lay, was not the computer’s.

“[People think] it’s smarter than we give it credit for,” Johnson said. “The fundamental reality is that the computer does what you tell it to do. It’s not thinking in the sense of how we think.”

Meanwhile, Montiel commented on the bigger picture. He said he believes misconceptions about AI are so prevalent because people are afraid of what they don’t understand and give technology “too much credit.”

“[AI] is just part of the progress we have as a society,” Montiel said. 

The panelists all agreed the public should learn as much as possible about online algorithms and complex technology through YouTube videos, tutorials and courses. The group urged the audience not to rely on the government to make progress with complex technology.

Palmer also reminded the audience that those who develop and deploy AI should be held consistently accountable for their practices.

“Most of the world doesn’t have the privilege that many in the west do,” Palmer said. “We have to look out for the vulnerable people and ask at every step whether we’re doing the right thing.”
