By Briana Leibowicz Turchiaro
Boston University News Service
Boston University prepared its community for the election with training on identifying disinformation. Trainer Mike Reilley led the RTDNA/Google News Initiative Election Fact-Checking training program on Oct. 25.
During the event, Reilley covered a series of tools that people can use to more easily identify media that has been altered with AI. One of the main concerns with disinformation on the internet is the rise of deepfakes. According to TechTarget, deepfakes are synthetic media (images, video or audio) depicting real or nonexistent people, all of it edited or generated with AI.
Disinformation has existed for centuries, but the rise of the internet and AI has made it easier to manufacture and spread. Social media in particular is a prominent space for disinformation, Reilley said.
“AI gives us the ability to create images, video and audio with nothing more than a short text prompt,” Reilley said. “That’s a great thing, but in the hands of bad actors, it can produce devastating results.”
Identifying disinformation is always useful, but it is especially important to become well-versed in it during an election. Here are some tips to help:
In 2023, Google launched an “about this image” feature in its Image Search. The feature lets users discover several aspects of an image, such as its history and how other websites use it. It also lets users see an image’s metadata, which can be a strong indicator of AI alteration, Reilley said.
Reilley said these features can be useful for checking whether an image has already been published by news outlets in the past and for seeing how different pages describe the same image.
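For readers comfortable with a little code, an image’s metadata can also be inspected directly. Below is a minimal sketch using Python’s Pillow library; “photo.jpg” is a placeholder file name, and missing metadata alone proves nothing, since many platforms strip it from ordinary photos too.

```python
# A minimal sketch of reading an image's EXIF metadata with Pillow.
# "photo.jpg" is a placeholder. AI-generated images often carry no
# camera metadata at all, and a "Software" tag naming a generator
# can be another giveaway, though absent EXIF alone proves nothing.
from PIL import Image
from PIL.ExifTags import TAGS

exif = Image.open("photo.jpg").getexif()

if not exif:
    print("No EXIF metadata found.")
else:
    for tag_id, value in exif.items():
        tag = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
        print(f"{tag}: {value}")
```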
Beyond Google’s features, there are also indicators we can spot ourselves through careful analysis.
One common example is abnormalities in photos, such as odd finger placement or objects with incorrect proportions. AI also still struggles with facial expressions, often producing images with awkward or misplaced emotions that don’t match the image’s context.
For example, in the image below, the children’s facial expressions seem abnormal and confused rather than happy and engaged. The expression of the third girl, wearing a pink shirt, is especially telling of an AI-generated image.
In the same image, the hand placements don’t align with the children they belong to, another example of the abnormalities AI can introduce.
Another important tell for an AI-generated image is the text and labels that accompany it. AI still struggles to render coherent text within an image, so many AI images contain jumbled or misspelled words.
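One quick way to check is to pull any text out of an image with optical character recognition and look it over yourself. The snippet below is an illustrative sketch, assuming Python with the pytesseract wrapper and the Tesseract OCR engine installed; “poster.png” is a placeholder file name.

```python
# An illustrative sketch: extract text from an image with OCR so the
# jumbled or misspelled words typical of AI-generated images are easy
# to spot. Assumes pytesseract, Pillow and the Tesseract engine are
# installed; "poster.png" is a placeholder file name.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("poster.png"))
print(text)  # gibberish, repeated letters or misspelled labels are warning signs
```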
AI-generated media also extends to audio. Fake AI audio has been a particular concern in this election because it has already created confusion.
Earlier this year, during the New Hampshire primary, deepfake audio of Joe Biden circulated through robocalls. In it, Biden appeared to tell voters to “stay home and save their vote” for the general election. According to the FCC, the audio was created with AI that mimicked Biden’s voice.
The New Hampshire case is a clear example of how fake AI audio can be particularly dangerous, since it is harder to identify as AI-generated.
According to Enthu.ai, voice analysis software like Observe.ai and CallMiner can help detect synthetic tones, which are clear indicators of AI-generated audio.
According to Cointelegraph, many AI-generated recordings have a robotic tone that lacks the natural changes in pitch of a human voice. This is usually accompanied by speech that is emotionally disconnected from what the speaker is saying.
Another possible indicator is background noise. Real recordings almost always include some: a car honk or the buzz of a living room fan are sounds we are well accustomed to, while AI-generated audio is often unnaturally clean.
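For those who want to experiment, both of these indicators can be probed with a few lines of code. The sketch below is a rough heuristic, not a definitive detector; it assumes Python with the librosa and numpy libraries, “clip.wav” is a placeholder file name, and neither measurement is proof on its own.

```python
# A rough sketch of two simple audio checks, not a definitive detector.
# Assumes librosa and numpy are installed; "clip.wav" is a placeholder
# and the interpretation notes are heuristics, not published thresholds.
import numpy as np
import librosa

y, sr = librosa.load("clip.wav")

# 1. Pitch variation: estimate the fundamental frequency over time with
# pYIN, then measure how much it moves. Natural speech rises and falls;
# an unusually flat contour can hint at synthetic audio.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
pitch_std = np.nanstd(f0[voiced_flag])  # spread of pitch in voiced frames
print(f"Pitch standard deviation: {pitch_std:.1f} Hz")

# 2. Background noise: treat the quietest 10% of frames as a rough noise
# floor. Real-world recordings almost always have one; a near-silent
# floor can be another hint the audio was generated.
rms = librosa.feature.rms(y=y)[0]
noise_floor = np.percentile(rms, 10)
print(f"Estimated noise floor (RMS): {noise_floor:.6f}")
```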
A huge tip is to avoid getting news from social media. The number of people who rely on social media for news is increasing, according to Pew Research Center.
The same research found that around 49% of Americans connected the problem of keeping the public informed about current issues and events with the amount of disinformation they were consuming. That spread is made easier because social media apps place few restrictions on what users post.
According to an NYU Stern study, X, formerly known as Twitter, is a particular example. Since Elon Musk’s takeover of the company, his devotion to free speech has meant the removal of many restrictions on what users can post, opening the door to disinformation.
Reliable news outlets fact-check their articles and spread accurate information, a safeguard social media lacks, since that is not its primary function.
“AI detection tools always lag a bit behind improvements in AI creation tools, so it’s important to stay up on the most current technology and know what to look for,” Reilley said.