A video that uses an artificial intelligence voice-cloning tool to mimic the voice of Vice President Kamala Harris saying things she never said is raising concerns about the power of AI to mislead, with Election Day about three months away.
The video gained attention after tech billionaire Elon Musk shared it on his social media platform X on Friday without explicitly noting it was originally released as parody.
By late Sunday, Musk had clarified the video was intended as satire, pinning the original creator’s post to his profile and using a pun to make the point that parody is not a crime.
The video uses many of the same visuals as a real ad that Harris, the likely Democratic presidential nominee, released when launching her campaign. But the fake ad swaps out Harris' voice-over audio for an AI-generated voice that convincingly impersonates her.
“I, Kamala Harris, am your Democrat candidate for president because Joe Biden finally exposed his senility at the debate,” the AI voice says in the video. It claims Harris is a “diversity hire” because she is a woman and a person of color, and it says she doesn’t know “the first thing about running the country.” The video retains “Harris for President” branding and also splices in authentic past clips of Harris.
Mia Ehrenberg, a Harris campaign spokesperson, said in an email to The Associated Press: “We believe the American people want the real freedom, opportunity and security Vice President Harris is offering; not the fake, manipulated lies of Elon Musk and Donald Trump.”
The widely shared video is an example of how lifelike AI-generated images, videos and audio clips have been used both to poke fun and to mislead about politics as the United States draws closer to the presidential election. It exposes how, even as high-quality AI tools have become far more accessible, there remains a lack of significant federal action to regulate their use, leaving rules guiding AI in politics largely to states and social media platforms.
The video also raises questions about how best to handle content that blurs the lines of what is considered an appropriate use of AI, particularly if it falls into the category of satire.
The original user who posted the video, a YouTuber known as Mr Reagan, disclosed from the beginning both on YouTube and on X that the manipulated video is a parody. Yet Musk’s initial post with the video, which had far wider reach with 130 million views on X, according to the platform, only included the caption “This is amazing” with a laughing emoji.
Over the weekend, before Musk clarified on his profile that the video was a joke, some participants in X’s “community note” feature suggested labeling his post as manipulated. No such label has been added to it, even as Musk has separately posted about the parody video.
Some users online have questioned whether his initial post might violate X’s policies, which say users “may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm.”
The policy has an exception for memes and satire as long as they do not cause “significant confusion about the authenticity of the media.”
Chris Kohls, the man behind the Mr Reagan online persona, pointed an AP reporter to a YouTube video he posted early Monday responding to the ordeal. In the YouTube video, he confirmed he used AI to make the fake ad and argued that it was obviously parody, with or without a label.
Musk endorsed Trump, the Republican former president and current nominee, earlier this month. He didn’t respond to an emailed request for comment.
Two experts who specialize in AI-generated media reviewed the fake ad’s audio and confirmed that much of it was generated using AI technology.
One of them, University of California, Berkeley, digital forensics expert Hany Farid, said the video shows the power of generative AI and deepfakes.
“The AI-generated voice is very good,” he said in an email. “Even though most people won’t believe it is VP Harris’ voice, the video is that much more powerful when the words are in her voice.”
He said generative AI companies that make voice-cloning tools and other AI tools available to the public should do better to ensure their services are not used in ways that could harm people or democracy.
Rob Weissman, co-president of the advocacy group Public Citizen, disagreed with Farid, saying he thought many people would be fooled by the video.
“I’m certain that most people looking at it don’t assume it’s a joke,” Weissman said in an interview. “The quality isn’t great, but it’s good enough. And precisely because it feeds into preexisting themes that have circulated around her, most people will believe it to be real.”
Weissman, whose organization has advocated for Congress, federal agencies and states to regulate generative AI, said the video is “the kind of thing that we’ve been warning about.”
Other generative AI deepfakes in the U.S. and elsewhere have tried to influence voters with misinformation, humor or both. In Slovakia in 2023, fake audio clips impersonated a candidate discussing plans to rig an election and raise the price of beer days before the vote. In Louisiana in 2022, a political action committee’s satirical ad superimposed a Louisiana mayoral candidate’s face onto an actor portraying him as an underachieving high school student.
Congress has yet to pass legislation on AI in politics, and federal agencies have only taken limited steps, leaving most existing U.S. regulation to the states. More than one-third of states have created their own laws regulating the use of AI in campaigns and elections, according to the National Conference of State Legislatures.
Beyond X, other social media companies also have created policies regarding synthetic and manipulated media shared on their platforms. Users on the video platform YouTube, for example, must disclose when they have used generative artificial intelligence to create videos or face suspension.