Albeit very slowly, the Federal Election Commission is finally taking a step toward guidelines that could restrict the use of deepfake technology in political ads.
Last week, commissioners voted unanimously to open up a period for feedback on whether the FEC should be able to regulate the use of deepfakes — or AI-generated video and audio footage — in ads.
The 60-day public commenting period is expected to open this week. (Here’s a link with details on how you can comment.)
The rising popularity of artificial intelligence has heightened concerns about how this technology can be used to deceive and manipulate voters, with experts sounding the alarm about how deepfakes can alter the public’s perception of reality in frightening ways.
And conservatives — including Florida Gov. Ron DeSantis and the Republican National Committee — have already begun to dubiously deploy deepfake technology in attack ads.
Backed by dozens of lawmakers, Public Citizen petitioned the FEC in July to clarify its authority to restrict the use of deepfakes under a law prohibiting officials from making “fraudulent misrepresentations” of a political opponent. The petition was the nonprofit organization’s second attempt at getting the FEC to take action; its first petition failed in June, after all three Republican commissioners voted against advancing to the public comment phase.
It’s a welcome sight to see federal election officials finally taking the problem of deepfakes seriously. But there’s no time to waste. High-tech media manipulation isn’t a looming issue — it’s bearing down on us as we speak.