Meta, the parent company of Facebook, has barred political campaigners from using its new generative artificial intelligence advertising tools.
In a statement on its website, Meta said that advertisers promoting candidates and causes will not have access to the new AI ad-creation tools.
The company said that while it tests its generative AI ads-creation tools in Ads Manager, advertisers running campaigns related to housing, employment, credit, social issues, elections, politics, health, pharmaceuticals, or financial services are not permitted to use the new generative AI features.
In April, the Republican National Committee released what it described as the first US political advertisement made entirely with AI.
The ad offered an early glimpse of an era in which political campaigns use AI to create their advertising, and of the challenges to come as rival campaigns deploy the technology to sway public opinion.
It paired AI-generated images of US President Joe Biden and Vice President Kamala Harris with ominous visuals of boarded-up storefronts, surging crime, and shuttered banks.
Though it was not made with Meta's AI tools, the ad sparked widespread debate, in the US and beyond, over the potential use and misuse of AI to shape voter opinion.
Timothy Kneeland, a professor of political science and history at Nazareth College in upstate New York, said the alarming part is how cheaply and easily such content can be mass-produced. He also questioned whether the general public can recognize and exercise discernment about AI-driven attempts to influence them.
Kneeland added that AI could transform the economics of political campaigns and change how they are run.
He suggested campaigns might be able to halve their staff, while voicing concern about AI's broader implications for future employment.
He also pointed to AI's democratizing potential: by lowering costs, it could level the playing field for candidates who struggle to raise substantial campaign funds.
UN Secretary-General Antonio Guterres echoed these concerns in October, warning that easily produced AI-generated content could deceive people. He described the surreal experience of watching an AI app render him delivering a fluent speech in Chinese, a language he does not speak.
Meta's restriction on its AI advertising tools is one of several recent Big Tech announcements aimed at allaying concerns about the potential misuse of AI in political campaigns.
In September, Alphabet, Google's parent company, began requiring the disclosure of AI-generated content in political advertising. According to the company's blog post, any ad containing synthetic content that makes it appear a person said or did something they never did, or that alters footage of a real event to create a realistic but false depiction, must carry an explicit disclosure.
The companies have framed these steps as a commitment to transparency and accountability amid rising apprehension about AI's role in shaping political narratives.
The US Federal Election Commission, the independent agency that enforces federal campaign finance law, has advanced a petition to regulate "deliberately deceptive artificial intelligence campaign ads," though what action it will ultimately take remains uncertain.