(This article from November 6 has been updated to remove the mention of Snapchat’s political advertisement policy in paragraph 11.)
NEW YORK: Meta, the owner of Facebook, is barring political campaigns and advertisers in other regulated industries from using its new generative AI advertising products, a company spokesperson said on Monday, denying access to tools that lawmakers have warned could accelerate the spread of false information ahead of elections.
Meta publicly disclosed the decision on Monday night in updates to its help center, following the publication of this article. While the company's advertising policies contain no rules specific to AI, its advertising standards already prohibit ads with content that has been debunked by its fact-checking partners.
In a note appended to several pages outlining the functionality of the tools, the company stated, “As we continue to test new Generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for Housing, Employment or Credit or Social Issues, Elections or Politics, or related to Health, Pharmaceuticals or Financial Services aren’t currently permitted to use these Generative AI features.”
“We believe this approach will allow us to better understand potential risks and build the right safeguards for the use of Generative AI in ads that relate to potentially sensitive topics in regulated industries,” it said.
The policy change comes a month after Meta, the world's second-largest digital advertising platform, announced it was starting to expand advertisers' access to AI-powered advertising tools that can instantly create backgrounds, edit images, and vary ad copy in response to simple text prompts.
Access to the capabilities was initially limited to a small group of advertisers beginning in the spring. The company said at the time that it was on track to roll them out to all advertisers worldwide by next year.
Following the excitement around the release of OpenAI’s ChatGPT chatbot last year—which can respond to queries and other prompts with written replies that resemble those of a human—Meta and other tech firms have hurried to introduce generative AI ad solutions and virtual assistants in recent months.
The companies have yet to publicly detail the safety guardrails they plan to impose on those systems, which makes Meta's decision on political advertisements one of the industry's most significant AI policy choices to date.
Alphabet's Google, the largest digital advertising company, announced this week that similar image-customizing generative AI ad tools would become available. A Google spokesperson told Reuters the company plans to keep politics out of those products by blocking a list of "political keywords" from being used as prompts.
Additionally, Google has scheduled a policy change for mid-November that would mandate that any election-related advertisements carry a disclaimer if they use “synthetic content that inauthentically depicts real or realistic-looking people or events.”
TikTok bans political advertisements, and Snap, the owner of Snapchat, blocks political content in its AI chatbot. X, the former Twitter, has not released any generative AI advertising tools.
Last month, Nick Clegg, the chief policy officer at Meta, said that generative AI usage in political advertising was “clearly an area where we need to update our rules.”
Ahead of the recent AI safety summit in the UK, he urged governments and tech companies to prepare for the possibility that the technology could be used to interfere with the 2024 elections, calling for particular focus on election-related content that "moves from one platform to the other."
Clegg earlier told Reuters that Meta was blocking its user-facing Meta AI virtual assistant from creating lifelike images of public figures. This summer, Meta committed to developing a system to "watermark" content generated by AI.
Meta prohibits misleading AI-generated video in all content, including organic, unpaid posts, with an exception for parody and satire.
The company's independent Oversight Board said last month that it would examine the wisdom of that approach, taking up a case involving a doctored video of US President Joe Biden that Meta said it had left up because the video was not AI-generated.