
Meta Has Implemented New AI Transparency Rules That Will Have an Impact Across Many Industries

Political Ads Are the Main Focus, but Other Industries Must Follow the Rules

Picture this: You are scrolling through Meta (or Facebook, if you still want to call it that…) and see a political ad. 

A gravelly voiceover accuses a certain Opposing Candidate of signing a law that had a terrible impact on a certain population of people.

A bleakly colored medium-wide shot shows a group belonging to said population of people. Cut to regular medium shots, close-ups, extreme close-ups, canted angles, fisheye-lens coverage of people with tears streaming down their faces because of the signing of the law in question.

The voiceover gently tugs at your heartstrings as it sings the virtues of the advertisement’s chosen candidate, urging you to vote for this candidate over the opposing candidate.

But here’s the thing: the shots of those suffering people were A.I.-generated. There may be such a group of people out there in your country, but the people you saw in that ad were not real.

Rather, someone generated them solely to tug at your heartstrings.

Meta, née Facebook, has recently announced a set of guidelines for political ads that employ A.I. in their creation.

Crucially, though, these A.I. transparency rules extend beyond the political realm to a number of other industries.

Meta Is Requiring AI Transparency in Political Advertising

The basic rule is that if a political ad depicts A.I.-generated events or people, the advertiser must disclose this fact.

This rule takes effect in 2024. Another aspect of it is that political advertisers cannot use Meta’s own generative A.I. tools to create political or social issues ads, according to the New York Times. 

The NYT additionally reports that Meta A.I. cannot generate any ad “related to housing, employment, credit, health, pharmaceuticals, or financial services.”

However, such ads can still be generated using a third-party tool like DALL-E. The catch, of course, is that you still need to disclose the use of A.I. in creating the advertisement.

This is just in time for the 2024 U.S. election. Even during the previous election just four years ago, the potential for using A.I.-generated content in ads was not too large an issue.

We still need to see how Meta will actively enforce this rule.

The Impact AI Transparency Will Have on Business Owners

If your business falls into any of the categories quoted in the section above, this rule will already affect you in 2024.

Consider this your first heads-up that you will need to start disclosing the use of A.I. in your marketing content.

Depending on the level of success of this rule—and, of course, public sentiment surrounding it—tech companies like Meta (which owns Instagram, mind you) may go on to require all business owners to disclose the use of A.I. in marketing. 

This means that all businesses will have to be upfront with their customer base about whether they generate the marketing content they put online using A.I.

This may not be too big of a deal if using A.I. in marketing becomes the norm—and, of course, consumers are not put off by content made by A.I.

How Will Meta Enforce AI Transparency? 

As hinted above, it is unknown how Meta will actually make companies follow these rules. 

For instance, if a company fails to report the use of A.I., and everyone is fooled, will they get away with it?

Does Meta have A.I.-detecting software that could help snuff out the use of A.I., somehow? 

These are important questions, because the answers could mean that Meta only treats the issue of misinformation superficially, relying essentially on the honesty of its advertisers.

As of right now, Meta may have more sophisticated methods for vetting content in the works, but it seems the process will mostly involve a self-reported “Was this ad generated by A.I.?” question, where you check the yes or no box.

Depending on the penalties that businesses may face for violating the policy, business owners tempted to Trojan-horse A.I. content onto Meta will likely find that simply checking “yes” and disclosing is the safer move.

You Can Still Make Small Edits Without Disclosing

Meta’s wording is somewhat vague here, but if the A.I.-generated content is not consequential to a claim the ad asserts, then advertisers do not have to disclose it.

Think of adjusting brightness or retouching a photo purely to make it look superficially nicer to the eye, with no deeper or consequential motive.

Basically, you will likely be able to get away with small edits, but it is wise to play it safe and disclose if you do anything large-scale with A.I. in your ad.

GO AI Articles

Guardian Owl Digital dedicates itself to assisting businesses worldwide in learning about and implementing A.I.

For continuing your AI education and keeping up with the latest in the world of AI, check out our AI blog: 

New Year, New AI: Here Are the Biggest Trends in AI Coming in 2023

How AI Could Have Helped Southwest Avoid Its Holiday Disaster

IBM Watson vs. Microsoft’s ChatGPT: The AI Chat Matchup of the Century

AI on the Stand: Explaining the Lawsuit Against the Microsoft Automated Coder

AI and You: What Determines Your AI Recommendations in 2023?

How AI Could Have Foreseen the Crypto Crash—(It Already Analyzes Exchange Markets)

Google’s Response to ChatGPT: What the Tech Giant Is Doing to Improve Its Own AI Efforts
