Meta has announced that it will begin enforcing new requirements for political advertisements on its platforms at the start of 2024, The Wall Street Journal reports. The new requirements specifically concern the use of AI to create or digitally alter media used in ads on Facebook, Instagram and more.
Examples Meta specifically highlighted include advertisements that depict a real person saying or doing something they didn't actually say or do, and the digital creation of a realistic-looking event or person that didn't occur or doesn't exist. Both types of advertisements could mislead or harm viewers, a risk Meta aims to mitigate.
The policy sets out criteria that determine whether the media in question falls within the disclosure requirement, such as whether the alteration is "immaterial". If an advertiser fails to make a required disclosure, the ad is likely to be rejected, and advertisers who repeatedly fail to disclose will incur penalties from Meta, which remain unspecified.
Meta has also confirmed that the policy will apply not only to political advertisements but also to ads related to social issues. Additionally, advertisers cannot use Meta's own generative AI tools to make political ads.
With the U.S. election season beginning in January with the primary elections, this will be the first true test of the impact generative AI can have on news and information spread across platforms around the world. It is the latest in a wide range of measures taken by companies and governments to moderate and safeguard the use of AI since ChatGPT launched one year ago.
Source: The Wall Street Journal