Meta has added an option to label AI-generated content on its platforms, so that users can ‘recognise when something may have been created with generative AI.’
This is particularly useful for posts on Facebook, Instagram and Threads that feature AI-generated nail designs, so that clients can manage their nail art expectations at their next appointment.
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” comments Sir Nick Clegg, president of global affairs at Meta, in a post on the company’s website. “People are often coming across AI-generated content for the first time, and our users have told us they appreciate transparency around this new technology.
“It’s important that we help people know when photorealistic content they’re seeing has been created using AI.”
How it works
“When photorealistic images are created using our Meta AI feature, we do several things to make sure people know AI is involved, including putting visible markers that you can see on the images, and both invisible watermarks and metadata embedded within image files,” Clegg continues.
As a result, content created or edited using Meta’s AI tools and shared as a post, story or reel may automatically be labelled as ‘AI’ content, or carry a visible watermark that reads: ‘Imagined with AI’. Content created or edited using third-party AI tools will be labelled ‘Made with AI’.
Meta requires users who share photorealistic video or realistic-sounding audio to add the label themselves if the content has been digitally generated or altered. The company does not currently require users to label images that have been created or altered with AI; however, such images will automatically receive a label if Meta detects signs of AI involvement.
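Meta has not published how its detection works, but the metadata side of the mechanism Clegg describes can be sketched in broad strokes. The snippet below is a minimal, hypothetical illustration: it scans a file’s raw bytes for the IPTC ‘DigitalSourceType’ values that industry provenance standards use to mark generative-AI content. Real detection pipelines parse the embedded metadata properly and also check invisible watermarks; this byte scan is a deliberate simplification.

```python
# Hypothetical sketch only: Meta has not published its detection code.
# The marker values below come from the IPTC "DigitalSourceType"
# vocabulary used to flag generative-AI content in image metadata;
# the naive byte-scan approach is for illustration, not production use.

AI_PROVENANCE_MARKERS = [
    b"trainedAlgorithmicMedia",               # fully AI-generated media
    b"compositeWithTrainedAlgorithmicMedia",  # AI-edited composite media
]

def looks_ai_generated(image_bytes: bytes) -> bool:
    """Return True if any known AI-provenance marker appears anywhere
    in the raw file bytes (a simplistic scan, for illustration only)."""
    return any(marker in image_bytes for marker in AI_PROVENANCE_MARKERS)

# Fabricated blob standing in for a real image file with XMP metadata:
fake_image = (
    b"\xff\xd8\xff\xe1<x:xmpmeta><Iptc4xmpExt:DigitalSourceType>"
    b"trainedAlgorithmicMedia</Iptc4xmpExt:DigitalSourceType></x:xmpmeta>"
)
print(looks_ai_generated(fake_image))  # prints: True
```

In practice a platform would combine several signals, since metadata can be stripped when an image is re-saved or screenshotted, which is why Clegg also mentions invisible watermarks embedded in the pixels themselves.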
Looking to the future
“These are early days for the spread of AI-generated content,” Clegg adds. “As it becomes more common in the years ahead, there will be debates across society about what should and shouldn’t be done to identify both synthetic and non-synthetic content.
“Industry and regulators may move towards ways of authenticating content that hasn’t been created using AI as well as content that has. What we’re setting out are the steps we think are appropriate for content shared on our platforms right now. But we’ll continue to watch and learn, and we’ll keep our approach under review as we do.”