Facebook and Instagram users will start seeing labels on AI-generated images in their social media feeds, part of a broader tech industry initiative to sort what is real from what is not. A Meta spokesperson said on 6 February 2024 that the company is working with industry partners on technical standards that will make it easier to identify images, and eventually video and audio, generated by artificial intelligence tools.[1]
See: https://redskyalliance.org/xindustry/why-do-some-ai-images-look-like-me
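The standards Meta is referring to generally work by embedding provenance metadata inside the image file itself, which a platform can then inspect at upload time. As a rough illustration of how such a check might look, here is a minimal Python sketch using Pillow; the marker strings and the simple substring match are assumptions for illustration only, not Meta’s actual pipeline (real C2PA manifests are structured and cryptographically signed).

```python
# Illustrative sketch: scan an image's embedded metadata for AI-provenance
# markers. The marker strings below are assumptions; real provenance data
# (e.g., C2PA manifests) is structured and cryptographically signed.
from PIL import Image

# Hypothetical markers a participating generator might leave behind.
AI_MARKERS = ("trainedAlgorithmicMedia", "Imagined with AI", "c2pa")

def has_ai_marker(path: str) -> bool:
    """Return True if any known AI-provenance marker appears in the metadata."""
    img = Image.open(path)

    # PNG text chunks and other loader-supplied fields land in img.info.
    blobs = [str(v) for v in img.info.values()]

    # EXIF fields (e.g., Software) can also carry a generator's name.
    blobs += [str(v) for v in img.getexif().values()]

    return any(m.lower() in b.lower() for m in AI_MARKERS for b in blobs)

if __name__ == "__main__":
    print(has_ai_marker("example.png"))  # path is a placeholder
```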
Just how well it will work remains to be seen at a time when it is easier than ever to make and distribute AI-generated imagery that can cause harm, from election misinformation to nonconsensual fake nudes of celebrities. “It’s kind of a signal that they’re taking seriously the fact that generation of fake content online is an issue for their platforms,” said Gili Vidan, an assistant professor of information science at Cornell University. She said the approach could be “quite effective” in flagging a large portion of AI-generated content made with commercial tools, but it is unlikely to catch everything.
Meta’s president of global affairs, Nick Clegg, did not specify when the labels would appear but said it would be “in the coming months” and in different languages, noting that “a number of important elections are taking place around the world.” He added, “As the difference between human and synthetic content gets blurred, people want to know where the boundary lies.”
Meta already puts an “Imagined with AI” label on photorealistic images made by its own tool, but most of the AI-generated content flooding its social media services comes from elsewhere. Several tech industry collaborations, including the Adobe-led Content Authenticity Initiative, have been working to set standards. A push for digital watermarking and labeling of AI-generated content was also part of an executive order that US President Joe Biden signed in October 2023.
Clegg said that Meta will work to label “images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.”
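On the generating side, “adding metadata” can be as simple as writing extra fields into the file when it is saved. The sketch below, assuming Pillow and a plain PNG text chunk as the carrier, shows the idea; the ai_generated and generator keys are purely hypothetical, and the production tools named above use signed C2PA-style manifests rather than unauthenticated text chunks.

```python
# Minimal sketch of the embedding side: attach provenance fields to a PNG
# at save time. Keys and values here are hypothetical, for illustration.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64), color="gray")  # stand-in for generated output

meta = PngInfo()
meta.add_text("ai_generated", "true")           # hypothetical flag
meta.add_text("generator", "example-model-v1")  # hypothetical tool name

img.save("labeled_output.png", pnginfo=meta)

# Round-trip check: the chunks come back via img.info on reload.
print(Image.open("labeled_output.png").info)
```

A plain text chunk like this is trivially stripped or forged, which is exactly why the industry effort centers on signed manifests and watermarking rather than bare metadata.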
Google said last year that AI labels were coming to YouTube and its other platforms. “In the coming months, we’ll introduce labels that inform viewers when the realistic content they’re seeing is synthetic,” YouTube CEO Neal Mohan reiterated in a year-ahead blog post last week.
One potential concern is that tech platforms could become more effective at identifying AI-generated content from a set of major commercial providers while missing what is made with other tools, creating a false sense of security for consumers. “There’s a lot that would hinge on how platforms communicate this to users,” said Cornell’s Vidan. “What does this mark mean? With how much confidence should I take it? What is its absence supposed to tell me?”
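Vidan’s point can be made concrete: a metadata check yields only two outcomes, marker found or marker absent, and the second says nothing about whether an image is authentic. A toy sketch of that asymmetry (the labels and function are illustrative assumptions, not any platform’s actual logic):

```python
# Toy sketch: a marker-based check cannot certify authenticity, only flag
# output from participating tools. Labels are illustrative assumptions.
from enum import Enum

class Provenance(Enum):
    AI_LABELED = "Labeled: AI-generated (marker found)"
    UNKNOWN = "Unlabeled: no marker found (not proof of authenticity)"

def classify(has_marker: bool) -> Provenance:
    # Absence of a marker only means no participating tool flagged the image;
    # output from non-participating tools falls through silently.
    return Provenance.AI_LABELED if has_marker else Provenance.UNKNOWN

print(classify(True).value)
print(classify(False).value)
```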
This article is presented at no charge for educational and informational purposes only.
Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization. For questions, comments, a demo, or assistance, please contact the office directly at 1-844-492-7225 or feedback@redskyalliance.com.
Reporting: https://www.redskyalliance.org/
Website: https://www.redskyalliance.com/
LinkedIn: https://www.linkedin.com/company/64265941
Weekly Cyber Intelligence Briefings: REDSHORTS
https://attendee.gotowebinar.com/register/5993554863383553632
[1] https://www.securityweek.com/meta-says-it-will-label-ai-generated-images-on-facebook-and-instagram/