Why Do Some AI Images Look Like Me?

Meta recently released a new standalone AI image generator.  The tech is based on its Emu image-synthesis model, and the way it all works might surprise you.  Meta AI is already built into Meta apps like Messenger and Instagram; now it is also available in a browser window, and it is quite impressive.  The only catch is that users are the ones supplying the source images.[1]

Meta scrapes our social media feeds to the tune of about one billion images, according to Ars Technica.  The AI can fabricate groups of people who look vaguely familiar, though perhaps only if you are looking for it.  One image the AI generated caused researchers to question its source.  They asked the AI to create an image of a group of people smiling at the camera, and some of the faces looked strangely familiar.  The researchers would not say any of them looked like their actual friends, but they also do not remember consenting to an AI experiment like this.  And what happens if this capability begins to be misused?

See:  https://redskyalliance.org/xindustry/how-to-spot-a-deepfake-it-s-easy

We hand over our data to help advertisers target us.  We grudgingly provide private information which can be used for nefarious purposes.  Now, Meta is using our images to build an AI image generator, whether we want to be involved with that or not.  Readers might think, what is the big deal?  The images are publicly available, and according to one report, the AI does not include the private images we only share with friends and family.  If an AI uses our images to create something unrecognizable, and if we are all contributing to the AI landscape in a fun and mostly harmless way, should we just play along?

Slowly but surely, we become unwilling participants in an AI revolution that could become an AI catastrophe.  Technology can be amazingly helpful, especially to creative types who need inspiration (of course, it could also put them out of a job).

See:  https://redskyalliance.org/xindustry/artists-fighting-back-against-ai

What some researchers object to is the lack of consent.  Permission might be buried somewhere in the terms of service, but no one remembers agreeing to it.  What is clear is that everyone is helping Meta increase market share and prove it can keep up with Microsoft, OpenAI, and Google.

Even with the Imagine app in a browser, there is no option to consent to Meta's use of public images.  No one knows where this will lead.  Most posters did not think much about privacy concerns when Facebook first launched, and what seems harmless and fun now can quickly turn sour.  Meta is probably scraping images and analyzing trends, learning how often we share photos of kids and family.  Users are feeding the beast, one image at a time.

For people who share public images without knowing how to make them private, this can feel like another invasion of privacy.  That is not a good feeling, even if the resulting images look amazing.

 

This article is presented at no charge for educational and informational purposes only.

Red Sky Alliance is a Cyber Threat Analysis and Intelligence Service organization.  For questions, comments, a demo, or assistance, please contact the office directly at 1-844-492-7225 or feedback@redskyalliance.com

Weekly Cyber Intelligence Briefings:

Reporting: https://www.redskyalliance.org/

Website: https://www.redskyalliance.com/

LinkedIn: https://www.linkedin.com/company/64265941

REDSHORTS - Weekly Cyber Intelligence Briefings

https://attendee.gotowebinar.com/register/5993554863383553632

 

[1] https://www.forbes.com/sites/johnbbrandon/2023/12/12/meta-is-scraping-our-photos-from-facebook-and-instagram-to-create-ai-images/
