Meta announces that it will identify images made by AI on its platforms

The company said that "in the coming months" users will know whether the images they see on Facebook, Instagram and Threads were generated by artificial intelligence.

Meta announced Tuesday that it will identify images made by AI on its platforms. The company said that "in the coming months" users will know whether the images they see on these social networks were generated using AI, a move intended to help head off problems ahead of the U.S. presidential election this November.

The news came from Meta's President of Global Affairs, Nick Clegg, who wrote in a company blog post that Meta's priority is to identify such images for greater user transparency:

In the coming months, we will label images that users post to Facebook, Instagram and Threads when we can detect industry standard indicators that they are AI-generated.

AFP notes that Meta already identifies images generated with its own tool, Meta AI, launched in December. The company now intends to offer the same information for images made with tools from other companies, such as Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock.

We’re building industry-leading tools that can identify invisible markers at scale – specifically, the "AI generated" information in the C2PA and IPTC technical standards – so we can label images from Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock as they implement their plans for adding metadata to images created by their tools.
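The "invisible markers" Clegg refers to are metadata fields defined by the C2PA and IPTC standards. The IPTC standard, for example, defines a Digital Source Type vocabulary whose "trainedAlgorithmicMedia" term signals AI-generated content. As a toy illustration of the idea (not Meta's actual detection pipeline, which works at scale and would parse signed C2PA manifests and XMP metadata properly), a naive check might scan a file's raw bytes for that IPTC term:

```python
# Toy sketch: scan raw image bytes for the IPTC "trainedAlgorithmicMedia"
# digital-source-type term that generator tools embed in XMP metadata.
# Real detectors parse the XMP packet and verify C2PA manifests instead of
# substring-matching; this only illustrates what the marker looks like.

AI_MARKER = b"trainedAlgorithmicMedia"  # IPTC Digital Source Type term for AI-generated media

def looks_ai_generated(data: bytes) -> bool:
    """Return True if the raw bytes contain the IPTC AI-generated term."""
    return AI_MARKER in data

# Hypothetical XMP fragment such as a generator might embed in a JPEG:
sample = (
    b"<Iptc4xmpExt:DigitalSourceType>"
    b"http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
    b"</Iptc4xmpExt:DigitalSourceType>"
)
print(looks_ai_generated(sample))  # True
```

A production system would also need to handle images whose metadata has been stripped or re-encoded, which is why this naive approach alone is insufficient.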

2024 elections key for the development of Meta's feature

Clegg says the feature will be available "in all the languages supported by each application," to help curb misinformation. The announcement comes at a key moment: almost half of the world's population will be eligible to go to the polls this year, and the use of AI-made images in the 2024 U.S. presidential election this November is of particular concern.

The company is also trying to avoid situations like the one singer Taylor Swift experienced recently, when fake nude images of the artist, created with AI, went viral and were viewed 47 million times on X (formerly Twitter).

This is the kind of situation Meta hopes to avoid with its new tool. However, Clegg told AFP, the company is aware that this labeling feature "will not eliminate" the production of fake images, though it hopes to minimize them "within the limits of what technology currently allows":

It's not perfect, the technology is not fully developed yet, but of all the platforms it is the most advanced attempt yet to provide meaningful transparency to billions of people around the world. I sincerely hope that by doing this and leading the way, we encourage the rest of the industry to work together and try to develop the common technical standards we need.