Meta announces that it will identify images made by AI on its platforms
The company assured that "in the coming months" users will know if the images they see on Facebook, Instagram and Threads have been generated by artificial intelligence.
Meta announced Tuesday that it will identify images made by AI on its platforms. The company assured that "in the coming months" users will know whether the images they see on these social networks have been generated using AI, a measure aimed at avoiding problems ahead of the presidential elections taking place this November.
The news was announced by Meta's President of Global Affairs, Nick Clegg, who said in a company blog post that Meta's priority is to label these images for better user transparency.
AFP notes that Meta already labels images generated with its own tool, Meta AI, launched in December. The company now intends to offer the same information for images made with tools from other companies, such as Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock.
2024 elections key for the development of Meta's feature
This feature, Clegg said, will be available "in all the languages supported by each application" in order to curb misinformation. The announcement comes at a key moment: almost half of the world's population will be eligible to vote this year, and the use of AI-generated images in the United States presidential election this November is of particular concern.
The company is also trying to prevent situations like the one singer Taylor Swift experienced recently, when fake nude images of the artist, created with AI, went viral and were viewed 47 million times on X (formerly Twitter).
This is the kind of situation Meta hopes to avoid with its new tool. However, Clegg told AFP that the company is aware this labeling feature "will not eliminate" the production of fake images, though it hopes their spread will be minimized "within the limits of what technology currently allows."