Meta under scrutiny: Australian Senate investigates use of Australians' photos to train AI without consent
The company's global privacy policy director defended the practice, arguing that only "public data" is used.
A Senate committee in Australia has questioned the practices of tech firm Meta following reports that the company used photos of Australians to train its artificial intelligence models without obtaining users' explicit permission.
The controversy was sparked during a hearing in which the Select Committee on Adopting Artificial Intelligence, chaired by Labor senator Tony Sheldon, asked Meta to explain the ethics and justification behind its use of public images from Instagram and Facebook dating back to 2007.
Use of public data without consent
Melinda Claybaugh, Meta's director of global privacy policy, defended the practice, arguing that the company only uses "public data." According to Claybaugh, when users post content on Facebook or Instagram and choose to make it public, that information becomes available for a variety of uses, including the training of AI models. Critics counter that users never gave explicit consent for that specific purpose.
Lack of opt-out option for Australia
Senator Sheldon raised a further concern: Australian users, unlike their counterparts in the European Union, have no opt-out option to exclude their data from AI training. In Europe, privacy rules, notably the General Data Protection Regulation (GDPR), allow users to refuse to have their data used to train AI. Claybaugh acknowledged that the European option exists because of that specific legal framework, but gave no indication of whether Meta would offer a similar option in Australia.
Concerns about the use of images of minors
Another crucial issue addressed during the hearing was the use of images of minors. Claybaugh maintained that Meta does not use photos of teenagers to train its AI, only images of adults aged 18 and over. That assurance, however, comes amid growing concern over children's privacy.
In July 2024, Human Rights Watch (HRW) found that LAION-5B, a dataset used to train AI image tools, contained photos of Australian children, some as young as three years old. HRW warned of the risks these images pose, including the possibility that malicious actors could misuse them to generate explicit content or for other harmful purposes. The organization urged the Australian government to adopt stricter laws to protect minors' personal data and prevent its misuse in emerging technologies.