YouTube will require creators to identify whether their content was made with artificial intelligence

The video platform is the latest to take action against the dangers of this type of technology.

Social media companies are taking action to protect users from the dangers that come with using new types of technology such as artificial intelligence (AI). YouTube is the latest company to update its usage policies.

The platform, owned by Google LLC, will require creators to disclose whether the content they publish was made with AI. Viewers will then see a notification informing them that the content was created with this technology.

It is still unknown exactly when the new policy will go into effect, although it is not expected to do so until next year. YouTube wants to prevent certain types of content from being manipulated, especially content dealing with matters of public relevance. The platform's vice presidents of product management, Jennifer Flannery O'Connor and Emily Moxley, wrote on the company's blog: "This is especially important in cases where the content discusses sensitive topics, such as elections, ongoing conflicts and public health crises, or public officials."

If creators violate the new policy, YouTube may remove the content and suspend their accounts.

This is not the first time a subsidiary of Alphabet Inc. has tried to crack down on artificial intelligence. In September, Google announced that campaigns for the upcoming presidential elections must disclose whether their political ads were designed with AI. That measure went into effect this month.

Other platforms that took action against AI

Starting January 1, 2024, Meta Platforms will require advertisers to identify whether artificial intelligence was used to create election ads. Campaigns that post content on any of its platforms, such as Facebook, Instagram, or WhatsApp, must disclose that it was made with this type of technology.

The measure will apply worldwide.