YouTube is developing technology that will let users identify content generated by artificial intelligence, such as cloned songs or synthetic faces. The new functionality will be integrated into Content ID, the system YouTube currently uses to manage copyright claims. The main goal is to give creators and viewers greater security by making it clear whether a video is authentic.

The platform is working with key partners to launch a pilot program early next year, focusing on protecting creators from deepfakes and AI-generated content, especially in the entertainment industry.


YouTube strengthens its fight against AI

One of the main focuses will be detecting videos that use AI-generated likenesses without consent, particularly in music and film. The technology will enable professionals, such as actors and musicians, to verify whether their faces or voices have been recreated without permission. YouTube will also evaluate whether AI-generated creations violate Content ID rules, tagging or removing them depending on the infraction.

With this initiative, YouTube aims to manage the evolution of AI on its platform, giving creators tools to prevent fraud while still harnessing AI's creative potential. All AI-generated content, however, must comply with community guidelines to remain on the platform.

YouTube acknowledges AI's potential but stresses the importance of regulating its use to protect both creators and consumers, so that what is seen and heard is genuine and legal, pointing toward a safer future for content on the platform.