Vimeo is joining companies like TikTok, YouTube, and Meta in giving creators a way to label AI-generated content. The video hosting service announced on Wednesday that creators must now disclose to viewers when realistic content is created with AI.
The updates to Vimeo’s terms of service and community guidelines are meant to ensure that AI-generated, synthetic, or manipulated videos aren’t mistaken for authentic depictions of real people, places, or events. It’s a notable move for Vimeo, as advancing generative AI tools have made it increasingly difficult to distinguish real content from fake.
Vimeo doesn’t require creators to disclose content that is clearly unrealistic, such as animated videos, videos with obvious visual effects, or content that uses AI only for minor production assistance. However, videos that portray a celebrity saying or doing something they didn’t do in real life, or that show altered footage of an actual event or place, must carry an AI content label.
Additionally, the company said that AI content labels will appear on videos that use Vimeo’s set of AI tools, such as a tool that can delete long pauses and disruptions in speech.
A distinct label now appears at the bottom of the video, indicating that the creators voluntarily disclosed their use of AI. When uploading or editing a video, creators can select a checkbox for AI-generated content and specify whether AI was used for audio, visuals, or both.
For now, Vimeo is leaving it up to creators to label their AI-generated content. However, the company is working on an automated system that can detect AI-generated content and apply labels accordingly.
In an official blog post, CEO Philip Moyer wrote, “Our long-term goal is to develop automated labeling systems that can reliably detect AI-generated content, further enhancing transparency and reducing the burden on creators.”
Moyer, who joined the company this past April, has previously spoken about Vimeo’s stance on AI. In another blog post, he told users that Vimeo is protecting user-generated content from AI companies by prohibiting generative AI models from being trained on videos hosted on the platform. Similarly, YouTube CEO Neal Mohan has explicitly stated that using the platform’s videos to train models, including OpenAI’s Sora, is a violation of its terms of service.