The rise of AI-generated content on social media is raising alarm as it becomes increasingly difficult to distinguish real images and videos from manipulated or AI-generated ones. Truepic, a company backed by Microsoft, offers a solution with its Truepic Lens technology, which authenticates media at the point of creation. Truepic Lens captures data such as the date, time, location, and device information, then applies a digital signature so the authenticity of the image can be verified.
Why it matters: The proliferation of AI-generated content on social media has prompted calls for action from lawmakers and tech companies. Understanding the implications and challenges of AI-generated content helps technology executives make informed decisions about strategy, resource allocation, and measures to safeguard their platforms, their users, and the integrity of digital content.
- Truepic is seeing interest from various sectors, including NGOs, media companies, and insurance firms, as verifying the legitimacy of visual content becomes crucial.
- Lawmakers are calling on tech companies to deploy technology that recognizes AI-generated content and labels it clearly for users. Experts warn, however, that technical solutions alone may not be sufficient, emphasizing the need for cooperation among tech companies, governments, and academia to address the problem effectively.
- Companies are exploring different approaches, such as post-production identification of AI-generated images and real-time marking of images with digital signatures, to combat the issue of AI-generated content on social media.
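The point-of-capture approach described above can be sketched in a few lines: a capture record (date, time, location, device) is bound to a hash of the image bytes and signed, so any later change to the pixels or the metadata invalidates the signature. This is a simplified, hypothetical illustration, not Truepic's actual implementation; real provenance systems use asymmetric, certificate-based signatures, while an HMAC with a made-up per-device key stands in here so the sketch stays self-contained.

```python
import hashlib
import hmac
import json

# Hypothetical per-device signing key; real systems would use an
# asymmetric key pair provisioned in secure hardware on the device.
DEVICE_KEY = b"device-provisioned-secret"

def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    """Bind capture metadata to the image content and sign the record."""
    record = dict(metadata,
                  image_sha256=hashlib.sha256(image_bytes).hexdigest())
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(DEVICE_KEY, payload,
                                   hashlib.sha256).hexdigest()
    return record

def verify_capture(image_bytes: bytes, record: dict) -> bool:
    """Recompute the signature; any edit to pixels or metadata fails."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    if unsigned.get("image_sha256") != hashlib.sha256(image_bytes).hexdigest():
        return False  # image content no longer matches the signed hash
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))

# Example capture with illustrative metadata values.
image = b"\x89PNG raw capture bytes"
meta = {"timestamp": "2023-05-30T12:00:00Z",
        "gps": "32.7157,-117.1611",
        "device": "example-camera"}
signed = sign_capture(image, meta)
print(verify_capture(image, signed))            # True: untouched capture
print(verify_capture(image + b"edit", signed))  # False: pixels changed
```

Binding the image hash into the signed record is the key design choice: the signature covers both the content and its context, so neither can be swapped independently after capture.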