In a bid to enhance transparency, Google Photos will now include visible notes on photos edited with AI tools, marking a significant step in informing users about artificial intelligence’s role in digital image alterations.
As AI-powered tools become increasingly integrated into everyday photo editing, Google has announced a new initiative within its Google Photos application to ensure users are informed when these technologies have been used.
According to a blog post published by Google, beginning next week, photos edited using AI tools such as Magic Editor, Magic Eraser, and Zoom Enhance will include a visible note within the Google Photos app. This note will appear alongside the photo’s file name, backup status, and camera information, explicitly stating: “Edited with Google AI.” The move is designed to make it more apparent when AI has been utilised in photo alterations, although this addition is only visible within the app itself, not as a watermark on the image.
John Fisher, Engineering Director for Google Photos and Google One, reiterated Google’s commitment to ongoing transparency around AI usage. He noted that while this initiative marks progress, the company is actively seeking feedback and evaluating additional ways to clearly convey when AI edits have been made. The need arises because AI editing tools, unlike traditional filters and enhancements, can alter photos in profound ways that are difficult to detect by eye, potentially blurring the line between authentic and manipulated images.
Google’s decision to implement a tag system stems from the growing use of generative AI in images. A photo’s metadata already records AI-based editing, following technical standards set by the International Press Telecommunications Council (IPTC). However, because that information is buried in metadata, it has primarily served investigative or archival purposes. By surfacing this detail directly within the app, Google intends to reduce confusion about the authenticity of AI-modified images.
The initiative is not limited to AI tools; it also covers non-AI features such as Best Take and Add Me on Google’s Pixel smartphones. Best Take combines several shots to produce an image that shows everyone at their best, while Add Me lets users place a person into a picture where they were not originally present. These features will carry a similar disclosure noting that multiple images have been combined.
Even with these improvements, users must still open a photo’s details in the app to check whether AI was used. Social media platforms such as Facebook and Instagram are exploring ways to use this metadata to label images on their platforms, an approach Google Search is also beginning to adopt.
This development reflects a broader trend towards greater accountability in the use of AI in digital media. As AI image editing becomes more prevalent, establishing trust in digital content remains a significant concern. The initiative could serve as a precedent for future industry practice, in which clear indications of AI involvement become standard, enhancing credibility for professionals and everyday users alike.
Source: Noah Wire Services