Google is rolling out significant updates to its search engine, adding advanced AI features designed to transform how users search through voice, video, and image inputs.
Google is implementing extensive updates to its search engine by integrating more advanced artificial intelligence features, marking some of the most comprehensive changes the platform has seen in recent years. The updates, announced on Thursday, centre on enhancing user interaction with search results through voice, video, and image inputs, aiming to revolutionise the way people engage with information online.
These new features will initially roll out in the United States, with a focus on mobile searches related to recipes and meal inspiration. The enhancements are part of Google’s strategic effort to keep pace with fierce competition from other tech giants in the AI domain, most notably OpenAI, which recently secured a record-breaking $6.6 billion in funding, reflecting the intensified interest in AI technologies.
Central to Google’s updates is a revamp of its “AI Overviews” feature, which has previously faced challenges, including the spread of erroneous information. The feature generates AI-driven summaries in response to search queries, positioned at the top of search results, offering streamlined answers along with links to additional resources. The new iteration of AI Overviews will incorporate more direct links within summaries to give users greater context and easier navigation to supporting websites.
The revamped AI Overviews aim to address earlier issues where publishers expressed concerns over decreased website traffic due to users engaging more with AI-generated responses than clicking through to their sites. Google reports that preliminary tests show an increase in web traffic to these supporting sites under the new design.
Moreover, Google is introducing advertisements within the AI Overviews, aiming to blend commercial opportunities with the user experience on its search platform. This feature will first be tested on mobile devices in the U.S., as part of Google’s efforts to generate revenue in a rapidly evolving digital landscape.
Expanding beyond traditional text input, Google’s search capabilities will now let users ask questions aloud while retaining the ability to search with pictures. This includes an upgrade to Google Lens, which can now answer questions in a conversational manner about live video captured by users, such as footage of fish moving in an aquarium.
These advancements are intended to simplify the search experience by allowing users to interact with Google in a more intuitive, human-like manner. According to Rajan Patel, a Google vice president of search engineering, the goal is to make search more accessible and effortless for users wherever they are, whether through traditional queries or innovative methods like video and voice.
Google’s integration of AI extends to enhancing shopping experiences on Google Lens. Users can now photograph products they come across in everyday settings and quickly obtain comprehensive information, including purchasing options from various retailers. This update is now available globally for both Android and iOS users.
Furthermore, Google has introduced the “Circle to Search” feature, accessible on over 150 million Android devices. It allows users to identify songs playing in a video or audio clip by circling the relevant section on screen.
These innovations further underscore Google’s commitment to embedding AI deeply into its services, transforming them into more dynamic tools that anticipate and meet user needs in real-time. The strategic push into AI echoes a broader industry trend capitalising on machine learning advancements to enhance digital interactions and provide users with faster and more accurate search results.
Source: Noah Wire Services