At the recent Connect 2024 event, Meta introduced new AI features aimed at transforming user interactions across its platforms, while expanding its hardware lineup and customizable AI offerings.
Meta, the parent company of Facebook, Instagram, WhatsApp, and Messenger, is making significant strides in integrating artificial intelligence (AI) into its services. Meta AI, previously known as Facebook AI Research (FAIR), is at the forefront of this evolution, aiming to redefine how users interact with digital platforms by serving as a seamless personal virtual assistant across the Meta app ecosystem.
At the Connect 2024 event in late September, Meta unveiled a range of updates and features designed to make AI tools more accessible and user-friendly. The company has shifted its focus from pure chatbot functionality to developing a sophisticated multimodal, multilingual AI assistant capable of handling complex tasks.
Meta AI is quietly woven into daily social interactions, smoothing connections and content creation. Users can summon it by typing “@Meta AI” within a chat, where it offers suggestions, answers, and image-editing capabilities. The integration extends to search, offering a more intuitive way to explore topics based on feed content, which Meta describes as a “contextual experience.”
Among Meta AI’s impressive features is its ability to conduct natural voice conversations. It can process and respond in multiple languages, including English, French, German, Hindi, Italian, Portuguese, and Spanish, with plans to introduce celebrity voices such as John Cena and Kristen Bell.
Meta AI is currently operational in 21 countries outside the United States, including Canada, India, and South Africa. It is not yet available in the European Union, however, because the EU’s AI Pact and AI Act require detailed summaries of the data used to train models, a disclosure Meta is cautious about given its history with data privacy.
In addition to software integrations, Meta is extending its AI capabilities into hardware. The Ray-Ban Meta glasses exemplify this move, offering functions like remembering where the wearer parked their car. The glasses can act on what the wearer is looking at, such as placing calls or scanning QR codes. Notably, they can translate conversations in real time across several languages, facilitating smoother communication on the go.
Other hardware announcements include the Meta Quest 3S, a mixed-reality headset that underscores Meta’s commitment to blending virtual and augmented reality experiences. Meta also showed Orion, a prototype pair of holographic AR glasses that has been in development for about a decade, signaling a long-term vision for immersive computing.
Through its AI Studio offering, Meta plans to let users and businesses in the US create custom AI chatbots without advanced programming knowledge. These customizable AI personas are intended to enhance interactions with customers and followers, with transparency ensured by labeling all AI-generated responses.
The driving force behind Meta AI’s capabilities is Llama 3.2, a family of large language models (LLMs) that can generate text and, in its larger vision-enabled variants, interpret images. The latest release is open-source and boasts advanced capabilities that challenge even the best closed-source models.
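Because the weights are openly available, developers can experiment with the models directly. The snippet below is a minimal sketch of querying a small Llama 3.2 instruct model locally via the Hugging Face transformers library; the checkpoint name, gated-access requirement, and generation settings are assumptions based on typical usage rather than details from Meta’s announcement.

```python
# A minimal sketch: querying a small Llama 3.2 instruct model with the
# Hugging Face transformers library. The checkpoint name below is an
# assumption (the meta-llama repos are gated and require accepting
# Meta's license on huggingface.co before download).
import torch
from transformers import pipeline

model_id = "meta-llama/Llama-3.2-3B-Instruct"  # assumed checkpoint name

generator = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit consumer GPUs
    device_map="auto",           # place layers on GPU/CPU automatically
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "In one sentence, what is a multimodal LLM?"},
]

result = generator(messages, max_new_tokens=64)
# The pipeline returns the conversation with the assistant's reply appended.
print(result[0]["generated_text"][-1]["content"])
```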
By the end of the year, Meta aims for Meta AI to become the most widely used AI assistant globally. More than 400 million users already engage with Meta AI monthly, and 185 million use its features weekly across Meta’s range of products, underscoring the company’s ambition to redefine user interaction through advanced AI integration.
Source: Noah Wire Services