Meta launches the Meta AI app - now available with social feeds, smart glasses integration and more


Meta today launched the first version of the Meta AI app, its in-house artificial intelligence (AI) assistant powered by the Llama 4 model. So, what should you know about it?

Like other AI assistants, Meta AI lets users type or speak to it, generate images, and pull the latest information from the web in real time. What sets it apart, however, is the Discover feed, which adds a social element to AI: users can browse the best prompts shared by others, then like, comment on, share, or remix (modify) those prompts to suit their own style.

In addition, the app features an opt-in voice mode, currently in beta, that allows for more natural, relaxed conversations. Built on Meta's full-duplex speech research, this mode supports fast exchanges and overlapping speech, much like a real conversation. At launch, the voice features are only available in the United States, Canada, Australia, and New Zealand.

Moreover, Meta AI supports image generation and editing through voice or text conversations. It can also search the web for product recommendations and research, with built-in “conversation triggers” that suggest ideas for what to ask.

In the US and Canada, Meta draws on information from users' Facebook and Instagram profiles to personalize how the AI assistant responds. In theory, this means that how users behave on those two apps will shape the answers Meta AI gives them. It is worth noting that Meta AI is not an entirely new app: it replaces the View app previously used with Ray-Ban Meta smart glasses. For now, users in Malaysia cannot yet use Meta AI as a full-fledged AI assistant - it only works as a companion app for pairing with the smart glasses.

What are your thoughts on this news? Comment below, and stay tuned for more news like this at TechNave.