Meta has officially launched the first version of its new Meta AI app, a personal assistant powered by its latest AI model, Llama 4. Designed to deliver more meaningful and relevant interactions, the app introduces features that remember your preferences, adapt to your context, and offer a personalized experience across Meta's ecosystem.
The app is now available on iOS and Android, bringing a standalone experience to users beyond the integrations already found in WhatsApp, Instagram, Facebook, and Messenger.
A More Personal AI Experience
Unlike generic assistants, Meta AI is built to learn from you. It remembers what you tell it—like your love for travel or favorite languages—and can use information from your Meta profiles and interactions to tailor responses. For users in the US and Canada, this personalization can extend across linked Facebook and Instagram accounts through Meta’s Accounts Center.
“Our goal is to make AI that feels like it’s yours,” said Meta in a launch statement. “It’s not just helpful, it’s personal.”
Voice-First Interactions
The app's standout feature is its voice functionality. You can now hold conversational voice chats with Meta AI, made more natural by Llama 4 and a full-duplex speech demo that enables lifelike, real-time dialogue rather than robotic, turn-by-turn voice readouts. The demo remains in early testing.
This feature is currently available in the US, Canada, Australia, and New Zealand, and Meta encourages user feedback to help fine-tune the experience.
Discover Feed and Content Sharing
A new Discover feed has also been introduced, where users can see how others are creatively using Meta AI. From clever prompts to inspiring use cases, the feed promotes community engagement while offering tools to remix ideas for your own use. Importantly, nothing is shared unless users choose to post it.
Seamless Across Devices
The Meta AI app also acts as the new companion app for Ray-Ban Meta smart glasses, replacing the previous Meta View app. Users can start a conversation through their glasses and seamlessly pick it up on the app or on the web—although the reverse isn’t yet supported.
Meta AI's web version has been upgraded too, now featuring voice interaction, an optimized desktop interface, enhanced image generation tools, and a document editor that lets users create and export visually rich PDFs. In some regions, users can even import documents for the AI to analyze.
You’re in Control
Meta emphasizes user control over the AI experience. A visible mic icon indicates when voice features are active, and settings allow users to toggle modes such as “Ready to talk” for hands-free interaction.
With this release, Meta aims to redefine how people interact with AI by making it more accessible, social, and tailored to their everyday lives.
The Meta AI app is now available on iOS and Android.
For more information or to manage your experience, visit the Meta AI Help Center.