Meta is launching a new standalone Meta AI app powered by its latest large language model, Llama 4 — marking a significant leap toward creating more personal and conversational artificial intelligence. Already integrated into daily interactions on WhatsApp, Instagram, Facebook, and Messenger, Meta AI now gets its own dedicated home for users who want to engage more deeply through voice and text.
This first version of the app, now available in the US, Canada, Australia, and New Zealand, introduces a new way to talk with Meta AI — one built around natural, real-time voice conversations. It includes a full-duplex speech demo, allowing users to experience more fluid and humanlike dialogue. Meta says the app was created to “get to know you,” using personalization and seamless interaction as its north stars.
“Hey Meta, let’s chat” — that’s the new rallying cry.
Built for Conversation and Multitasking
Unlike earlier voice assistants, Meta AI in the new app is designed to be more than reactive: social, personal, and helpful. Whether you're cooking, commuting, or juggling tasks, you can talk to Meta AI by voice while your hands and eyes are busy elsewhere. A visible icon shows when the microphone is active.
The AI also integrates image generation and editing, all accessible through either text or voice, making creative work and problem-solving available in whichever mode suits the moment.
Introducing Full-Duplex Speech and Personalized Responses
A standout innovation is full-duplex speech technology, which lets Meta AI speak more naturally by generating voice directly in real time rather than reading prewritten text aloud. Though still a demo, it offers a glimpse of what future AI voice assistants might sound like. Meta cautions that users may encounter bugs or inconsistencies, but emphasizes that feedback from this release will shape future updates.
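To make the "full-duplex" idea concrete, here is a minimal conceptual sketch, not Meta's implementation: a classic half-duplex assistant alternates strictly between listening and responding, while a full-duplex one runs both directions at once, so new input can arrive even while a reply is playing. Every name below (listen_chunk, speak, and so on) is a hypothetical stand-in, with strings in place of audio.

```python
import asyncio

# Conceptual sketch only: strings stand in for audio, and nothing here
# is Meta's API. The point is the control flow, not the signal path.

async def listen_chunk(source: list[str]) -> str | None:
    """Pretend to capture one chunk of user speech."""
    await asyncio.sleep(0.1)
    return source.pop(0) if source else None

async def speak(text: str) -> None:
    """Pretend to synthesize and play a spoken reply."""
    print(f"assistant: {text}")
    await asyncio.sleep(0.1)

async def half_duplex(user_turns: list[str]) -> None:
    # Strict turn-taking: the assistant is effectively deaf while speaking.
    while (heard := await listen_chunk(user_turns)) is not None:
        await speak(f"reply to {heard!r}")  # nothing is heard until this returns

async def full_duplex(user_turns: list[str]) -> None:
    # Listening and speaking run as concurrent tasks, so new input keeps
    # arriving even while a response is still playing.
    inbox: asyncio.Queue[str] = asyncio.Queue()

    async def listener() -> None:
        while (heard := await listen_chunk(user_turns)) is not None:
            await inbox.put(heard)   # capture continues during playback
        await inbox.put("")          # sentinel: the user went quiet

    async def speaker() -> None:
        while heard := await inbox.get():
            await speak(f"reply to {heard!r}")

    await asyncio.gather(listener(), speaker())

asyncio.run(half_duplex(["hello", "what's the weather?"]))
asyncio.run(full_duplex(["hello", "what's the weather?"]))
```

The structural difference sits in the asyncio.gather call: listening never pauses while a reply plays, which is what makes interruptions and more humanlike turn-taking possible.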
The app's personalization features are designed to deepen over time. Meta AI remembers key user preferences, from favorite hobbies to languages you're learning, and draws on context from your profile and activity across Facebook and Instagram (if connected through the Accounts Center). Personalized responses are already available in the US and Canada, making the AI feel more attuned to individual needs and interests.
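As a rough illustration of how this kind of memory often works in practice (a generic pattern, not Meta's system), remembered facts can be stored per user and injected into the model's context on each request. All names in the sketch are hypothetical.

```python
from collections import defaultdict

# Toy preference memory: persist facts per user, then prepend them to
# the model's context on every request. Purely illustrative.

class PreferenceMemory:
    def __init__(self) -> None:
        self._facts: dict[str, list[str]] = defaultdict(list)

    def remember(self, user_id: str, fact: str) -> None:
        """Store a durable fact the assistant learned about this user."""
        self._facts[user_id].append(fact)

    def build_context(self, user_id: str, message: str) -> str:
        """Prepend remembered facts so the model can tailor its reply."""
        facts = "\n".join(f"- {f}" for f in self._facts[user_id])
        return f"Known about this user:\n{facts}\n\nUser says: {message}"

memory = PreferenceMemory()
memory.remember("u123", "is learning Spanish")
memory.remember("u123", "enjoys trail running")

# The assembled context is what would be sent to the language model.
print(memory.build_context("u123", "Suggest a weekend activity."))
```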
Discover and Share Prompts in the Community Feed
To connect users with one another and showcase creative ways to use AI, the app includes a Discover feed. Here, people can share, remix, or simply browse the most engaging prompts others have created. Nothing is posted publicly without explicit user consent, ensuring users remain in full control of what they share.
Integration Across Devices, Including Ray-Ban Meta Glasses
The new Meta AI app also replaces the Meta View companion app for Ray-Ban Meta smart glasses, streamlining device management and unifying the AI experience. Users can begin a conversation with Meta AI on their glasses and continue it in the app or web version, though not the other way around (yet). Existing device settings and media will migrate automatically when the app is updated.
Meta AI on Web Gets a Major Upgrade
In addition to the mobile rollout, Meta AI on the web has been upgraded with voice features, access to the Discover feed, and a revamped interface tailored for desktop productivity. It introduces an enhanced image generation tool with customizable presets, plus a new document editor (currently being tested in select countries) that can generate rich documents and export them as PDFs. Document import and analysis by Meta AI are also in the works.
Voice First, User First
Meta emphasizes that voice is the most natural way to engage with AI. The app includes settings for users who want voice features enabled by default, and every aspect of the experience is designed with user control in mind. Whether you’re multitasking, seeking inspiration, or just playing around, Meta AI is now always just a conversation away.