In a bid to strengthen its position in the AI assistant market, Meta has rolled out significant enhancements to its virtual agents. In a recent interview, CEO Mark Zuckerberg revealed details about the company's latest language models and its vision for conversational AI.
Zuckerberg took to Instagram to announce that Meta's AI tools are receiving extensive upgrades, including new versions of its Llama language model and a real-time image generator. The company aims to integrate these systems into platforms like Facebook, WhatsApp and Instagram to power question answering and content creation.
Going into more detail during a podcast interview, Zuckerberg shared that Llama's latest iteration, Llama 3, has been trained in two new sizes: 8 billion and 70 billion parameters. These state-of-the-art models will underpin Meta's AI assistant, positioning it as a sophisticated tool capable of handling a wide range of queries. The assistant will also gain the ability to animate pictures on the fly as the user types, a feature projected to boost creative expression.
Assessing Llama 3's abilities, Zuckerberg noted that it surpasses earlier versions in domains such as coding, math and logical reasoning. He further revealed that Meta is evaluating even larger models with over 400 billion parameters. All current releases have been made open source for developers, while the assistant itself keeps a user-friendly interface across key Meta services.
With enhanced language skills and multimodal response generation, Meta AI aims to be the most capable freely available conversational system, according to its CEO. Only time will tell whether these ambitious advances can help the company outpace industry leaders in building helpful, harmless and honest artificial intelligence.