
In April 2025, Meta Platforms (“Meta”) unveiled Llama 4, the latest iteration of its large language model family, comprising two advanced versions: Llama 4 Scout and Llama 4 Maverick. According to Meta, Llama 4 is a multimodal AI system, meaning it can process and integrate various forms of data, including text, images, video and audio. It can also convert content across these formats, enhancing its versatility.
Meta described Llama 4 Scout and Llama 4 Maverick as its most advanced models yet and the best in their class for multimodality. In a significant move, Meta confirmed that both models will be released as open-source software, allowing developers to integrate them into their own applications. Meta also previewed Llama 4 Behemoth, positioning it as "one of the smartest LLMs in the world" and its most powerful model to date. Meta intends for Llama 4 Behemoth to serve as a "teacher" for training new models, further advancing AI capabilities.
This release marks a major step in Meta's ongoing efforts to compete in the rapidly growing AI sector. In response to surging demand for AI technology, particularly after the transformative impact of OpenAI's ChatGPT, large tech companies have been making substantial investments in AI infrastructure. Meta plans to invest up to US$65 billion in AI infrastructure this year, reflecting investor pressure across the industry to deliver returns on AI spending.
The launch of Llama 4 was not without delays, however. Llama 4 reportedly fell short of Meta's internal technical benchmarks during development, particularly on reasoning and mathematical tasks, and the company was also concerned that it was less proficient than OpenAI's models at conducting human-like voice conversations. With these challenges addressed, Meta aims to push the boundaries of AI technology and solidify its position in the competitive AI landscape.