Meta’s launch of two Llama 4 artificial intelligence (AI) models has positioned the U.S. as a leader in the AI race, David Sacks, the U.S. AI and crypto czar, said in an X post on Saturday. He wrote:
“For the U.S. to win the AI race, we have to win in open source too, and Llama 4 puts us back in the lead.”
On April 6, Meta announced the launch of its fourth-generation open-source Llama 4 models — Llama 4 Scout and Llama 4 Maverick.
The AI race intensified with the launch of DeepSeek
DeepSeek, a Chinese AI startup founded in 2023, launched its first model in December 2024. In January 2025, it launched a chatbot that claimed to rival the capabilities of OpenAI’s ChatGPT.
Downloads of DeepSeek’s generative AI, DeepSeek R1, topped app store charts. It challenged a core industry assumption: that hefty investments and expensive chips are the only way to get ahead in the AI game.
As a result, shares of U.S. tech companies such as Nvidia fell following DeepSeek R1’s launch.
This is because DeepSeek claimed to have spent approximately $6 million to train its AI model. By contrast, OpenAI reportedly spent around $100 million to train GPT-4.
While venture capitalist Marc Andreessen called DeepSeek’s R1 launch “AI’s Sputnik moment,” U.S. President Donald Trump called it a “wake-up call” for American firms.
Since then, U.S. firms have been racing to regain the lead in AI, and according to Sacks, a vocal proponent of the technology, Llama 4 is what put the U.S. back in front.
Meta claims Llama 4 models are “best in their class”
Meta claims that the Llama 4 models are its “most advanced models yet” and “best in their class for multimodality.” Both models are available for download and are used in Meta applications such as WhatsApp and Instagram.
Multimodal AI systems are capable of processing various types of data — text, image, audio, and video — simultaneously. This enables the AI to comprehend complex scenarios and generate comprehensive responses.
Llama 4 Scout and Llama 4 Maverick are the first open-source Meta AI models built using a mixture of experts (MoE) architecture. In an MoE, multiple smaller models or specialized experts collaborate to make the larger AI model work. This means that experts focus on solving the parts of the problem they are designed to handle.
Llama 4 Scout has 17 billion active parameters and 16 experts. Llama 4 Maverick has the same number of active parameters but is built with 128 experts. While the former fits on a single NVIDIA H100 GPU, the latter requires a full H100 host.
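In practice, an MoE layer uses a small router network to decide which expert handles each token, so only a fraction of the model’s total parameters are active for any given input. The sketch below is a simplified, hypothetical top-1 router in PyTorch for illustration only; the layer sizes and routing scheme are assumptions and not Meta’s actual Llama 4 implementation.

```python
# Minimal mixture-of-experts (MoE) sketch with top-1 routing.
# Illustrative only; not Meta's Llama 4 architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int):
        super().__init__()
        # Each "expert" is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        # The router scores each token against every expert.
        self.router = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_tokens, dim)
        scores = F.softmax(self.router(x), dim=-1)   # routing probabilities
        top_expert = scores.argmax(dim=-1)           # pick one expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_expert == i
            if mask.any():
                # Only the chosen expert's parameters are "active" for these tokens.
                out[mask] = expert(x[mask]) * scores[mask, i].unsqueeze(-1)
        return out

# Example: route 10 tokens of width 64 through 16 experts.
moe = SimpleMoE(dim=64, num_experts=16)
tokens = torch.randn(10, 64)
print(moe(tokens).shape)  # torch.Size([10, 64])
```

This is why an MoE model can have a large total parameter count while keeping the number of active parameters per token, and thus the compute cost, much smaller.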
Meta claims that Llama 4 Scout outperforms Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across a broad range of widely reported benchmarks.
Llama 4 Maverick, on the other hand, delivers results comparable to DeepSeek v3 on reasoning and coding despite having fewer than half the active parameters. Meta also asserts that Llama 4 Maverick beats GPT-4o and Gemini 2.0 Flash across a number of benchmarks.
Furthermore, according to Meta’s testing, Llama 4 “responds with strong political lean at a rate comparable to Grok.”
Meta also unveiled Llama 4 Behemoth, which is still in training, as one of the “world’s smartest” large language models (LLMs).
Meta launched its first Llama model in February 2023.