Meta: Llama 4 Maverick

Released Apr 5, 2025
Llama 4 Community License
1,048,576-token context
400B total parameters
open weights · multimodal

Overview

Llama 4 Maverick, introduced on April 5, 2025, is one of the first models in Meta’s Llama 4 family, designed as a natively multimodal model that accepts text and image inputs and produces text outputs. It employs a Mixture-of-Experts (MoE) architecture with 128 experts, activating roughly 17B parameters per token out of a pool of about 400B total, which keeps per-token compute close to that of a much smaller dense model while drawing on the capacity of the full parameter pool. Maverick has a 1M-token (1,048,576) context window, enabling it to handle large documents, extended conversations, and long multimodal reasoning tasks. Its knowledge cutoff is August 2024.
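To make the active-versus-total parameter distinction concrete, the toy PyTorch layer below sketches generic top-k expert routing: each token is sent to only a few small feed-forward experts, so the parameters actually touched per token are a fraction of the layer’s total. The class name, dimensions, and routing details are illustrative assumptions, not Meta’s implementation.

```python
# Toy sketch of Mixture-of-Experts routing (illustrative only, not Meta's code).
# Each token is routed to top_k of n_experts feed-forward blocks, so only a
# small slice of the layer's total parameters is used per token.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts)   # scores each expert per token
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                              # x: (tokens, d_model)
        scores = self.router(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1) # keep only the top-k experts
        weights = F.softmax(weights, dim=-1)           # normalize their gate weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                  # tokens routed to expert e at rank k
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

layer = ToyMoELayer()
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64]); each token used only top_k experts
```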

The model is released under the Llama 4 Community License and comes in both base and instruction-tuned (“Instruct”) versions. Maverick is available through Hugging Face, Google Vertex AI, Amazon Bedrock, and Oracle Cloud, making it one of the most broadly accessible large open-weight models. However, it outputs text only (no image or audio generation), and while the input context is very large, typical output limits are far smaller. The MoE design also raises hardware demands: although only ~17B parameters are active per token, the weights of all 128 experts must be held in memory, and Meta’s license adds restrictions on very large-scale commercial use.
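As a concrete illustration of the Hugging Face path, the sketch below loads an instruction-tuned checkpoint with transformers and runs a single text-plus-image prompt. The repository id, example image URL, and version requirement (a transformers release with Llama 4 support, 4.51+) are assumptions to verify against the actual model card; the checkpoint is gated and far too large for a single consumer GPU.

```python
# Minimal inference sketch via Hugging Face transformers.
# Assumptions: the model id below is a placeholder to confirm on the Hub,
# access to the gated weights has been granted, and enough GPU memory is
# available to shard the ~400B MoE weights with device_map="auto".
import torch
from transformers import AutoProcessor, Llama4ForConditionalGeneration

MODEL_ID = "meta-llama/Llama-4-Maverick-17B-128E-Instruct"  # assumed repository name

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = Llama4ForConditionalGeneration.from_pretrained(
    MODEL_ID,
    device_map="auto",            # spread expert weights across available GPUs
    torch_dtype=torch.bfloat16,
)

# Text-plus-image prompt; the model itself only ever returns text.
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://example.com/chart.png"},  # placeholder URL
        {"type": "text", "text": "What trend does this chart show?"},
    ],
}]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=256)
print(processor.batch_decode(
    output_ids[:, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
)[0])
```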

Performance

[Charts: avg. latency, model rankings, supported tasks]