Google: Gemma 3 12B

Released Mar 12, 2025
proprietary license
128,000 context
12B parameters
open · multimodal

Overview

Gemma 3 12B, announced by Google DeepMind on March 12, 2025, is part of the open-weight Gemma 3 family, designed to provide a balance between capability and accessibility. With around 12 billion parameters, it supports multimodal input (text + images) and outputs text, making it useful for reasoning, summarization, Q&A, and visual understanding tasks. The model supports an input context of 128,000 tokens and typically generates up to ~8,000 tokens in output.

The 12B variant is instruction-tuned (“Gemma-3-12B-IT”) and optimized for multilingual use across more than 140 languages. It can run on a single GPU or TPU, offering a lighter compute footprint than very large proprietary models while still achieving strong performance on reasoning benchmarks. Quantized and lower-precision variants are available to improve efficiency. Limitations include an output length that is small relative to the input window, hardware requirements that grow at the larger family sizes, and performance below the largest proprietary models on the most complex multimodal or reasoning-heavy tasks.
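The input and output limits above can be checked programmatically before sending a request. The sketch below is a minimal illustration using the figures stated in the overview (128,000-token input context, ~8,000-token output); the 4-characters-per-token ratio is a rough heuristic for English text, not an official tokenizer figure, and the function names are hypothetical.

```python
# Rough context-budget check for Gemma 3 12B, using the limits cited above.
# The chars-per-token ratio is a heuristic assumption, not an official figure.

GEMMA3_12B_INPUT_CONTEXT = 128_000  # max input tokens (per the overview)
GEMMA3_12B_MAX_OUTPUT = 8_000       # typical max output tokens (per the overview)


def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Crude token estimate from character count (heuristic only)."""
    return max(1, round(len(text) / chars_per_token))


def fits_context(prompt: str, max_new_tokens: int = GEMMA3_12B_MAX_OUTPUT) -> bool:
    """True if the prompt fits the input window and the requested output
    stays within the model's typical output limit."""
    return (
        estimate_tokens(prompt) <= GEMMA3_12B_INPUT_CONTEXT
        and max_new_tokens <= GEMMA3_12B_MAX_OUTPUT
    )
```

For exact counts, a real tokenizer for the model should replace the character heuristic; this sketch only demonstrates budgeting against the two published limits.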
