Google: Gemma 3 27B

Released Mar 12, 2025
Gemma license (open weights)
128,000-token context
open · multimodal

Overview

Gemma 3 27B, announced on March 12, 2025, is the largest open-weight model in Google DeepMind’s Gemma 3 family. With around 27 billion parameters, it is multimodal—accepting both text and images as input and producing text outputs. It supports a 128,000-token context window and typically generates up to ~8,192 tokens, enabling it to process multi-page documents, extended conversations, or large batches of images in a single prompt.
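Because the model accepts interleaved text and images, hosted deployments typically expose it through an OpenAI-compatible chat endpoint where an image is passed as a content part alongside the text. The sketch below only builds such a request body; the model id string and the image URL are illustrative assumptions, not details from this page.

```python
# Illustrative sketch: assemble a multimodal chat request in the
# OpenAI-compatible schema many Gemma 3 hosts expose. The model id
# and image URL here are assumptions for illustration only.
import json

def build_request(prompt: str, image_url: str, max_tokens: int = 8192) -> dict:
    """Combine a text prompt and an image reference in one user message."""
    return {
        "model": "google/gemma-3-27b-it",  # hypothetical host-side model id
        "max_tokens": max_tokens,          # output is typically capped near ~8,192 tokens
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

body = build_request("Describe this chart.", "https://example.com/chart.png")
print(json.dumps(body, indent=2))
```

A single request like this can carry several image parts, which is how "large batches of images in a single prompt" is usually exercised in practice.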

The model is instruction-tuned in its “-it” variants for chat, reasoning, and summarization use cases, and it supports structured outputs and function calling. It is multilingual, covering over 140 languages. Deployment is flexible: the full BF16 model requires ~46 GB of VRAM, but quantization-aware training (QAT) versions in 8-bit or 4-bit reduce the footprint significantly, allowing more accessible use outside large-scale clusters. While it delivers stronger reasoning and multimodal performance than smaller Gemma models, it remains lighter and more open than proprietary systems, making it well-suited for research, development, and fine-tuned applications.
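The memory savings from quantization follow directly from bytes-per-parameter arithmetic. The sketch below is a naive weights-only estimate for a 27-billion-parameter model; real footprints differ (KV cache, activations, quantization overhead), and the published BF16 checkpoint's ~46 GB figure cited above is smaller than this rough calculation suggests.

```python
# Back-of-the-envelope weights-only memory estimate for a 27B-parameter
# model at several precisions. Rough illustration only: it ignores the
# KV cache, activations, and per-format overhead.

PARAMS = 27e9  # approximate parameter count

def weight_gb(bits_per_param: float) -> float:
    """Weights-only size in gigabytes (1 GB = 1e9 bytes)."""
    return PARAMS * bits_per_param / 8 / 1e9

for name, bits in [("BF16", 16), ("INT8 (QAT)", 8), ("INT4 (QAT)", 4)]:
    print(f"{name:>10}: ~{weight_gb(bits):.1f} GB")
```

The halving from 8-bit to 4-bit is what brings the model within reach of a single high-end consumer GPU rather than a multi-GPU server.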
