Gemma 3 4B, released on March 12, 2025, is the second-smallest member of Google DeepMind’s open-weight Gemma 3 family (1B, 4B, 12B, and 27B parameters). With about 4 billion parameters, it is multimodal—supporting text and image inputs and generating text outputs. Like the larger Gemma 3 models, it features a 128,000-token input context window with an output capacity of ~8,192 tokens, enabling it to handle long documents and mixed text–image reasoning tasks.
The 4B variant balances efficiency and capability: it offers multilingual support across 140+ languages, strong summarization and reasoning performance, and the ability to run on modest hardware. Inference requires ~6.4 GB of VRAM in BF16, or significantly less with 8-bit (~4.4 GB) or 4-bit (~3.4 GB) quantization, making it accessible to developers without large-scale infrastructure. While it lags behind the 12B and 27B versions on the most complex reasoning and multimodal benchmarks, its lower compute footprint makes it well suited to research, prototyping, and practical deployments where efficiency matters.
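The relationship between precision and memory above follows a simple rule of thumb: weights-only footprint is roughly parameter count times bytes per parameter. A minimal sketch, assuming a round 4-billion-parameter count (the actual architecture and the measured figures quoted above differ, since deployments also include activations, the KV cache, and framework overhead):

```python
def estimate_weight_memory_gib(num_params: float, bits_per_param: int) -> float:
    """Back-of-envelope weights-only memory estimate.

    This ignores activations, KV cache, and runtime overhead, so it will
    not exactly match measured deployment figures; it only illustrates
    why halving the bit width roughly halves the footprint.
    """
    bytes_total = num_params * bits_per_param / 8
    return bytes_total / 2**30  # convert bytes to GiB

# Illustrative footprints for an assumed 4e9-parameter model:
for label, bits in [("BF16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{label}: ~{estimate_weight_memory_gib(4e9, bits):.1f} GiB")
```

Each halving of bit width (BF16 → 8-bit → 4-bit) roughly halves the weight memory, which is the mechanism behind the quantized VRAM savings described above.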