Qwen2.5-VL-7B-Instruct is a 7-billion-parameter vision-language model from Alibaba’s QwenLM team, released on January 26, 2025 under the Apache 2.0 license. It is the instruction-tuned variant at the 7B scale of the Qwen2.5-VL family and processes multimodal inputs including text, images, charts, documents, and video. The model supports structured outputs, such as JSON for parsed document content and bounding boxes for visual localization. Weights are publicly available on Hugging Face and GitHub, making the model suitable for both research and applied multimodal use.
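
As a minimal usage sketch, the model can be loaded through Hugging Face transformers following the pattern documented on the model card. The snippet below assumes a recent transformers release with Qwen2.5-VL support, the companion `qwen_vl_utils` helper package, and a placeholder image URL (illustrative, not from the original text).

```python
# Minimal sketch: image description with Qwen2.5-VL-7B-Instruct via Hugging Face
# transformers. Assumes a recent transformers release with Qwen2.5-VL support
# and the companion `qwen_vl_utils` package (pip install qwen-vl-utils).
from transformers import AutoProcessor, Qwen2_5_VLForConditionalGeneration
from qwen_vl_utils import process_vision_info

model_id = "Qwen/Qwen2.5-VL-7B-Instruct"
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_id)

# A chat-style multimodal message; the image URL is a placeholder.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/demo.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Render the chat template, extract the vision inputs, and tokenize.
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

# Generate a response and strip the prompt tokens from the output.
generated = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

For structured outputs, the same interface applies: prompting for JSON or for bounding boxes (e.g., asking the model to localize an object and return coordinates) changes only the text portion of the message, not the loading or generation code.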