Florence-2, introduced by Microsoft Research at CVPR 2024, is an open-source vision-language foundation model designed to unify diverse computer vision tasks within a single sequence-to-sequence framework. Unlike traditional models that specialize in a single task, Florence-2 accepts an image plus a text prompt and emits a text sequence, representing regions with quantized location tokens so that one decoder can handle captioning, object detection, segmentation, OCR, and region-based grounding. It comes in two sizes, Florence-2-base (~230M parameters) and Florence-2-large (~770M parameters), and is trained on FLD-5B, a large dataset of ~126M images with ~5.4B annotations.
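To make the prompt-driven interface concrete, here is a minimal inference sketch following the pattern on the Hugging Face model card; the image URL is a placeholder, and the exact task-prompt strings and post-processing behavior should be checked against the card for the model revision you use.

```python
# Minimal Florence-2 inference sketch (pattern from the Hugging Face model card;
# the image URL is a placeholder and exact prompt strings are assumptions).
import requests
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-base"  # or "microsoft/Florence-2-large"
device = "cuda" if torch.cuda.is_available() else "cpu"

# trust_remote_code is required: Florence-2 ships custom modeling code.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open(requests.get("https://example.com/cat.jpg", stream=True).raw)

# The task is selected by a special prompt token, e.g. "<CAPTION>",
# "<OD>" (object detection), "<OCR>", or "<CAPTION_TO_PHRASE_GROUNDING>".
task = "<OD>"
inputs = processor(text=task, images=image, return_tensors="pt").to(device)

generated_ids = model.generate(
    input_ids=inputs["input_ids"],
    pixel_values=inputs["pixel_values"],
    max_new_tokens=1024,
    num_beams=3,
)
raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]

# post_process_generation parses location tokens back into boxes and labels.
parsed = processor.post_process_generation(raw, task=task, image_size=image.size)
print(parsed)  # e.g. {"<OD>": {"bboxes": [...], "labels": [...]}}
```

Swapping the `task` string is all it takes to switch between captioning, detection, OCR, and grounding, which is the practical payoff of the unified sequence-to-sequence design.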
The model demonstrates strong zero-shot and fine-tuned performance, often rivaling much larger vision-language systems while remaining lightweight and efficient. It is released under the MIT license with all weights publicly available, making it straightforward to fine-tune and deploy in applications such as visual question answering (VQA), content tagging, accessibility tooling, and research. Florence-2's compact design, versatility, and openness position it as a practical alternative to larger proprietary multimodal models.
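Because the weights are open, fine-tuning follows the standard sequence-to-sequence recipe of supervising the decoder with target text. The sketch below follows the pattern used in community fine-tuning examples for a VQA-style task; the dataset fields, the "<VQA>" task token, and the hyperparameters are illustrative assumptions, not a prescribed recipe.

```python
# Hedged fine-tuning sketch for a VQA-style task; dataset fields, the task
# token, and hyperparameters are illustrative assumptions.
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-base"
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).to(device)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
optimizer = AdamW(model.parameters(), lr=1e-6)

def collate(batch):
    # Each example is assumed to provide a PIL image, a question, and an answer.
    # "<VQA>" is a hypothetical task token introduced for this fine-tune.
    questions = ["<VQA>" + ex["question"] for ex in batch]
    images = [ex["image"] for ex in batch]
    inputs = processor(text=questions, images=images, return_tensors="pt", padding=True)
    labels = processor.tokenizer(
        [ex["answer"] for ex in batch], return_tensors="pt", padding=True
    ).input_ids
    return inputs, labels

# train_dataset is assumed to be your own dataset of (image, question, answer) examples.
loader = DataLoader(train_dataset, batch_size=4, collate_fn=collate)

model.train()
for inputs, labels in loader:
    outputs = model(
        input_ids=inputs["input_ids"].to(device),
        pixel_values=inputs["pixel_values"].to(device),
        labels=labels.to(device),  # decoder is supervised on the answer text
    )
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

At ~230M parameters, the base model fits comfortably on a single consumer GPU for this kind of loop, which is a large part of why it is attractive for fine-tuning compared with heavier multimodal models.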