Vision-Language Model Selection and Reuse for Downstream Adaptation

1National Key Laboratory for Novel Software Technology, Nanjing University, China
2School of Artificial Intelligence, Nanjing University, China
3School of Intelligence Science and Technology, Nanjing University, China
ICML 2025


Abstract

Pre-trained Vision-Language Models (VLMs) are becoming increasingly popular across various visual tasks, and many open-source VLM variants have been released. However, selecting the best-performing pre-trained VLM for a specific downstream task is challenging: no single VLM achieves promising performance on all downstream tasks, and evaluating every available VLM is impractical due to time and data limitations. To address this problem, this paper proposes a novel paradigm for selecting and reusing VLMs for downstream tasks, called Model Label Learning (MLL). The proposal is highly computationally efficient and scalable, since the model labeling process is performed independently of the target task and its capability grows with the number of candidate VLMs. We also introduce a new benchmark for evaluating VLM selection methods, comprising 49 VLMs and 17 target task datasets. Experimental results clearly demonstrate the effectiveness of the proposed method for selecting and reusing VLMs.

Framework

MLL consists of three key modules: model labeling, which assigns labels to each VLM describing its specialty and utility; model selection, which matches the requirements of the target task against the model labels; and model reuse, which applies the selected VLMs to the target task in an ensemble manner.
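
To make the division of labor between the three modules concrete, the sketch below shows one way they could fit together in Python. It is a minimal illustration, not the paper's implementation: `candidate_vlms`, `proxy_datasets`, and the concept-matching score are all hypothetical placeholders, and the actual labeling and matching procedures are described in the paper.

    from collections import defaultdict

    def label_models(candidate_vlms, proxy_datasets):
        """Model labeling: evaluate each VLM once on every proxy concept.

        Runs offline, independent of any target task; adding a new
        candidate VLM only requires labeling that one model.
        """
        labels = defaultdict(dict)  # model name -> {concept: accuracy}
        for name, vlm in candidate_vlms.items():
            for concept, samples in proxy_datasets.items():
                correct = sum(vlm(image) == target for image, target in samples)
                labels[name][concept] = correct / len(samples)
        return labels

    def select_models(labels, task_concepts, k=3):
        """Model selection: match target-task concepts against the stored
        labels and return the k models whose labels fit the task best.
        (Summing per-concept accuracies is a placeholder scoring rule.)
        """
        scores = {
            name: sum(concept_acc.get(c, 0.0) for c in task_concepts)
            for name, concept_acc in labels.items()
        }
        return sorted(scores, key=scores.get, reverse=True)[:k]

    def reuse_models(candidate_vlms, selected, image):
        """Model reuse: ensemble the selected VLMs by majority vote."""
        votes = defaultdict(int)
        for name in selected:
            votes[candidate_vlms[name](image)] += 1
        return max(votes, key=votes.get)

Note that in this structure the expensive step, labeling, is paid once per model rather than once per task, which is what makes selection for a new downstream task cheap and lets the model hub grow incrementally.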

BibTeX


        @article{tan2025vision,
          title       = {Vision-Language Model Selection and Reuse for Downstream Adaptation},
          author      = {Tan, Hao-Zhe and Zhou, Zhi and Li, Yu-Feng and Guo, Lan-Zhe},
          journal     = {arXiv preprint arXiv:2501.18271},
          year        = {2025}
        }