Evaluating Foundation Models Under Class Imbalance

Date and time: 18 March 2026, 11:00
Venue: Area della Ricerca CNR di Pisa, Room C-40
Speaker / Referent

Giulio Del Corso

Abstract
Foundation models are increasingly used as fixed feature extractors, enabling fast experimentation through precomputed embeddings. While this workflow is attractive, its behaviour under class imbalance remains insufficiently examined. In imbalanced settings, conventional accuracy can mask uneven performance across classes, particularly for rare categories. In this talk, I present a controlled comparison between strong CNN baselines and embedding-based foundation model pipelines on an imbalanced remote sensing dataset. Rather than relying on accuracy alone, we adopt macro-level evaluation measures to assess how evenly models distribute performance across classes. The results show that foundation models are not inherently superior to well-optimised CNNs. In majority-dominated scenarios, gains are often marginal. However, when evaluation prioritises balanced behaviour, optimisation choices and, in some cases, the fusion of complementary representations become critical. The central message is methodological: in imbalanced classification, model choice and metric choice must be aligned with the actual objective. Foundation models are powerful tools, but they do not replace careful evaluation design.
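The abstract's central point, that overall accuracy can hide poor performance on rare classes while macro-averaged metrics weight every class equally, can be illustrated with a minimal toy example. The sketch below is not from the talk; it simply contrasts accuracy with macro-averaged recall on a synthetic 90/10 imbalanced label set where a degenerate classifier always predicts the majority class.

```python
def accuracy(y_true, y_pred):
    """Fraction of correct predictions, regardless of class."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_recall(y_true, y_pred):
    """Average of per-class recalls, weighting each class equally."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# Toy imbalanced labels: 90 majority-class samples, 10 minority.
y_true = [0] * 90 + [1] * 10
# A degenerate classifier that always predicts the majority class.
y_pred = [0] * 100

print(accuracy(y_true, y_pred))      # 0.9 -> looks strong
print(macro_recall(y_true, y_pred))  # 0.5 -> exposes the failure on the rare class
```

Accuracy reports 90% because the majority class dominates the count, while macro recall averages a perfect recall on class 0 with zero recall on class 1, yielding 0.5, the same score a random coin flip would get in expectation.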