MATCHA: Towards Matching Anything

Fei Xue1,2    Sven Elflein1,3,4    Laura Leal-Taixé1    Qunjie Zhou1
1NVIDIA    2University of Cambridge    3University of Toronto    4Vector Institute

CVPR 2025 (Highlight)

Abstract

Establishing correspondences across images is a fundamental challenge in computer vision, underpinning tasks like Structure-from-Motion, image editing, and point tracking. Traditional methods are often specialized for specific correspondence types (geometric, semantic, or temporal), whereas humans naturally identify alignments across these domains. Inspired by this flexibility, we propose MATCHA, a unified feature model designed to “rule them all”, establishing robust correspondences across diverse matching tasks. Building on the insight that diffusion model features can encode multiple correspondence types, MATCHA augments this capacity by dynamically fusing high-level semantic and low-level geometric features through an attention-based module, creating expressive, versatile, and robust features. Additionally, MATCHA integrates object-level features from DINOv2 to further boost generalization, enabling a single feature capable of matching anything. Extensive experiments validate that MATCHA consistently surpasses state-of-the-art methods across geometric, semantic, and temporal matching tasks, setting a new foundation for a unified approach to the fundamental correspondence problem in computer vision. To the best of our knowledge, MATCHA is the first approach able to effectively tackle diverse matching tasks with a single unified feature.


Method Overview

Architecture Overview

MATCHA is built on top of the Stable Diffusion (SD) and DINOv2 foundation models. Given an image, SD first extracts low-level and high-level features, rich in spatial and semantic information respectively, at different steps. The two features are dynamically fused with an attention mechanism so that each is augmented with the other's complementary information. The two augmented features are jointly supervised with ground-truth geometric and semantic correspondences respectively, which significantly improves their performance on geometric and semantic matching tasks. Finally, features from DINOv2, which carry rich object-level knowledge, are statically fused with the two augmented features by concatenation along the channel dimension, resulting in a unified feature with spatial, semantic, and temporal knowledge. A sketch of this fusion pipeline follows.
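To make the two fusion stages concrete, below is a minimal PyTorch sketch operating on flattened feature maps of matching channel dimension. The module names (DynamicFusion, UnifiedFeature), the use of a standard nn.MultiheadAttention, and all shapes are illustrative assumptions, not the released implementation.

    # Minimal sketch of the two fusion stages (hypothetical; names, dimensions,
    # and attention details are illustrative, not the released code).
    import torch
    import torch.nn as nn

    class DynamicFusion(nn.Module):
        """Augment feature x with complementary information from feature y
        via cross-attention (queries from x, keys/values from y)."""
        def __init__(self, dim: int, num_heads: int = 8):
            super().__init__()
            self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm = nn.LayerNorm(dim)

        def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
            # x, y: (B, N, C) flattened feature maps.
            out, _ = self.attn(query=x, key=y, value=y)
            return self.norm(x + out)  # residual keeps the original signal

    class UnifiedFeature(nn.Module):
        def __init__(self, dim: int):
            super().__init__()
            self.geo_branch = DynamicFusion(dim)  # low-level, geometric supervision
            self.sem_branch = DynamicFusion(dim)  # high-level, semantic supervision

        def forward(self, f_low, f_high, f_dino):
            # f_low / f_high: SD low-/high-level features, (B, N, C);
            # f_dino: DINOv2 features resampled to the same grid, (B, N, C_d).
            f_geo = self.geo_branch(f_low, f_high)
            f_sem = self.sem_branch(f_high, f_low)
            # Static fusion: concatenate along the channel dimension.
            return torch.cat([f_geo, f_sem, f_dino], dim=-1)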


Results

MATCHA tackles various matching tasks. Select each tab below to explore the results for each task.

Geometric Matching

MATCHA performs geometric matching in both outdoor and indoor environments and works with arbitrary keypoint detectors (e.g., SuperPoint and DISK).
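Detector-agnostic matching can be sketched as follows: sample the dense feature map at whatever keypoints a detector returns, then match the sampled descriptors with mutual nearest neighbors. This is a hypothetical illustration; the function names and the grid_sample-based sampling are assumptions, not the paper's pipeline.

    # Hypothetical sketch: pairing a dense feature map with an external keypoint
    # detector (e.g., SuperPoint or DISK) via sampling + mutual nearest neighbors.
    import torch
    import torch.nn.functional as F

    def sample_descriptors(feat, kpts, image_hw):
        """feat: (1, C, Hf, Wf) dense features; kpts: (N, 2) keypoints in (x, y)
        image pixel coordinates; image_hw: (H, W) of the input image."""
        h, w = image_hw
        # Normalize pixel coordinates to [-1, 1] for grid_sample.
        grid = kpts.clone()
        grid[:, 0] = 2.0 * kpts[:, 0] / (w - 1) - 1.0
        grid[:, 1] = 2.0 * kpts[:, 1] / (h - 1) - 1.0
        grid = grid.view(1, 1, -1, 2)                         # (1, 1, N, 2)
        desc = F.grid_sample(feat, grid, align_corners=True)  # (1, C, 1, N)
        return F.normalize(desc[0, :, 0].t(), dim=-1)         # (N, C), unit norm

    def mutual_nn_match(d0, d1):
        """Return (M, 2) index pairs that are mutual nearest neighbors."""
        sim = d0 @ d1.t()         # cosine similarity (descriptors are unit norm)
        nn01 = sim.argmax(dim=1)  # best match in image 1 for each kpt in image 0
        nn10 = sim.argmax(dim=0)  # best match in image 0 for each kpt in image 1
        ids0 = torch.arange(d0.shape[0])
        mutual = nn10[nn01] == ids0
        return torch.stack([ids0[mutual], nn01[mutual]], dim=1)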



Semantic Matching

MATCHA performs semantic matching for various objects.



Temporal Matching

MATCHA performs temporal matching by finding correspondences between the first frame and all subsequent frames.
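A minimal sketch of this tracking scheme, assuming unit-normalized dense features per frame and query descriptors sampled at first-frame points (names and shapes are illustrative assumptions, not the authors' code):

    # Hypothetical sketch: tracking query points from the first frame by matching
    # their descriptors against every subsequent frame's dense feature map.
    import torch

    def track_points(query_desc, frame_feats):
        """query_desc: (N, C) unit-norm descriptors from first-frame points;
        frame_feats: iterable of (C, H, W) unit-norm feature maps (frames 2..T).
        Returns a list of (N, 2) predicted (x, y) locations, one per frame."""
        tracks = []
        for feat in frame_feats:
            c, h, w = feat.shape
            sim = query_desc @ feat.view(c, -1)  # (N, H*W) cosine similarities
            idx = sim.argmax(dim=1)              # nearest neighbor per query point
            xy = torch.stack([idx % w, idx // w], dim=1).float()
            tracks.append(xy)
        return tracks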



Comparison with Previous Methods

We compare MATCHA against previous methods on each task. Select each tab below to explore the comparisons.

Comparison of Heatmaps

Given a query point in the source image, we visualize the similarity heatmap over the target image together with the predicted point (the nearest-neighbor retrieval).
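Such a heatmap can be reproduced, under assumptions, by taking the cosine similarity between the query's descriptor and every location of the target feature map. The function below is a hypothetical sketch, not the authors' visualization code.

    # Hypothetical sketch: similarity heatmap for a single query point.
    import torch
    import torch.nn.functional as F

    def query_heatmap(src_feat, tgt_feat, query_xy, image_hw):
        """src_feat / tgt_feat: (C, H, W) unit-norm feature maps; query_xy: (x, y)
        in feature-map coordinates; image_hw: output (H, W) for visualization."""
        q = src_feat[:, query_xy[1], query_xy[0]]     # (C,) query descriptor
        sim = torch.einsum('c,chw->hw', q, tgt_feat)  # cosine similarity map
        heat = F.interpolate(sim[None, None], size=image_hw,
                             mode='bilinear', align_corners=False)[0, 0]
        peak = (heat == heat.max()).nonzero()[0]      # (row, col) of best match
        return heat, peak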



Comparison of Semantic Matching

We visualize the inliers (green) and outliers (red) of semantic matching.
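The page does not state the inlier criterion; a common convention for semantic matching is the PCK test, which counts a match as an inlier when the prediction lands within alpha * max(H, W) of the ground truth. A hypothetical sketch under that assumption:

    # Hypothetical sketch: PCK-style inlier test commonly used for semantic
    # matching (inlier if the prediction is within alpha * max(H, W) of GT).
    import numpy as np

    def classify_matches(pred_xy, gt_xy, image_hw, alpha=0.1):
        """pred_xy, gt_xy: (N, 2) arrays of (x, y); image_hw: target (H, W).
        Returns a boolean mask: True for inliers (green), False for outliers (red)."""
        thresh = alpha * max(image_hw)
        err = np.linalg.norm(pred_xy - gt_xy, axis=1)
        return err <= thresh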



Comparison of Geometric Matching

For geometric matching, we visualize the inlier correspondences remaining after PnP + RANSAC.
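A minimal sketch of such an inlier check using OpenCV's solvePnPRansac, assuming 2D-3D correspondences and known intrinsics K (variable names and thresholds are illustrative, not the paper's settings):

    # Hypothetical sketch: recovering camera pose and inlier matches with
    # OpenCV's PnP + RANSAC, given 2D-3D correspondences and intrinsics K.
    import cv2
    import numpy as np

    def pnp_ransac_inliers(pts3d, pts2d, K, reproj_err=8.0):
        """pts3d: (N, 3) scene points; pts2d: (N, 2) matched image points;
        K: (3, 3) camera intrinsics. Returns pose (rvec, tvec) and inlier indices."""
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            pts3d.astype(np.float64), pts2d.astype(np.float64), K, None,
            reprojectionError=reproj_err, iterationsCount=1000)
        if not ok or inliers is None:
            return None, None, np.empty(0, dtype=int)
        return rvec, tvec, inliers.ravel()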



Comparison of Temporal Matching

We visualize the inliers (green) and outliers (red) of temporal matching.




BibTeX


    @inproceedings{matcha,
      author    = {Fei Xue and Sven Elflein and Laura Leal-Taixé and Qunjie Zhou},
      title     = {MATCHA: Towards Matching Anything},
      booktitle = {CVPR},
      year      = {2025},
    }