Establishing correspondences across images is a fundamental challenge in computer vision, underpinning tasks such as Structure-from-Motion, image editing, and point tracking. Traditional methods are often specialized for a single correspondence type (geometric, semantic, or temporal), whereas humans naturally identify alignments across all of these domains. Inspired by this flexibility, we propose MATCHA, a unified feature model designed to “rule them all”, establishing robust correspondences across diverse matching tasks. Building on the insight that diffusion model features can encode multiple correspondence types, MATCHA augments this capacity by dynamically fusing high-level semantic and low-level geometric features through an attention-based module, yielding expressive, versatile, and robust features. Additionally, MATCHA integrates object-level features from DINOv2 to further boost generalization, enabling a single feature capable of matching anything. Extensive experiments validate that MATCHA consistently surpasses state-of-the-art methods across geometric, semantic, and temporal matching tasks, laying a new foundation for a unified approach to the fundamental correspondence problem in computer vision. To the best of our knowledge, MATCHA is the first approach that can effectively tackle diverse matching tasks with a single unified feature.
MATCHA is built on top of the Stable Diffusion (SD) and DINOv2 foundation models. Given an image, SD first extracts low- and high-level features with rich spatial and semantic information at different steps. The two features are dynamically fused via an attention mechanism so that each is augmented with the other's complementary information. The two augmented features are jointly supervised with ground-truth geometric and semantic correspondences, respectively, which significantly improves their performance on geometric and semantic matching tasks. Finally, features from DINOv2, which carry rich object-level knowledge, are statically fused with the two augmented features by concatenation along the channel dimension, resulting in a unified feature with spatial, semantic, and temporal knowledge.
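To make the dynamic and static fusion concrete, below is a minimal PyTorch sketch of the two steps. Module names, tensor shapes, and the attention configuration are our own illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn as nn

class DynamicFusion(nn.Module):
    """Cross-attention block letting one feature stream attend to the other."""
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_feat: torch.Tensor, context_feat: torch.Tensor) -> torch.Tensor:
        # query_feat, context_feat: (B, N, C) tokens from flattened feature maps
        fused, _ = self.attn(query_feat, context_feat, context_feat)
        return self.norm(query_feat + fused)  # residual connection

class UnifiedFeature(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Two fusion directions: geometry attends to semantics and vice versa.
        self.geo_from_sem = DynamicFusion(dim)
        self.sem_from_geo = DynamicFusion(dim)

    def forward(self, f_low, f_high, f_dino):
        # f_low:  low-level (geometric) SD features,  (B, N, C)
        # f_high: high-level (semantic) SD features,  (B, N, C)
        # f_dino: DINOv2 features, (B, N, C_dino), resampled to the same N tokens
        f_geo = self.geo_from_sem(f_low, f_high)  # geometric stream, semantically augmented
        f_sem = self.sem_from_geo(f_high, f_low)  # semantic stream, geometrically augmented
        # Static fusion: concatenate along the channel dimension.
        return torch.cat([f_geo, f_sem, f_dino], dim=-1)
```

In this sketch, the two fused streams correspond to the geometry- and semantics-supervised features described above, and the final concatenation mirrors the static fusion with DINOv2.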
MATCHA tackles various matching tasks. Select each tab below to explore the results for each task.
MATCHA performs geometric matching in both outdoor and indoor environments and works with arbitrary keypoint detectors (e.g., SuperPoint and DISK); see the sketch after these task descriptions.
MATCHA performs semantic matching for various objects.
MATCHA performs temporal matching by finding correspondences between the first frame and all subsequent frames.
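The detector-agnostic usage follows from MATCHA producing a dense feature map: descriptors for any detector's keypoints can be obtained by sampling that map at the detected locations. A hedged sketch (function name and shapes are illustrative assumptions, not the released code):

```python
import torch
import torch.nn.functional as F

def sample_descriptors(feat, keypoints, image_size):
    # feat: (1, C, Hf, Wf) dense feature map; keypoints: (N, 2) pixel (x, y)
    # image_size: (H, W) of the original image
    H, W = image_size
    # Normalize pixel coordinates to [-1, 1], as required by grid_sample.
    grid = keypoints.clone().float()
    grid[:, 0] = grid[:, 0] / (W - 1) * 2 - 1
    grid[:, 1] = grid[:, 1] / (H - 1) * 2 - 1
    grid = grid.view(1, 1, -1, 2)                         # (1, 1, N, 2)
    desc = F.grid_sample(feat, grid, align_corners=True)  # (1, C, 1, N)
    return F.normalize(desc[0, :, 0].t(), dim=-1)         # (N, C) unit descriptors
```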
Select each tab below to explore qualitative visualizations for each task.
Given a query point in the source image, we visualize the similarity heatmap on the target image together with the predicted matching point (sketched in code below).
For semantic matching, we visualize inliers (green) and outliers (red).
For geometric matching, we visualize the inliers after PnP + RANSAC.
For temporal matching, we visualize inliers (green) and outliers (red).
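The heatmap visualization amounts to correlating the query point's descriptor with the dense target feature map and taking the argmax as the match. A minimal sketch under that assumption (names and normalization choices are ours, not the released code):

```python
import torch
import torch.nn.functional as F

def similarity_heatmap(feat_src, feat_tgt, query_xy):
    """Correlate a query-point descriptor with a dense target feature map."""
    # feat_src, feat_tgt: (C, H, W) dense feature maps of the two images
    feat_src = F.normalize(feat_src, dim=0)        # unit-norm descriptor per pixel
    feat_tgt = F.normalize(feat_tgt, dim=0)
    x, y = query_xy
    q = feat_src[:, y, x]                          # (C,) descriptor at the query point
    heat = torch.einsum("c,chw->hw", q, feat_tgt)  # cosine-similarity heatmap
    my, mx = divmod(heat.argmax().item(), heat.shape[-1])  # location of the best match
    return heat, (mx, my)
```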
@inproceedings{matcha,
  author    = {Fei Xue and Sven Elflein and Laura Leal-Taixé and Qunjie Zhou},
  title     = {{MATCHA}: Towards Matching Anything},
  booktitle = {CVPR},
  year      = {2025},
}