Self-Supervised Unconstrained Photo-consistent Image Transform for Improved Matching
Under the supervision of Prof. Guy Gilboa, Viterbi Faculty of Electrical Engineering, Technion
Image processing and computer vision tasks often benefit from representations that are invariant to certain image changes. Photo-consistency is a highly desired property, essential for tasks that rely on color and contrast cues, such as matching, registration, and recognition. Traditionally, such representations were designed in a model-based manner; lately, with the rise of deep learning, new data-driven algorithms have been proposed to solve this problem.
In our work, we propose a new, completely data-driven approach for generating an unconstrained photo-consistent image transform. We show that classical algorithms, when operating in the transform domain, become extremely resilient to illumination changes. This considerably improves matching accuracy, outperforming both state-of-the-art invariant representations and new matching methods based on deep features. The transform is produced by a neural network, referred to as PhIT-Net (Photo-consistent Image Transform Network), which is trained in a self-supervised manner with a specialized triplet loss.
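To give a flavor of the training objective, the sketch below shows a generic triplet margin loss in NumPy. The announcement does not specify the exact form of the specialized loss used by PhIT-Net, so this is only an illustrative baseline: the anchor and positive would be transforms of the same scene under different illuminations, and the negative a transform of a different scene; the `margin` value and squared-distance metric are assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Generic triplet margin loss (illustrative, not the exact PhIT-Net loss).

    Pulls the anchor toward the positive (same scene, different illumination)
    and pushes it away from the negative (different scene) by at least `margin`.
    Inputs are (batch, features) arrays of transform-domain representations.
    """
    # Squared Euclidean distances per batch element
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    # Hinge: zero loss once the negative is `margin` farther than the positive
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()
```

When the negative is already well separated, the hinge clamps the loss to zero, so training focuses on triplets that still violate the margin.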
The transform yields a highly flexible representation that can easily be used for various tasks. We note that the utility of our method is not restricted to illumination invariance; it may also be applied to generate representations that are invariant to other types of nuisance (undesired) image variations.
Thu 23 Apr 2020
Start Time: 11:30
Zoom | Electrical Eng. Building