Learning visual similarities robust to bias
| Author | William Eric Thong |
| Promotor(s) | Prof.dr. C.G.M. Snoek, Prof.dr.ir. A.W.M. Smeulders |
| Year of publication | 2022 |
| Link to repository | Link to thesis |
This thesis investigates how visual similarities help to learn models robust to bias for computer vision tasks. Models should be able to adapt constantly to new and changing environments without being biased toward what they have seen during training. Throughout this thesis, we focus on the research question: how to learn visual similarities robust to bias? We explore this question through multiple facets of bias, with visual similarities as the common theme for addressing them. We start with categorization across multiple domains, then investigate the ability to retrieve seen and unseen attribute combinations, followed by a study of the confidence of image classifiers toward seen classes, and finally the identification and mitigation of adverse predictions in image classifiers.