Biography
Dr. João Henriques is a Royal Academy of Engineering Research Fellow working at the Visual Geometry Group (VGG) at the University of Oxford. His research focuses on computer vision and deep learning, with the goal of making machines more perceptive, intelligent, and capable of helping people. He created the KCF and SiameseFC visual object trackers, which won the highly competitive VOT Challenge twice and are widely deployed in consumer hardware, from Facebook apps to commercial drones. His research spans many topics: robot mapping and navigation, including reinforcement learning and 3D geometry; multi-agent cooperation and "friendly" AI; and various forms of learning, from self-supervised to causal and meta-learning, as well as optimisation theory. For the latest research, please refer to: https://www.robots.ox.ac.uk/~joao/
Most Recent Publications
SNeS: learning probably symmetric neural surfaces from incomplete data
Towards real-world navigation with deep differentiable planners
Learning altruistic behaviours in reinforcement learning without external rewards
Space-Time Crop & Attend: improving cross-modal video representation learning
On compositions of transformations in contrastive self-supervised learning