From Oxfordshire to Silicon Valley

Machine Learning; Vision

If you (or your children) spent summer 2016 trying to track down that last elusive Jigglypuff, you’ll already know the thrill of augmented reality (AR).

The wildly popular mobile game Pokémon Go made high-profile use of AR technology, utilising players’ mobile phone cameras to visualise its fantastical creatures in the real world. The result was an app that was downloaded over 800 million times, and a demonstration of this young technology’s future potential.

For Professor Victor Prisacariu, Dyson Associate Professor in Information Engineering and co-founder of an up-and-coming spinout tech firm, this is only the start of what AR can accomplish. “Over the next few years,” he predicts, “AR will continue to gradually solve the key challenges standing in its way and will find success across industries, some of which, like face filters on Snapchat, we’re already seeing.”

“What is exciting for me is that we can shape the direction of the industry and establish ourselves as a key foundation for developers to build the next wave of spatial applications.”

The company arose from work that Victor had been leading for several years at the Oxford Active Vision Laboratory, a research group that seeks to solve problems in computational vision, particularly by developing mechanisms through which computers can build and understand 3D reconstructions of their surroundings. The group works on applications for localisation and mapping, wearable and assistive computing, semantic vision, augmented reality, human motion analysis, and navigation.

The spinout’s flagship product uses a standard built-in smartphone camera to build a three-dimensional map of the world, all in real time. The service is designed to run in the background of other developers’ AR-enabled apps. For the first time, it gives them the ability both to remember a user’s previous spaces and to crowd-source a 3D map of the world from footage supplied by multiple users, sharing information between them to make the map ever more accurate and immersive.
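The crowd-sourcing idea can be illustrated with a toy sketch (this is a hypothetical simplification, not the company’s actual system): each device contributes the 3D points it observes, the points are quantised into voxels in a shared map, and voxels observed by multiple users become trusted parts of the reconstruction.

```python
from collections import defaultdict

VOXEL = 0.1  # voxel edge length in metres (illustrative choice)

def voxel_key(point):
    """Quantise a 3D point into an integer voxel index."""
    return tuple(int(c // VOXEL) for c in point)

class SharedMap:
    """A toy world map fused from many users' observations."""
    def __init__(self):
        self.hits = defaultdict(int)  # voxel index -> observation count

    def integrate(self, points):
        """Fuse one device's observed 3D points into the shared map."""
        for p in points:
            self.hits[voxel_key(p)] += 1

    def confident_voxels(self, min_hits=2):
        """Return voxels seen often enough to be trusted."""
        return {v for v, n in self.hits.items() if n >= min_hits}

# Two users scan overlapping parts of the same room; where their
# footage overlaps, the shared map becomes more reliable.
world = SharedMap()
world.integrate([(0.05, 0.0, 0.0), (0.95, 0.0, 0.0)])  # user A
world.integrate([(0.06, 0.01, 0.0), (2.0, 0.0, 0.0)])  # user B
```

Here only the voxel covered by both users’ observations passes the confidence threshold, which is the sense in which more users make the map “ever more accurate”.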

This provides a number of benefits. It means that multiplayer games can be played across various devices, all taking place in the same physical space. Digital effects can actually interact with the 3D world around us: instead of just placing an animation on top of the camera input, you can watch (for example) a digital mouse scurrying around your own real-life furniture or throw a virtual ball into the actual bowl sitting on your coffee table.
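A minimal sketch of that last idea, under the assumption that the AR layer can answer “how high is the real surface here?” from its reconstructed mesh (the `surface_height` query and all numbers are hypothetical, not an actual API):

```python
def surface_height(x, z):
    """Hypothetical query against a reconstructed mesh: the height of
    the real-world surface under (x, z). Here, a table top 0.7 m up."""
    return 0.7

def step_ball(y, vy, dt=0.02, g=-9.8, restitution=0.6):
    """Advance a virtual ball one physics step, bouncing it off the
    reconstructed surface instead of letting it pass through."""
    vy += g * dt
    y += vy * dt
    floor = surface_height(0.0, 0.0)
    if y < floor:              # collision with real-world geometry
        y = floor
        vy = -vy * restitution  # lose energy on each bounce
    return y, vy

# Drop a virtual ball from 1.5 m; over 10 simulated seconds it
# bounces and settles onto the real table top at 0.7 m.
y, vy = 1.5, 0.0
for _ in range(500):
    y, vy = step_ball(y, vy)
```

Without the 3D map, the only available behaviour is overlaying the ball on the video; with it, the ball can come to rest on the actual table.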

This is the result of cutting-edge research undertaken here in Oxford, but when it came to launching the company, Victor and his colleagues looked slightly further afield. “We chose to base ourselves in San Francisco because we have ambitions to become a major AR platform,” he explains.

“Being based close to Apple and Google, the leading AR platforms, and to the technical and executive talent nearby meant that we had the best chance of success. In addition, the large investment community meant that we could obtain support from, rather than compete with, the Silicon Valley ecosystem.”

It’s been a busy few months for the team, and it sounds like Victor will be racking up the air miles for a while yet. “We spent the last several months productising the research from the Active Vision Lab, and we’re now in the process of bringing our first customers onto our Reality Platform.

“The response from our first beta testers has been overwhelmingly enthusiastic and that excitement from our customers is what motivates us to keep going.”
