I am a postdoctoral research fellow working in the Virtual Environments and Computer Graphics group at University College London with Anthony Steed. My research focuses on rendering and computer vision techniques for virtual and augmented reality.
Computer Graphics for Mixed Reality
Mixed reality creates new challenges for producing high-quality real-time graphics. Rendered virtual content needs not only to appear realistic, but also to be consistent with the existing real content. One challenge is how to efficiently estimate the real lighting environment, and how to use that estimate to render the virtual content. Another is accurately determining where virtual content is obscured by real content, and so should not be rendered.
Real-time Computer Graphics and Perception
Advances in display technology are enabling increasingly high-fidelity displays. As pixel density increases, however, it becomes ever more challenging to render content at full resolution in real time, and the bandwidth required to stream content to these displays becomes problematic. Given access to eye tracking, how can we exploit this information to reduce bandwidth and computational requirements without sacrificing perceived quality?
Beyond Blur: Real-time Metamers for Foveated Rendering
D Walton, R Kuffner dos Anjos, S Friston, D Swapp, A Steed, T Ritschel
ACM Trans Graph (Proc. SIGGRAPH 2021) 40(4) [PDF] [Webpage]
This work introduces a new method for foveated rendering using ventral image metamers: alternative versions of an image which are indistinguishable from the original for a given fixation point. We introduce a method to extract a model of the perceivable components of an image for a given fixation point, and a method to convert this model into a metamer of the input. Both methods are fast, and the model is compact, allowing metamers to be used for the first time in real-time compression and rendering applications.
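The core idea can be illustrated with a toy sketch (this is not the paper's model, and the parameter names `ppd`, `k` and `r0` are hypothetical): the size of the pooling region over which image content can be summarised by local statistics grows roughly linearly with eccentricity from the fixation point, so peripheral pixels tolerate far more variation than foveal ones.

```python
import numpy as np

def pooling_radius(px, py, fx, fy, ppd=40.0, k=2.0, r0=0.5):
    """Illustrative pooling-region radius (in pixels) at pixel
    (px, py) for a fixation point (fx, fy). The radius is small at
    the fovea and grows linearly with eccentricity in visual
    degrees; ppd is the display's pixels-per-degree."""
    ecc_deg = np.hypot(px - fx, py - fy) / ppd
    return r0 + k * ecc_deg
```

In a metamer-based pipeline, each pixel's pooling radius determines how aggressively its neighbourhood can be replaced by statistically matched content, which is what enables compression gains without perceptible loss.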
Improved Real-time Rendering for Mixed Reality (EngD Thesis)
David R. Walton [PDF]
My EngD project focused on real-time techniques for enhancing graphics in MR applications. In particular, I investigated how to interpret and exploit data from sensors such as RGBD and fisheye cameras to capture detailed information about the real world, and render more realistic and consistent MR scenes.
Dynamic HDR Environment Capture for Mixed Reality
David R. Walton and Anthony Steed
VRST 2018 [PDF] [Bibtex] [Video]
This paper built on the earlier work in Synthesis of Environment Maps for Mixed Reality below. We present new techniques for estimating full HDR environment maps and responding much more quickly to changes in the real 3D environment, without requiring any additional sensing hardware. We demonstrate a full AR application working in real time.
Accurate Real-time Occlusion for Mixed Reality
David R. Walton and Anthony Steed
VRST 2017 [PDF] [Bibtex] [Video]
In MR applications, correctly handling occlusion of virtual objects by real ones is critical to maintaining a good user experience, but this remains a significant challenge. Consumer depth sensors can be used for this purpose, but the depth maps they provide are noisy, incomplete and often unreliable. This paper presents a technique for using these depth maps to estimate a high-quality occlusion matte. We also develop a technique for quantitatively comparing the quality of AR occlusion handling methods, and use it to assess our approach and others.
Synthesis of Environment Maps for Mixed Reality
David R. Walton, Diego Thomas, Anthony Steed and Akihiro Sugimoto
ISMAR 2017 [PDF] [Bibtex]
High-quality estimation of surrounding lighting is important for rendering realistic virtual objects in AR. Particularly when rendering specular, mirror-like virtual materials, high-frequency environment lighting is required. This paper presents techniques for estimating a 360 degree environment map around a virtual object, constructed using data from a two-camera system consisting of a depth camera and a fisheye camera. We show how these sensors can be used in a real-time system that tracks the motion of the cameras and updates a 3D scene model in real time, using this to estimate environment maps and render realistic 3D objects.
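To show why a full environment map matters for specular materials, the sketch below (an illustrative stand-in, not the paper's system; the function names are hypothetical) shades a perfect mirror by reflecting the view direction and sampling an equirectangular (lat-long) environment map. Any gap or error in the map appears directly in the reflection, which is why high-frequency capture is needed.

```python
import numpy as np

def latlong_sample(env, d):
    """Nearest-neighbour sample of an equirectangular environment
    map `env` (H x W) with a unit direction d = (x, y, z), y up."""
    x, y, z = d
    u = np.arctan2(x, -z) / (2.0 * np.pi) + 0.5    # azimuth -> [0, 1)
    v = np.arccos(np.clip(y, -1.0, 1.0)) / np.pi   # polar   -> [0, 1]
    h, w = env.shape[:2]
    return env[min(int(v * h), h - 1), min(int(u * w), w - 1)]

def mirror_shade(env, normal, view):
    """Shade a perfect mirror: reflect the view vector about the
    surface normal and look up the incoming radiance."""
    n = np.asarray(normal, float); n /= np.linalg.norm(n)
    v = np.asarray(view, float);   v /= np.linalg.norm(v)
    r = v - 2.0 * np.dot(v, n) * n                 # reflection direction
    return latlong_sample(env, r)
```

A diffuse material, by contrast, integrates over the whole hemisphere and so tolerates a much blurrier lighting estimate; mirror-like materials expose every deficiency of the captured map.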
Augmented Reality Occlusion
David R. Walton, Imagination Technologies Ltd [Google Patents]
Research Associate, UCL 2020-Present
I currently work as a research fellow in the Virtual Environments and Computer Graphics group at UCL.
Research Engineer, Imagination Technologies 2018-2020
From 2018 to 2020 I continued to work at Imagination Technologies, where I was promoted to Research Engineer.
EngD Student, UCL & Imagination Technologies 2014-2018
From 2014 to 2018 I worked on an EngD project, a collaboration between the Virtual Environments, Imaging and Visualisation Centre at UCL and Imagination Technologies. It focused on novel real-time rendering techniques for AR, particularly techniques applicable to mobile devices. I was supervised by Prof. Anthony Steed at UCL, and by Luke Peterson and Paul Brasnett at Imagination Technologies.
The EngD included a taught MRes component, during which I took courses including Computer Vision, Computer Graphics and Virtual Reality. As part of the VR course group project, we developed an immersive VR application for the CAVE. I passed the MRes component with a distinction and was added to the Dean’s list.
During the EngD I completed a six-month internship at the National Institute of Informatics in Tokyo, supervised by Prof. Akihiro Sugimoto and collaborating with Diego Thomas of Kyushu University.
BSc Mathematics, MSc Computer Science and Applications, University of Warwick 2009-2013
I completed my BSc in Mathematics at the University of Warwick, obtaining a 1st class degree. I continued on to an MSc in Computer Science and Applications, gaining a distinction and the prize for best overall graduating student. My MSc dissertation focused on techniques for real-time computer graphics in OpenGL.
Email: david.walton.13 at ucl.ac.uk