Ronald is a postdoctoral Research Fellow at Imperial College London. He obtained his PhD from the University of Oxford, where he held an EPSRC studentship. His research interests are in mobile perception, including robust 3D reconstruction on mobile devices, SLAM, and semantic scene understanding using learning-based methods. In the past he has worked on machine learning for natural user interaction and on optimal systems design. Ronald has experience in both the computer animation and aerospace industries. He received BSc and MSc degrees in Information Engineering from the University of the Witwatersrand in 2014, specializing in non-linear systems and control.
Angela Dai is a junior research group leader at the Technical University of Munich. Her research focuses on creating high-quality 3D models of real-world environments, towards enabling human-level scene understanding and democratizing 3D scanning for content creation and mixed-reality scenarios. She completed her Ph.D. in Computer Science at Stanford University, advised by Pat Hanrahan. During her PhD, she advanced real-time 3D reconstruction and leveraged it to develop machine learning approaches that improve the reconstruction quality and the semantic and instance understanding of these 3D scans. Angela received her Bachelor's degree in Computer Science from Princeton University. Her work has been recognized with a Professor Michael J. Flynn Stanford Graduate Fellowship and a €1.25 million ZDB junior research group award.
Alex is a Research Fellow at Trinity College, University of Cambridge, in the United Kingdom. He graduated with a Bachelor of Engineering with First Class Honours in 2013 from the University of Auckland, New Zealand. In 2014, he was awarded a Woolf Fisher Scholarship to study towards a Ph.D. at the University of Cambridge. He is a member of the Machine Intelligence Laboratory and is supervised by Prof. Roberto Cipolla. Alex's research investigates applications of deep learning for robot perception and control. He has developed computer vision algorithms to enable autonomous vehicles to understand complex and dynamic scenes. In particular, he is excited about leveraging geometry for unsupervised learning, reasoning under uncertainty with Bayesian deep learning, and developing end-to-end systems which can reason from perception to control. His technology has been used to power smart-city infrastructure with Vivacity, control self-driving cars with Toyota Research Institute, and enable next-generation drone flight with Skydio.
Sudeep is a Research Scientist at Toyota Research Institute. He obtained his PhD from MIT under John Leonard, and completed his M.S. in Computer Science in MIT's EECS department under the supervision of Seth Teller. Prior to coming to MIT, he was a computer vision developer at PhaseSpace Motion Capture for two years, working on real-time computer vision technologies. He completed his B.S. in Mechanical Engineering in 2008 at the University of Michigan, Ann Arbor. He has also held internships at companies including Mitsubishi Electric Research Labs (MERL) and Segway. He is particularly interested in developing SLAM-aware robots that can learn persistently in an environment from visual experience. His research attempts to understand the capabilities at the intersection of object and scene understanding and Simultaneous Localization and Mapping (SLAM).