Ronald is a post-doctoral Research Fellow at Imperial College London. Ronald obtained his PhD from the University of Oxford, where he held an EPSRC studentship. His research interests are in mobile perception, including robust 3D reconstruction on mobile devices, SLAM and semantic scene understanding using learning-based methods. In the past he has worked on machine learning for natural user interaction and optimal systems design. Ronald has experience in both the computer animation and aerospace industries. He received BSc and MSc degrees in Information Engineering from the University of the Witwatersrand in 2014, specializing in non-linear systems and control.
Sudeep is a Research Scientist at Toyota Research Institute. He obtained his PhD from MIT under John Leonard. He completed his M.S. in Computer Science from EECS, MIT, under the advisorship of Seth Teller. Prior to coming to MIT, he was a computer vision developer at PhaseSpace Motion Capture for two years, working on real-time computer-vision technologies. He completed his B.S. in Mechanical Engineering in 2008 at the University of Michigan - Ann Arbor. He has also held internships at companies including Mitsubishi Electric Research Labs (MERL) and Segway. He is particularly interested in developing SLAM-aware robots that can learn persistently in an environment from visual experience. His research attempts to understand the capabilities at the intersection of object and scene understanding and Simultaneous Localization and Mapping (SLAM).
Alex is a Research Fellow at Trinity College at the University of Cambridge in the United Kingdom. He graduated with a Bachelor of Engineering with First Class Honours in 2013 from the University of Auckland, New Zealand. In 2014, he was awarded a Woolf Fisher Scholarship to study towards a Ph.D. at the University of Cambridge. He is a member of the Machine Intelligence Laboratory and is supervised by Prof. Roberto Cipolla. Alex’s research investigates applications of deep learning for robot perception and control. He has developed computer vision algorithms to enable autonomous vehicles to understand complex and dynamic scenes. In particular, he is excited about leveraging geometry for unsupervised learning, reasoning under uncertainty with Bayesian deep learning and developing end-to-end systems which can reason from perception to control. His technology has been used to power smart-city infrastructure with Vivacity, control self-driving cars with Toyota Research Institute and enable next-generation drone flight with Skydio.
Will Maddern is a Senior Researcher with the Oxford Robotics Institute at the University of Oxford, and flagship lead for the Oxford RobotCar project (robotcar.org.uk). He leads a team focusing on autonomous driving in urban environments; primarily localisation, mapping and navigation using vision, LIDAR and radar, along with path planning, control, and obstacle perception using deep learning. Prior to joining Oxford, Will completed a Bachelor of Engineering (Mechatronics) at the University of Queensland and a PhD in robot navigation at the Queensland University of Technology, both in Brisbane, Australia.
Andrew is a leader in international research in real-time visual SLAM algorithms. His highly cited MonoSLAM algorithm opened the door for devices with low-cost cameras to localise in and understand their surroundings. This work is having huge industrial impact in robotics, augmented reality and mobile devices. He has worked for over 10 years with Dyson to design the core SLAM algorithms at the heart of the company's first robotic product, the Dyson 360 Eye, which went on sale around the world in 2016. He founded and directs the Dyson Robotics Laboratory at Imperial College London.