Abstract
Model-based control is a popular paradigm for robot navigation because it can leverage a known dynamics model to efficiently plan robust robot trajectories. The central challenge in robot navigation is operating safely and efficiently in environments that are unknown ahead of time and can only be partially observed through onboard sensors. In this talk, we present our work on coupling learning-based perception with model-based control. The learning-based perception module is trained using optimal control and produces a sequence of waypoints that guide the robot to the goal along a collision-free path. A model-based planner then uses these waypoints to generate a smooth, dynamically feasible trajectory, which is executed on the physical system via feedback control. Our experiments in simulated real-world cluttered environments and on an actual ground vehicle demonstrate that the proposed approach reaches goal locations in novel environments more reliably and efficiently than purely mapping-based or end-to-end learning-based alternatives. We conclude with some lessons learned from using learning-based perception in a feedback control loop.
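The perception–planning–control loop described above can be sketched in a few lines. This is a minimal, illustrative stand-in, not the actual system: the perception step below is a hand-written heuristic (the real module is a learned network trained with optimal-control supervision), the planner is plain linear interpolation rather than a dynamically feasible spline, and all function names and parameters are hypothetical.

```python
import math

def predict_waypoint(pose, goal, obstacles, step=1.0, clearance=0.6):
    """Stand-in for the learned perception module: step toward the goal,
    sidestepping any obstacle the candidate waypoint would graze."""
    x, y = pose
    dx, dy = goal[0] - x, goal[1] - y
    d = math.hypot(dx, dy)
    if d < step:
        return goal
    ux, uy = dx / d, dy / d                     # unit vector toward the goal
    wx, wy = x + step * ux, y + step * uy
    for ox, oy, r in obstacles:                 # obstacles: (x, y, radius)
        if math.hypot(wx - ox, wy - oy) < r + clearance:
            # push the waypoint perpendicular to the current heading
            wx, wy = wx - uy * (r + clearance), wy + ux * (r + clearance)
    return wx, wy

def plan_trajectory(pose, waypoint, n=10):
    """Stand-in for the model-based planner: linear interpolation
    (a real planner would fit a smooth, dynamically feasible spline)."""
    x, y = pose
    wx, wy = waypoint
    return [(x + (wx - x) * k / n, y + (wy - y) * k / n)
            for k in range(1, n + 1)]

def track(pose, trajectory, kp=0.8):
    """Proportional feedback controller stepping the pose along the plan."""
    x, y = pose
    executed = []
    for tx, ty in trajectory:
        x, y = x + kp * (tx - x), y + kp * (ty - y)
        executed.append((x, y))
    return executed

def run_to_goal(start, goal, obstacles, tol=0.1, max_iters=30):
    """Alternate perception, planning, and tracking until the goal is reached."""
    pose, path = start, [start]
    for _ in range(max_iters):
        if math.hypot(goal[0] - pose[0], goal[1] - pose[1]) < tol:
            break
        waypoint = predict_waypoint(pose, goal, obstacles)
        path += track(pose, plan_trajectory(pose, waypoint))
        pose = path[-1]
    return path
```

For example, `run_to_goal((0.0, 0.0), (5.0, 0.0), [(2.5, 0.0, 0.5)])` steers a point robot around a circular obstacle midway along the straight line to the goal, replanning a fresh waypoint each iteration as the talk's pipeline does with learned perception.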
This is joint work with Somil Bansal, Varun Tolani, Saurabh Gupta, Andrea Bajcsy, and Jitendra Malik.