We develop a local path planner tailored to path-following tasks, which allows a lidar variant of VT&R3 to reliably avoid obstacles during path repeating. The planner is demonstrated within VT&R3 but generalizes to other path-following applications.
We present the first continuous-time lidar-only odometry algorithm that uses Doppler velocity measurements from an FMCW lidar to aid odometry in geometrically degenerate environments.
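To illustrate the core idea of Doppler-aided estimation, here is a minimal sketch (not the paper's continuous-time estimator): for static scene points, each FMCW lidar return's measured radial velocity is approximately the negative projection of the sensor's ego-velocity onto the point's bearing direction, so the ego-velocity can be recovered by least squares even when the geometry is degenerate for point-cloud registration. The function name and interface below are illustrative assumptions.

```python
import numpy as np

def ego_velocity_from_doppler(directions, doppler):
    """Estimate sensor ego-velocity from per-point Doppler measurements.

    directions: (N, 3) unit bearing vectors to lidar points
    doppler:    (N,) measured radial velocities in m/s

    For a static point i, doppler_i ~= -d_i . v_ego, so we solve the
    linear least-squares problem D v = -doppler for v_ego.
    """
    v_ego, *_ = np.linalg.lstsq(directions, -np.asarray(doppler), rcond=None)
    return v_ego
```

With enough non-coplanar bearing directions the system is well conditioned, which is why Doppler measurements help in geometrically degenerate environments (e.g. tunnels) where ICP-style registration alone fails.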
We present an extensive comparison of three topometric localization systems: radar-only, lidar-only, and a cross-modal radar-to-lidar system, evaluated across varying seasonal and weather conditions using the Boreas dataset.
The Boreas dataset was collected by driving a repeated route over the course of one year, resulting in stark seasonal variation. In total, Boreas contains over 350 km of driving data, including several sequences with adverse weather conditions such as rain and heavy snow.
We provide a demo of Visual Teach and Repeat 3 for autonomous path following on a mobile robot, which uses deep-learned features to tackle localization across challenging appearance change. Corresponding paper on deep-learned features: link.
VT&R3 is a C++ implementation of the Teach and Repeat navigation framework developed at ASRL. It allows a user to teach a robot a large (kilometer-scale) network of paths, on which the robot navigates freely via accurate (centimeter-level) path following, using a lidar, radar, or camera as the primary sensor (no GPS).
We propose a method that combines reinforcement and imitation learning by shaping the reward function with a state-and-action-dependent potential trained from demonstration data using a generative model.
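The shaping described above can be sketched as standard potential-based reward shaping with a state-and-action-dependent potential. In the sketch below, `phi(s, a)` stands in for the demonstration-trained potential (e.g. a generative model's score for how demonstration-like a state-action pair is); the function names and signatures are illustrative assumptions, not the paper's implementation.

```python
def shaped_reward(r, phi, s, a, s_next, a_next, gamma=0.99):
    """Potential-based shaping with a state-and-action potential.

    r:      environment reward for taking action a in state s
    phi:    potential function phi(s, a), trained from demonstrations
    gamma:  discount factor

    The shaping term gamma * phi(s', a') - phi(s, a) rewards moving
    toward state-action pairs the demonstrator would choose.
    """
    return r + gamma * phi(s_next, a_next) - phi(s, a)
```

For state-only potentials this form of shaping is known to preserve the optimal policy; with state-and-action potentials the shaping additionally biases exploration toward demonstrated actions.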