* **Reading: Read PRR Chapter 10**
* *Reminder: Readings are your responsibility. You will be expected to come to class prepared, having read the material, and ready to participate in the discussion.*
### Logistics

* How is Lab time going?
* Tour of Labnotebook2!
## Localization

* What exactly is the problem with knowing where you are?
* Remember all of this needs an agreed upon coordinate system!
* How would you do it with your eyes closed?
* Recall **odometry** is like **dead reckoning** (see the sketch below)
  * Lots of noise in the signal; accuracy varies; errors build up
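To make the "errors build up" point concrete, here is a minimal dead-reckoning sketch (not from the reading; the noise levels are invented, illustrative values). The robot integrates noisy wheel odometry, and the estimated pose drifts further from the true pose the longer it runs.

```python
import math
import random

def integrate_odometry(steps, v=0.1, w=0.05, v_noise=0.01, w_noise=0.01):
    """Integrate commanded forward velocity v and turn rate w for `steps` ticks.

    Returns (true_pose, estimated_pose) as (x, y, theta) tuples so the
    accumulated dead-reckoning error can be compared.
    """
    true_x = true_y = true_th = 0.0
    est_x = est_y = est_th = 0.0
    for _ in range(steps):
        # The robot actually moves with the commanded velocities...
        true_x += v * math.cos(true_th)
        true_y += v * math.sin(true_th)
        true_th += w
        # ...but the encoders report slightly corrupted values, and those
        # small errors get integrated forever.
        v_meas = v + random.gauss(0.0, v_noise)
        w_meas = w + random.gauss(0.0, w_noise)
        est_x += v_meas * math.cos(est_th)
        est_y += v_meas * math.sin(est_th)
        est_th += w_meas
    return (true_x, true_y, true_th), (est_x, est_y, est_th)

true_pose, est_pose = integrate_odometry(steps=500)
err = math.hypot(true_pose[0] - est_pose[0], true_pose[1] - est_pose[1])
print(f"position error after 500 steps: {err:.2f} m")
```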
### What kinds of landmarks can be used?

* Lidar-detected fixed obstacles
* Vision-detected fixed obstacles
* How can one obstacle be distinguished from another?
  * Color, size, shape
* Note that in addition to recognizing a landmark, we need to know the distance to it (see the range-and-bearing sketch below)
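As a small illustration of why the distance (and bearing) to a recognized landmark matters, here is a single-landmark position fix. It assumes the landmark's map coordinates are known and that the robot's heading comes from some other source (e.g. a compass or odometry); none of this is prescribed by the reading, and the numbers are made up.

```python
import math

def fix_from_landmark(landmark_xy, robot_heading, measured_range, measured_bearing):
    """Estimate the robot's map position from one recognized landmark.

    landmark_xy      -- (x, y) of the landmark in the map frame (known a priori)
    robot_heading    -- robot heading in the map frame (radians, from compass/odometry)
    measured_range   -- distance to the landmark (e.g. from lidar or stereo)
    measured_bearing -- angle to the landmark in the robot frame (radians)
    """
    # Direction from the robot to the landmark, expressed in the map frame.
    angle = robot_heading + measured_bearing
    # Walk backwards from the landmark along that direction to find the robot.
    x = landmark_xy[0] - measured_range * math.cos(angle)
    y = landmark_xy[1] - measured_range * math.sin(angle)
    return (x, y)

# A landmark known to sit at (4, 3), seen 2 m away, 30 degrees to the robot's left.
print(fix_from_landmark((4.0, 3.0), robot_heading=0.0,
                        measured_range=2.0, measured_bearing=math.radians(30)))
```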
### Lidar detection

* Robot has a 'map' of fixed obstacles
* Robot compares that map with the apparent, transient map from the Lidar
* Calculates a probability distribution of where it might be on that map (a minimal scoring sketch follows below)
* This process is called AMCL - Adaptive Monte Carlo Localization
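A very stripped-down version of the "compare the scan to the map" idea, assuming the map is a small occupancy grid and the scan is a handful of measured ranges. The grid, poses, and scoring rule here are illustrative, not how the real AMCL code works.

```python
import math

# 0 = free, 1 = wall; one cell = 1 m. A tiny hand-made "map".
GRID = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]

def expected_range(x, y, angle, max_range=10.0, step=0.05):
    """Ray-cast from (x, y) along `angle` until a wall cell is hit."""
    d = 0.0
    while d < max_range:
        cx, cy = int(x + d * math.cos(angle)), int(y + d * math.sin(angle))
        if cx < 0 or cy < 0 or cy >= len(GRID) or cx >= len(GRID[0]) or GRID[cy][cx] == 1:
            return d
        d += step
    return max_range

def scan_likelihood(pose, scan, sigma=0.2):
    """How well does `scan` (a list of (bearing, range)) match the map at `pose`?"""
    x, y, theta = pose
    score = 1.0
    for bearing, measured in scan:
        predicted = expected_range(x, y, theta + bearing)
        # Gaussian-ish weight: a big mismatch contributes a near-zero factor.
        score *= math.exp(-((measured - predicted) ** 2) / (2 * sigma ** 2))
    return score

# A scan taken at (2.5, 1.5) facing +x; the true pose scores far higher than a wrong one.
scan = [(0.0, 1.5), (math.pi / 2, 2.5), (math.pi, 1.5), (-math.pi / 2, 0.5)]
print(scan_likelihood((2.5, 1.5, 0.0), scan))   # candidate near the truth
print(scan_likelihood((1.2, 3.2, 0.0), scan))   # candidate far from the truth
```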
### Visual Detection

* Robot has a collection of scenes it can recognize
  * A scene needs to meet certain characteristics for this to work
* Stored with each scene are the coordinates that correspond to the recognized scene
* CV is constantly analyzing what is seen by the camera
* As soon as it identifies a scene, it can use that to figure out where it is
* Note! It also has to figure out where it is relative to the scene (see the feature-matching sketch below)
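One plausible way to do the "recognize a stored scene" step is classic feature matching, e.g. with OpenCV's ORB features. This is only a sketch of the idea, not the pipeline the course software uses; the file names, stored poses, and match-count thresholds are all made up.

```python
import cv2

# Each stored scene comes with the map pose it was photographed from
# (hypothetical (x, y, theta) values in the map frame).
SCENES = {
    "lab_door.png":   (1.0, 4.0, 0.0),
    "whiteboard.png": (6.5, 2.0, 1.57),
}

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def recognize(camera_image_path, min_matches=40):
    """Return the stored pose of the best-matching scene, or None."""
    query = cv2.imread(camera_image_path, cv2.IMREAD_GRAYSCALE)
    _, query_desc = orb.detectAndCompute(query, None)
    best_name, best_count = None, 0
    for name, pose in SCENES.items():
        scene = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
        _, scene_desc = orb.detectAndCompute(scene, None)
        matches = matcher.match(query_desc, scene_desc)
        # Keep only reasonably good matches (small Hamming distance).
        good = [m for m in matches if m.distance < 40]
        if len(good) > best_count:
            best_name, best_count = name, len(good)
    if best_count < min_matches:
        return None          # nothing recognized confidently
    # NOTE: this gives where the *reference photo* was taken, not the robot;
    # the camera's offset from that viewpoint still has to be estimated.
    return SCENES[best_name]

print(recognize("current_camera_frame.png"))
```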
### Requirements for visual localization

* CV (computer vision) algorithms need to be able to identify the scene and differentiate it from other images
* A coordinate in 3D space is required
* The scene needs to stay put and not move
* Good examples: the facade of a building, a particular tree, a wall, etc.
* Bad examples: a parked car; a person
### Fiducial "SLAM"

* A special CV algorithm identifies a known pattern -- a fiducial
* Each fiducial has a unique ID and a known 3D pose in the room
* The algorithm determines the TF relationship between the camera and a fiducial
* Since the fiducial has a known 3D pose relative to the room, this gives a 6D pose estimate for the camera (see the transform-composition sketch below)
* This is then used to localize the robot relative to the map
* Need to be able to see the fiducial and identify it
  * Distance? Size of fiducial? Is it in frame? Is it occluded?
* Seeing more than one fiducial gives a better (less ambiguous) position fix
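Here is a sketch of the transform bookkeeping, assuming a detector (e.g. ArUco or AprilTag; these notes do not specify one) has already produced the fiducial's pose in the camera frame as a 4x4 homogeneous matrix, and that the map-frame pose of each fiducial ID is stored in a lookup table. The IDs and poses below are invented for illustration.

```python
import numpy as np

def make_transform(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Known, surveyed poses of fiducials in the map frame (hypothetical values).
T_MAP_FIDUCIAL = {
    7: make_transform(np.eye(3), [2.0, 5.0, 1.2]),
    8: make_transform(np.eye(3), [6.0, 1.0, 1.2]),
}

def camera_pose_in_map(fiducial_id, T_camera_fiducial):
    """Compose the detector output with the known fiducial pose.

    T_camera_fiducial -- fiducial pose expressed in the camera frame (from the detector)
    Returns the camera pose expressed in the map frame:
        T_map_camera = T_map_fiducial @ inv(T_camera_fiducial)
    """
    T_map_fiducial = T_MAP_FIDUCIAL[fiducial_id]
    return T_map_fiducial @ np.linalg.inv(T_camera_fiducial)

# Toy example: the detector says fiducial 7 is 1.5 m straight ahead of the camera.
T_cam_fid = make_transform(np.eye(3), [0.0, 0.0, 1.5])
print(camera_pose_in_map(7, T_cam_fid))
```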
## AMCL - Important algorithm for localization

* Adaptive Monte Carlo Localization
* A very sophisticated (and standard) mathematical technique
* Can be used with different kinds of sensors
## SLAM and AMCL

* 9: Building Maps
* 9a: Building Maps
### How AMCL works with Lidar

1. Requires a map of Lidar-visible obstacles, with a coordinate system, anchored in the real world
2. Given a stationary robot and a lidar scan, what does it see?
3. Look for a match on the map
4. Form a probability distribution of where the robot MIGHT be
5. Move the robot a little
6. Compute what the lidar would see given what is known about the motion
7. Update the probabilities (see the particle-filter sketch below)
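A compact sketch of that loop as a particle filter. It leans on a hypothetical `scan_likelihood(pose, scan)` weighting function (like the scoring sketch earlier) and made-up noise parameters; real AMCL adds adaptive sample sizes and far more careful motion and sensor models.

```python
import math
import random

def motion_update(pose, forward, turn, noise=0.02):
    """Predict where one particle goes, given odometry plus a bit of noise."""
    x, y, theta = pose
    f = forward + random.gauss(0.0, noise)
    t = turn + random.gauss(0.0, noise)
    return (x + f * math.cos(theta), y + f * math.sin(theta), theta + t)

def mcl_step(particles, odom, scan, scan_likelihood):
    """One predict / weight / resample cycle of Monte Carlo localization."""
    forward, turn = odom
    # 1. Move every particle according to the reported motion (steps 5-6 above).
    moved = [motion_update(p, forward, turn) for p in particles]
    # 2. Weight each particle by how well the scan matches the map there (steps 2-4, 7).
    weights = [scan_likelihood(p, scan) for p in moved]
    if sum(weights) == 0.0:
        weights = [1.0] * len(moved)   # scan matched nowhere: keep a uniform spread
    # 3. Resample: likely particles get duplicated, unlikely ones die out.
    return random.choices(moved, weights=weights, k=len(moved))

# Usage sketch: start with particles spread over the map, then iterate.
particles = [(random.uniform(0, 5), random.uniform(0, 5),
              random.uniform(-math.pi, math.pi)) for _ in range(200)]
# for odom, scan in sensor_stream:     # hypothetical stream of (odometry, lidar scan)
#     particles = mcl_step(particles, odom, scan, scan_likelihood)
```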
### SLAM

* What if there is no map yet?
* Simultaneous Localization And Mapping
* Create a tentative map based on the view from the LIDAR
* Move the robot a little and update the map
* Travel through the relevant region (using e.g. Teleop)
* Use the map to localize, or, where there is no map yet, extend the map (see the occupancy-grid sketch below)
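A toy illustration of the "extend the map" half of SLAM: each lidar beam marks the cells it passes through as free and the cell where it ends as occupied. Real mapping packages use log-odds updates and scan matching; this sketch assumes a perfectly known robot pose just to show the grid update itself.

```python
import math

SIZE = 20                     # 20 x 20 cells, 1 m per cell
UNKNOWN, FREE, OCCUPIED = "?", ".", "#"
grid = [[UNKNOWN] * SIZE for _ in range(SIZE)]

def update_map(robot_x, robot_y, beam_angle, beam_range, step=0.1):
    """Trace one lidar beam: free space along the ray, an obstacle at the end."""
    d = 0.0
    while d < beam_range:
        cx = int(robot_x + d * math.cos(beam_angle))
        cy = int(robot_y + d * math.sin(beam_angle))
        if 0 <= cx < SIZE and 0 <= cy < SIZE:
            grid[cy][cx] = FREE
        d += step
    end_x = int(robot_x + beam_range * math.cos(beam_angle))
    end_y = int(robot_y + beam_range * math.sin(beam_angle))
    if 0 <= end_x < SIZE and 0 <= end_y < SIZE:
        grid[end_y][end_x] = OCCUPIED

# One full (fake) 360-degree scan taken from the middle of the room: walls ~6 m away.
for i in range(72):
    update_map(10.0, 10.0, math.radians(i * 5), 6.0)

print("\n".join("".join(row) for row in grid))
```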
### GOAT - Go To Any Thing
* https://theophilegervet.github.io/projects/goat/
* https://github.com/facebookresearch/home-robot
Thank you. Questions?