How do Robots Localize?
(Thu Oct 10, lect 12)
Using landmarks to determine location
logistics
New Lab Schedule
Some labs have important activities; not all, but some
Mandatory for all students
Thursdays from 3:55pm to 5:55pm
If you have a serious conflict, you can leave a little earlier
But you need to be there and you need to talk to me before leaving
Thursday morning 10-12 is still a lab with TA and/or Pito staffing
Localization
What exactly is the problem with knowing where you are?
Remember all of this needs an agreed upon coordinate system!
How would you do it with your eyes closed?
Recall: odometry is like dead reckoning
Lots of noise in the signal; accuracy varies; errors build up
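To see why dead-reckoned errors build up, here is a minimal sketch (the speeds, noise level, and duration are made-up values for illustration): a robot integrates noisy wheel-odometry velocity readings, and the position estimate drifts away from the ground truth over time.

```python
import random

# Illustrative sketch: dead reckoning by integrating noisy odometry.
# The robot drives straight at 0.2 m/s, but each velocity reading carries
# a small random error, so the integrated estimate drifts from the truth.
random.seed(0)

dt = 0.1          # seconds per odometry update (assumed rate)
true_v = 0.2      # m/s, actual forward speed
true_x = 0.0      # ground-truth position
est_x = 0.0       # dead-reckoned estimate

errors = []
for step in range(1000):                         # 100 seconds of driving
    true_x += true_v * dt
    measured_v = true_v + random.gauss(0, 0.02)  # noisy encoder reading
    est_x += measured_v * dt                     # the noise is integrated too
    errors.append(abs(est_x - true_x))

# The error is unbounded: it tends to grow as the noise accumulates,
# which is why odometry alone cannot localize a robot for long.
print(f"error after 10 s:  {errors[99]:.3f} m")
print(f"error after 100 s: {errors[999]:.3f} m")
```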
What kinds of landmarks can be used?
Lidar detected fixed obstacles
Vision detected fixed obstacles
How can one obstacle be distinguished from another?
Color, Size, Shape
Note that in addition to recognizing a landmark, we also need to know the distance to it
Lidar detection
Robot has a ‘map’ of fixed obstacles
Robot compares that map with the apparent, transient, map from Lidar
Calculates a probability distribution of where it might be on that map
Process is called AMCL - Adaptive Monte Carlo Localization
Visual Detection
Robot has a collection of scenes it can recognize.
The scene needs to meet certain characteristics for this to work
With each scene is the coordinates that correspond to the recognized scene
CV is constantly analyzing what is seen by the camera
As soon as it identifies an image, it can use that to figure out where it is
Note! It has to also figure out where it is relative to the image
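The last step above can be sketched in a few lines. This is a simplified, hypothetical example (the function name, the landmark coordinates, and the assumption that heading is known are all mine): given the stored map coordinates of a recognized scene plus the measured range and bearing to it, the robot can back out its own position.

```python
import math

# Illustrative sketch: once CV recognizes a scene whose map coordinates are
# stored with it, the robot can recover its own map position. Assumes the
# robot's heading in the map frame is known (e.g., from the match itself).

def robot_position(landmark_xy, rng, bearing, heading):
    """Return the robot's (x, y) in the map frame.

    landmark_xy: known map coordinates of the recognized scene
    rng:         measured distance to the landmark (meters)
    bearing:     angle to the landmark in the robot frame (radians)
    heading:     robot yaw in the map frame (radians)
    """
    lx, ly = landmark_xy
    # The landmark lies at angle (heading + bearing) from the robot,
    # so walk backwards from the landmark to find the robot.
    rx = lx - rng * math.cos(heading + bearing)
    ry = ly - rng * math.sin(heading + bearing)
    return rx, ry

# A landmark at (5, 3); the robot faces east (0 rad) and sees it 2 m ahead.
print(robot_position((5.0, 3.0), 2.0, 0.0, 0.0))  # → (3.0, 3.0)
```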
Requirements for visual localization
CV (computer vision) algorithms need to be able to identify the scene and differentiate it from other images
A coordinate in 3D space is required
It needs to stay put and not move
Examples: facade of a building, a particular tree, a wall, etc.
Bad examples: a parked car; a person
What is needed?
Goal: somehow we need to determine the transform (TF) between the odom frame and the map frame
Can you see how that single transform will do it?
AMCL - Important algorithm for localization
Adaptive Monte Carlo Localization
Very sophisticated (and standard) mathematical technique
Can be used with different kinds of sensors.
How AMCL works with Lidar
Requires a map of Lidar visible obstacles, with a coordinate system, and anchored in the real world
Given a stationary robot, and a lidar scan, what does it see?
Look for a match on the map
Form a probability distribution of where the robot MIGHT be
Move the robot a little.
Compute what the lidar would see given what is known about the motion
Update the probabilities
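The loop above can be sketched as a tiny Monte Carlo localizer. This is a 1-D toy, not AMCL itself (no adaptive particle count, a single-beam "lidar", made-up noise values), but it shows the same predict / weight / resample cycle:

```python
import math
import random

# Illustrative 1-D Monte Carlo localization: particles guess where the
# robot is along a 10 m corridor; a "lidar" measures distance to the wall
# at x = 10. Loop: move, sense, weight, resample.
random.seed(1)

WALL = 10.0
NOISE = 0.3                           # sensor noise std-dev (assumed)

def measure(x):                       # distance the lidar would report at x
    return WALL - x + random.gauss(0, NOISE)

def likelihood(expected, observed):   # unnormalized Gaussian weight
    return math.exp(-((expected - observed) ** 2) / (2 * NOISE ** 2))

particles = [random.uniform(0, WALL) for _ in range(500)]
true_x = 2.0

for _ in range(20):                   # repeat: move a little, sense, update
    true_x += 0.2
    z = measure(true_x)
    # 1. Motion update: shift every particle, with some motion noise.
    particles = [p + 0.2 + random.gauss(0, 0.05) for p in particles]
    # 2. Weight each particle by how well it explains the scan.
    weights = [likelihood(WALL - p, z) for p in particles]
    # 3. Resample: particles that match the map survive.
    particles = random.choices(particles, weights=weights, k=len(particles))

estimate = sum(particles) / len(particles)
print(f"true x = {true_x:.2f}, estimated x = {estimate:.2f}")
```

The initial particles are spread over the whole corridor ("where MIGHT the robot be?"); each sense/resample step concentrates them near positions consistent with the scan.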
SLAM
What if there is no map yet?
Simultaneous localization and mapping
Create a provisional map based on the view from the Lidar
Move the robot a little and update the map
Travel through the relevant region (using e.g. Teleop)
Use the map to localize, or where there is no map yet, extend the map.
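The mapping half of SLAM is often an occupancy grid. Here is a minimal 1-D sketch (the log-odds increments and grid size are assumed values, and real SLAM would jointly estimate the robot pose too): cells a lidar beam passes through gather evidence of being free, and the cell where the beam ends gathers evidence of being occupied.

```python
import math

# Illustrative occupancy-grid update in log-odds form.
# 0 log-odds = unknown (p = 0.5); positive = likely occupied.
L_FREE, L_OCC = -0.4, 0.85            # assumed log-odds increments

def update_grid(grid, robot_cell, hit_cell):
    """Update a 1-D log-odds grid from one beam along the +x axis."""
    for c in range(robot_cell, hit_cell):
        grid[c] += L_FREE             # beam passed through: evidence of free
    grid[hit_cell] += L_OCC           # beam ended here: evidence of obstacle
    return grid

def prob(log_odds):                   # convert log-odds back to probability
    return 1 - 1 / (1 + math.exp(log_odds))

grid = [0.0] * 20
# Two scans from slightly different robot cells, both hitting cell 12.
update_grid(grid, 0, 12)
update_grid(grid, 2, 12)

print(f"cell 5 (crossed twice):  p = {prob(grid[5]):.2f}")   # likely free
print(f"cell 12 (hit twice):     p = {prob(grid[12]):.2f}")  # likely occupied
print(f"cell 19 (never seen):    p = {prob(grid[19]):.2f}")  # still unknown
```

Repeated scans from different poses are what makes the map confident, which is why driving the robot through the whole region (e.g., with Teleop) matters.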
Thank you. Questions?