Title: Collaborative Perception and Planning for Multi-View and Multi-Robot Systems

  

Date: Monday, April 24th, 2023

Time: 11:00 AM – 12:30 PM ET

Location: CODA C1215, or via Zoom (gatech.zoom.us)


 

Nathan Glaser

Robotics Ph.D. Student 

School of Electrical and Computer Engineering

Georgia Institute of Technology 

  

Committee

Dr. Zsolt Kira (Advisor) – School of Interactive Computing, Georgia Institute of Technology 

Dr. James Hays – School of Interactive Computing, Georgia Institute of Technology 

Dr. Patricio Vela – School of Electrical and Computer Engineering, Georgia Institute of Technology 

Dr. Pratap Tokekar – Department of Computer Science, University of Maryland

Dr. Milutin Pajovic – Senior Research Scientist, Analog Devices

  

Abstract

The field of robotics has historically focused on egocentric, single-agent systems. However, robots in such systems are susceptible to single points of failure: a single sensor failure or adverse environmental condition can render an isolated robot "blind". Robots in multi-agent systems, on the other hand, can overcome such potentially dangerous blind spots by communicating and collaborating with their peers. In this proposal, we address these communication-critical settings with Collaborative Perception and Planning for Multi-View and Multi-Robot Systems.

 

First, we develop several learned communication and spatial-registration schemes for collaboration, which allow moving agents to efficiently communicate and align their visual observations. We demonstrate improved egocentric semantic segmentation accuracy for a swarm of obstruction-prone aerial quadrotors. Second, we develop a distributed multi-agent SLAM algorithm that efficiently maps a shared scene, even when robots are limited to brief communication during rendezvous. We additionally develop a distributed multi-agent trajectory-exchange method that validates and scores trajectories for self-driving vehicles, reducing collision rates compared to single-agent and multi-agent baselines. Third, we propose to apply these collaboration techniques to multi-view perception for robotic agriculture.