Autoware Tutorial


Autoware, open source, autonomous driving, ROS


Autoware is the largest open-source autonomous driving community. The project has found widespread adoption: Autoware is used by 100+ companies, runs on 30+ vehicles, is deployed in 20+ countries, and is used for education in 5 countries. Autoware.AI is the original Autoware project, built on ROS 1 and launched as a research and development platform for researchers, developers, and students interested in autonomous driving technology. Autoware.Auto, the next generation of Autoware based on ROS 2, is now launching. Autoware.Auto is managed by an open-source community manager, applies best-in-class software engineering practices, and is based on a redesigned architecture.

This tutorial will start with a holistic overview of all Autoware projects. A set of invited talks will then present the algorithms and modules available in Autoware. Part of the workshop will be dedicated to a hands-on tutorial, including how to bring up Autoware on a computer.

The goal of this tutorial is to introduce Autoware to the IV community and to encourage the publication of academic results as open source, in addition to the papers presented at the conference.


  • 9:00-9:45 Welcome and Introduction to Autoware
    Introduction to Autoware.AI, Shinpei Kato (Tier IV)
    Introduction to Autoware.Auto, Esteve Fernandez (Apex.AI)
    Introduction to ROS (Robot Operating System), Brian Gerkey (Open Robotics)
  • 9:45-10:30 Algorithms in Autoware 1
    3D perception in Autoware.Auto, Christopher Ho (Apex.AI)
    Computer Vision, Jacob Lambert (Nagoya University & Perception Engine)
    Coupling Autoware and Simcenter Prescan for virtual testing and verification of automated driving systems, Frank Rijks (Presales Solution Consultant, Siemens PLM Software)
  • 10:30-11:00 Coffee Break and Networking
  • 11:00-11:30 Algorithms in Autoware 2
    Model-based Systems Engineering, Bernd Gassmann and Frederik Pasch (Intel Germany)
    Responsibility-Sensitive Safety (RSS), Bernd Gassmann (Intel Germany)
  • 11:30-12:30 Building Applications with Autoware 1
    Maps for Autonomous Parking, Angelo Mastroberardino, Punnu Phairatt, Brian Holt (Parkopedia)
    Deploying Autoware on real-world vehicles, Efimia Panagiotaki (Software Engineer) and Antonis Skardasis (Lead Software Engineer) (StreetDrone)
    Adopting Manycore as Autoware Accelerator, Stephane Strahm, Sr. Product Marketing Mgr (Kalray)
  • 12:30-13:15 Lunch and Networking
  • 13:15-14:00 Building Applications with Autoware 2
    Lessons learned: Integration of Autoware at our demonstrator, Markus Schratter, Konstantin Lassnig, Daniel Watzenig (Virtual Vehicle Research Center)
    Evaluation of LiDAR-Based Localization in an Autoware-Enabled Autonomous Vehicle, G. Ajay Kumar (DGIST), Jin-Hee Lee (DGIST)
    3D Mapping for Autoware, Simon Thompson (Tier IV)
  • 14:00-15:00 Simulation and Autoware 1
    Gazebo, Brian Gerkey
    CARLA: Open-source Simulator for Autonomous Driving Research, Nestor Subiron, Principal Engineer of CARLA at the Computer Vision Center (CVC)
    Integration of CARLA with Autoware, Frederik Pasch (Intel Germany)
  • 15:00-15:30 Coffee Break and Opportunity for Lightning Talks (15:15-15:30)
  • 15:30-16:00 Tools and Autoware
    Preventing Execution Erosion at the Intelligent Edge, Maximilian Odendahl and Benjamin Goldschmidt (Silexica)
  • 16:00-16:30 How to get started with Autoware
    Bring up Autoware.Auto from scratch on a computer, Esteve Fernandez (Apex.AI)
    Bring up Autoware in a docker container in the cloud using DeepSky, Ignacio Alvarez (Intel)
  • 16:30-17:00 Panel: Autoware and its impact on autonomous driving
    Shinpei Kato (Tier IV), Dejan Pangercic / Esteve Fernandez (Apex.AI), Brian Gerkey (Open Robotics), Allison Thackston (TRI-AD), Maria Elli (Intel), Brian Holt (Parkopedia); moderated by Jeff Ota (Intel)
  • 17:00 Feedback and Fin

Please find more details here: https://www.autoware.org/agenda

Extended Object Tracking and Sensor Fusion:
Theory and Applications Tutorial


Sensor fusion, multi-object-tracking, extended objects, data association, clustering, radar, lidar, vision, autonomous vehicles, environment perception


Environment perception for autonomous vehicles involves the important tasks of detecting, classifying, and tracking all objects of interest in the vicinity of the vehicle, such as vehicles, pedestrians, and bicyclists. Solving these tasks accurately and robustly is paramount for the safe operation of the vehicle. In this context, this tutorial provides an overview of sensor fusion and multi-object tracking techniques. The focus lies on high-resolution sensors (e.g., lidar, radar, and camera devices) that give rise to multiple detections per object. In the first part of the tutorial, the modeling of different object and measurement types is discussed and suitable (single) object tracking approaches are presented. The second part of the tutorial introduces recent data association and clustering methods for tracking multiple objects. Finally, the last part of the tutorial demonstrates applications and discusses pitfalls of these approaches in real-world scenarios.
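To give a flavor of one step the tutorial covers, the sketch below (illustrative only, not material from the tutorial itself) shows the simplest possible way to group multiple detections per object via greedy distance-based clustering and to estimate each object's centroid and extent; the `gate` threshold and the axis-aligned extent model are assumptions chosen for brevity, not methods endorsed by the presenters.

```python
def cluster_detections(points, gate=1.5):
    """Greedy single-linkage clustering: a 2D point joins the first
    cluster containing any member within `gate` metres of it;
    otherwise it starts a new cluster."""
    clusters = []
    for p in points:
        for c in clusters:
            if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= gate ** 2
                   for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters


def estimate_object(cluster):
    """Centroid and axis-aligned extent: a crude extended-object state
    estimated from all detections assigned to one object."""
    xs = [p[0] for p in cluster]
    ys = [p[1] for p in cluster]
    centroid = (sum(xs) / len(xs), sum(ys) / len(ys))
    extent = (max(xs) - min(xs), max(ys) - min(ys))
    return centroid, extent


# Example: two objects, each producing two lidar-like detections.
detections = [(0.0, 0.0), (0.5, 0.2), (10.0, 10.0), (10.3, 9.8)]
clusters = cluster_detections(detections)
objects = [estimate_object(c) for c in clusters]
```

In a real tracker this clustering step would feed a recursive filter that maintains each object's kinematic state and shape over time; the tutorial discusses far more principled models and association schemes than this toy example.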

Tentative Agenda

  • 9:00-9:30 Introduction
  • 9:30-10:00 Motivation
  • 10:00-10:30 Coffee break
  • 10:30-11:30 Models for tracking extended objects
  • 11:30-13:00 Lunch break
  • 13:00-14:00 Tracking multiple extended objects
  • 14:00-15:00 Applications
  • 15:00-15:30 Discussion
Please find more details here: https://www.uni-goettingen.de/en/606544.html