Learning Navigation Priors based on Adaptive Data Aggregation

Publication: Thesis - Master's thesis

Abstract

There are six levels of autonomous driving, where level 0 means the driver steers the car with no automation and level 5 denotes full automation, in which the vehicle can drive without any human driver [Int18]. Today, car manufacturers are above level 2 and close to level 3, where the vehicle drives on its own but the driver is required to take over control when the vehicle requests it [QLL22]. To reach the next level, further developments in machine learning are essential. To enable autonomous driving, the vehicle must know where it is during the driving process and perceive its immediate surroundings. Therefore, algorithms for autonomous driving need human-like scene understanding, obtained by analyzing driving data from various sensors, such as images from an RGB camera over time. The collected data and the information derived from it support autonomous vehicles in recognizing and understanding their surroundings. To further improve autonomous driving, access to accurate information about obstacles and their geolocations is necessary. Autonomous vehicles are able to share and exchange the collected information with other autonomous vehicles. This means, for example, that obstacles and their geographical positions may already be known to an autonomous vehicle even though its own sensors have not yet detected them. However, stationary infrastructure such as traffic signs and traffic lights remains essential for safety and proper behavior in road traffic, since autonomous vehicles and human-driven cars have to share the roads as long as humans still drive cars on their own. Self-localization is required for autonomous driving, even though this localization can be inaccurate due to a disturbed GPS signal. To ensure the localization of the vehicle as well as of other objects, the driving environment must be perceived by a sensor such as a camera.
However, environmental perception from image sequences is hampered by certain disturbing factors such as recording artifacts, traffic obstacles encountered while driving, and moving background. To address these challenges and to support robust visualization, a combined approach based on convolutional neural networks is introduced. Within this thesis, the performance of traffic-sign localization and self-localization is the result of three major factors: the data preprocessing and aggregation approach, the object detection and classification part, and the localization module. While the first prepares preprocessed data to train the models, the detection and classification modules extract the road-related information. The models are trained to identify cars, bicycles, trucks, persons, traffic lights, and traffic signs along the traffic area. By using the temporal data, the results of the previous modules are improved; for that, optical flow and object tracking are used. The localization module performs the final localization of the objects within the street map.
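The three-stage structure described above (preprocessing and aggregation, detection and classification, localization) can be sketched as a minimal pipeline skeleton. This is an illustrative assumption, not the thesis implementation: all class names, method names, and placeholder logic below are hypothetical, and the real modules would be backed by convolutional neural networks, optical flow, and object tracking rather than the stubs shown here.

```python
from dataclasses import dataclass

# Hypothetical sketch of the three-stage pipeline: preprocessing/aggregation
# -> detection & classification -> localization. Names are illustrative only.

@dataclass
class Detection:
    label: str        # e.g. "traffic_sign", "car"
    bbox: tuple       # (x, y, w, h) in image coordinates
    frame_index: int  # temporal position in the image sequence

class Preprocessor:
    """Aggregates and prepares raw frames for model training/inference."""
    def run(self, frames):
        # Placeholder: real preprocessing would normalize frames and
        # filter out recording artifacts.
        return [f for f in frames if f is not None]

class Detector:
    """Detects and classifies road-related objects per frame."""
    CLASSES = {"car", "bicycle", "truck", "person",
               "traffic_light", "traffic_sign"}
    def run(self, frames):
        # Placeholder: a trained CNN would produce these detections.
        return [Detection("traffic_sign", (10, 20, 32, 32), i)
                for i, _ in enumerate(frames)]

class Localizer:
    """Maps tracked detections to positions on the street map."""
    def run(self, detections):
        # Placeholder: real localization would fuse GPS, object tracking,
        # and optical flow over the temporal sequence.
        return {d.frame_index: ("lat", "lon") for d in detections}

def pipeline(frames):
    preprocessed = Preprocessor().run(frames)
    detections = Detector().run(preprocessed)
    return Localizer().run(detections)
```

The composition mirrors the dataflow in the abstract: each module consumes the previous module's output, and the temporal index carried by each detection is what would allow optical flow and tracking to refine results across frames.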
Original language: English
Qualification: Master of Science
Awarding institution
  • TU Wien
Supervisors / Advisors
  • Sablatnig, Robert, Supervisor, external person
  • Keglevic, Manuel, Supervisor, external person
  • Steininger, Daniel, Supervisor
Date of approval: 14 June 2023
DOIs
Publication status: Published - 14 June 2023

Research Field

  • Assistive and Autonomous Systems
