
Extracting Training Data for Machine Learning Road Segmentation from Pedestrian Perspective

  • We introduce an algorithm that performs road-background segmentation on video material recorded from a pedestrian perspective using machine learning methods. Since no annotated data sets provide training data for machine learning, we develop a method that automatically extracts road and background blocks, respectively, from the first frames of a sequence by analyzing weights based on the mean gray value, the mean saturation, and the y coordinate of the block's middle pixel. For each block labeled either road or background, several feature vectors are computed by considering smaller overlapping blocks within the block. Together with the x coordinate of a block's middle pixel, the mean gray value, mean saturation, and y coordinate form the block's feature vector. All feature vectors and their labels are passed to a machine learning method, and the resulting model is applied to the remaining frames of the video sequence in order to separate road and background. In tests, the accuracy of the training data passed to the machine learning methods was 99.84 %. For the complete algorithm, we reached hit rates of 99.41 % when using a support vector machine and 99.87 % when using a neural network.

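The abstract only outlines the per-block feature computation; the snippet below is an illustrative sketch, not the authors' code. The sub-block size, the 50 % overlap, and the gray/saturation formulas are assumptions; the abstract states only that the mean gray value, mean saturation, and the x and y coordinates of a block's middle pixel make up a feature vector.

```python
import numpy as np

def block_features(frame_rgb, block, sub_size=8):
    """Sketch of the per-block feature extraction described in the abstract.
    For one labeled block, slide smaller overlapping sub-blocks over it and
    return one feature vector per sub-block:
    (mean gray value, mean saturation, x and y of the sub-block's middle pixel).
    block = (y0, x0, height, width); sub_size and stride are assumed values."""
    y0, x0, h, w = block
    rgb = frame_rgb.astype(np.float32) / 255.0
    gray = rgb.mean(axis=2)                                   # simple mean-of-channels gray value (assumption)
    cmax, cmin = rgb.max(axis=2), rgb.min(axis=2)
    sat = np.where(cmax > 0, (cmax - cmin) / np.maximum(cmax, 1e-6), 0.0)  # HSV-style saturation

    stride = sub_size // 2                                    # assumed 50 % overlap of sub-blocks
    feats = []
    for yy in range(y0, y0 + h - sub_size + 1, stride):
        for xx in range(x0, x0 + w - sub_size + 1, stride):
            g = gray[yy:yy + sub_size, xx:xx + sub_size].mean()
            s = sat[yy:yy + sub_size, xx:xx + sub_size].mean()
            cy, cx = yy + sub_size // 2, xx + sub_size // 2   # middle pixel of the sub-block
            feats.append([g, s, cx, cy])
    return np.array(feats, dtype=np.float32)

if __name__ == "__main__":
    frame = (np.random.rand(480, 640, 3) * 255).astype(np.uint8)  # placeholder frame
    feats = block_features(frame, (300, 0, 64, 64))               # one hypothetical "road" block
    print(feats.shape)
```

Each feature vector would then be labeled with its block's road/background label, and the stacked vectors used to train, for example, a support vector machine or a small neural network, as in the paper's evaluation.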
Metadata
Author: Judith Jakob, József Tick
DOI: https://doi.org/10.1109/INES49302.2020.9147183
ISBN: 978-1-7281-1059-2
Parent Title (English): INES 2020 : 24th International Conference on Intelligent Engineering Systems, July 8-10, 2020, Reykjavík, Iceland
Document Type: Conference Proceeding
Language: English
Year of Completion: 2020
Release Date: 2020/09/07
Page Number: 6
First Page: 49
Last Page: 54
Open Access Status: Closed Access
Licence (German): Urheberrechtlich geschützt (protected by copyright)