Lane Level Localization on a 3D Map

Overview

One of the biggest challenges of automated driving is accurately determining the location of a vehicle relative to the roadway. Even when equipped with GPS, in-vehicle sensors (cameras), and a highly accurate 3D map, an automated driving system must remain reliable under harsh conditions such as GPS denial or imprecision, in-vehicle sensor malfunction, heavy occlusion, poor lighting, and inclement weather. Lane level localization on a 3D map allows the vehicle to function reliably in such conditions.

To advance the technology and enable safer and more reliable automated driving, we present a grand challenge to localize a driving vehicle on a 3D map using the vehicle’s GPS data and in-vehicle camera sensor data in real time.

Task Description

A monocular video was acquired from a forward-facing camera mounted on top of a vehicle. The video was recorded while the vehicle was driving on a real access-controlled road (highway) in reasonable traffic conditions. A GPS position is given for each video frame. A 3D map is provided for the section of road driven by the vehicle during the acquisition.

The problem is to localize the vehicle in the correct lane and at the correct longitudinal position (a high-resolution “mile marker”) on the 3D map in real time. A possible solution is to detect objects in the video frames and match them to objects in the 3D map to derive the lane-level location.
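
For example, once lane boundaries detected in a frame are expressed as lateral offsets from the vehicle, they could be compared against the lane boundaries the 3D map places around the GPS prior. The following is only a rough illustrative sketch of that matching idea; the data structures are assumptions, not part of the challenge kit:

    # Illustrative sketch only: pick the map lane whose boundary offsets best
    # match the boundaries detected by the camera. All names and structures
    # here are assumptions, not part of the provided data format.
    def pick_lane(detected_offsets, map_lane_offsets):
        """detected_offsets: lateral offsets (meters, vehicle frame) of lane
        boundaries found in the image; map_lane_offsets: dict mapping lane id
        to the offsets of that lane's boundaries near the GPS prior."""
        best_lane, best_cost = None, float("inf")
        for lane_id, offsets in map_lane_offsets.items():
            # Sum, over detected boundaries, the gap to the nearest map boundary.
            cost = sum(min(abs(d - m) for m in offsets) for d in detected_offsets)
            if cost < best_cost:
                best_lane, best_cost = lane_id, cost
        return best_lane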

Dataset Description

We collected the above input data on a highway in San Francisco with the following properties:

  • Reasonable traffic
  • Multiple-lane highway
  • Reasonable weather conditions
  • The road markings are in good condition
  • Data was collected over 20 km. We will provide 10 km of the data to participants to develop their algorithms, and the remainder will be used for evaluation. Ground truth data of camera locations will be provided for the training set.
  • The car makes reasonably frequent lane changes during the collection.

Input

We will offer a training dataset to each team to prototype and test their methodology. It contains:

  1. Images: these images are acquired with a commercial webcam mounted on top of a car and have the following properties:
    • 10 Hz frame rate
    • RGB color, 800 x 600 resolution
  2. GPS data: a set of consumer-phone-grade GPS points with timestamps synchronized to the image timestamps
  3. 3D map for the driven road segment including:
    • Road and lane boundaries (including the boundary type e.g., road edge, solid marking, dashed marking)
    • Marking color (white or yellow)
    • Elevated objects in voxels near the roadway
    • Traffic sign location and text content
  4. Camera calibration parameters

Figure Example. A visualization of the training dataset: green lines represent lane markings and road boundaries, yellow points represent occupancy grid corners, and red polygons represent sign corners.

Details are included in the README of the data to be downloaded.
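
Assuming the file layout described in the Submission Format section below (JSON map segments, GPS.csv, ImageryTimestamp.csv), the training data could be read along the lines of the sketch below; the exact CSV column names are given in the README and are not assumed here:

    # Minimal loading sketch. File naming follows the submission section below;
    # CSV columns are whatever the README specifies (read generically here).
    import csv, glob, json, os

    def load_inputs(map_dir, imagery_dir, gps_csv, ts_csv):
        map_segments = [json.load(open(p))
                        for p in sorted(glob.glob(os.path.join(map_dir, "*.json")))]
        image_paths = sorted(glob.glob(os.path.join(imagery_dir, "*")))
        with open(gps_csv) as f:
            gps_points = list(csv.DictReader(f))      # one row per GPS fix
        with open(ts_csv) as f:
            frame_times = list(csv.DictReader(f))     # one row per image frame
        return map_segments, image_paths, gps_points, frame_times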

Contact Andi Zang for data download and submission information.

Submission Format

Each participant is expected to submit:

  1. A single .zip file that contains a single executable file or script (the main function) and all dependencies/compiled files (e.g., a .pyc file if you do not want to distribute your scripts). Please include a readme.txt file with any special instructions on how to compile the submitted code if you submit your source code/scripts. Code submission is mandatory to ensure the originality of the submitted work.
  2. The single executable file or script should be named “runme.*”, where the suffix depends on your language (e.g., runme.exe or runme.py). Programming in Python or MATLAB is highly recommended. “runme.*” accepts six command-line parameters (a minimal skeleton is sketched after this list). The usage is as follows:
    runme.* <3d map dir> <imagery dir> <camera.config file> <GPS.csv file> <ImageryTimestamp.csv file> <output.csv file>

    • <3d map dir>: This folder contains all 3D map segments as JSON files*.
    • <imagery dir>: This folder contains all images.
    • <camera.config file>: Single camera configuration file.
    • <GPS.csv file>: Single csv file of phone-grade GPS points.
    • <ImageryTimestamp.csv file>: Single csv file containing all image timestamps.
    • <output.csv>: Program result csv file with 3 columns: image id, latitude, and longitude.

    *The evaluation dataset will use the same file name format for each file as the training data.

  3. A technical paper describing the algorithm (pdf).
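
As a rough illustration of the expected entry point, a Python skeleton might wire the six command-line arguments to the required three-column output as follows; the localization itself is left as a placeholder and everything else is only a sketch:

    # runme.py - skeleton of the expected entry point, not a working localizer.
    # It only parses the six required arguments and writes the three-column
    # output.csv (image id, latitude, longitude).
    import csv
    import sys

    def main(map_dir, imagery_dir, camera_config, gps_csv, ts_csv, output_csv):
        results = []  # to be filled with (image_id, latitude, longitude) per frame
        # ... load the inputs and run the localization algorithm here ...
        with open(output_csv, "w", newline="") as f:
            writer = csv.writer(f)
            for image_id, lat, lon in results:
                writer.writerow([image_id, lat, lon])

    if __name__ == "__main__":
        if len(sys.argv) != 7:
            sys.exit("usage: runme.py <3d map dir> <imagery dir> <camera.config> "
                     "<GPS.csv> <ImageryTimestamp.csv> <output.csv>")
        main(*sys.argv[1:])
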
Evaluation Metric

We will use an evaluation dataset unknown to participants (but with the same file name format as the training dataset) and will run the submitted program on this dataset to generate the metrics below:

  • Latitudinal/lane accuracy*
  • Longitudinal accuracy*

Figure Accuracy. At timestamp t, the latitudinal/lane accuracy is defined as the distance from the predicted point to the travel direction vector (anchored at the ground truth point); the longitudinal accuracy is defined as the distance from the predicted point to the ground truth point along the road center line.
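
In other words, with the predicted and ground-truth positions expressed in a local metric frame (e.g., meters east/north of a reference point) and the travel direction taken as a unit vector, the two error terms can be computed as below; treating the center line as locally straight is an approximation made only for this sketch:

    # Sketch of the two accuracy terms, assuming positions are already in a
    # local metric frame (meters) and travel_dir is a 2D direction vector; the
    # road center line is approximated as locally straight.
    import math

    def localization_errors(pred_xy, gt_xy, travel_dir):
        norm = math.hypot(travel_dir[0], travel_dir[1])
        dx, dy = travel_dir[0] / norm, travel_dir[1] / norm
        ox, oy = pred_xy[0] - gt_xy[0], pred_xy[1] - gt_xy[1]
        longitudinal = abs(ox * dx + oy * dy)   # along the travel direction
        lateral = abs(ox * dy - oy * dx)        # perpendicular (lane) component
        return lateral, longitudinal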

Important Dates

  • Dataset available for download: April 15, 2017
  • Results submission: June 10, 2017
  • Objective evaluation (on testing set): June 11 - June 25, 2017
  • Evaluation results announced: June 30, 2017
  • Paper submission deadline (please follow the instructions on the main conference website): July 14, 2017

Paper Submission

Please follow the guidelines of the ACM Multimedia 2017 Grand Challenge for paper submission.

Contact
  • Xin Chen, HERE North America, Chicago, IL USA
  • Andi Zang, Northwestern University, Evanston, IL USA