Safe Operation of Automated Vehicles at Intersections

Research Team:

Professor Pravin Varaiya (EECS)
Dr. Alexander Kurzhanskiy (PATH)
Dr. Aditya Medury (SafeTREC)
Mengqiao Yu (CEE)
Dr. Offer Grembek (SafeTREC)


Funding Organization:

University of California Center on Economic Competitiveness in Transportation


Study Description:

Intersections present a very demanding environment for all parties involved. Challenges arise from complex vehicle trajectories; the absence of lane markings to guide vehicles; split phases that make it difficult to determine who has the right of way; vehicle approaches hidden from view; illegal movements; and simultaneous interactions among pedestrians, bicycles, and vehicles. Unsurprisingly, most demonstrations of automated vehicles (AVs) take place on freeways; but the full potential of automated vehicles – personalized transit, driverless taxis, delivery vehicles – can only be realized when AVs can sense the intersection environment well enough to maneuver through intersections safely and efficiently. As intersection incidents involving Google [1], Uber [2], and Tesla [3] AVs make evident, their performance leaves room for improvement.

AVs are equipped with an array of sensors (e.g., video cameras, radars, LiDARs, GPS) to interpret and suitably engage with their surroundings. Advanced algorithms use the data streams from these sensors to guide AVs through a wide range of traffic and weather conditions. There are situations, however, in which additional information about the upcoming traffic environment would better inform the vehicle's built-in tracking and navigation algorithms. One potential source of such information is in-pavement sensors at an intersection, which can differentiate between motorized and non-motorized modes and track road user movements and interactions. This information, in addition to signal phasing, can be provided to the AV as it approaches an intersection and incorporated as an improved prior into the probabilistic algorithms used to classify and track movement in the AV's field of vision (a minimal sketch of such fusion follows the list below). Any connected vehicle (CV) with an Advanced Driver Assistance System (ADAS), or any AV, can form a real-time map of an intersection, provided that its on-board sensing capability is augmented by infrastructure sensors that:

  1. capture all vehicle movements in the intersection;
  2. provide full signal phase information;
  3. indicate vehicle encroachment on bicycle and pedestrian movements; and
  4. detect hazardous illegal movements.
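
As a minimal sketch of how such infrastructure data might enter an AV's perception pipeline, the fragment below fuses an intersection-supplied class prior with the on-board sensor likelihood via Bayes' rule. The class list, probability values, and function name are hypothetical illustrations, not part of any deployed system.

    # Fusing an infrastructure-supplied prior with the AV's on-board
    # likelihood (hypothetical classes and values).
    import numpy as np

    CLASSES = ["car", "bicycle", "pedestrian"]

    def fuse(prior, likelihood):
        """Bayes' rule: posterior is proportional to prior times likelihood."""
        posterior = np.asarray(prior, dtype=float) * np.asarray(likelihood, dtype=float)
        return posterior / posterior.sum()

    # In-pavement sensors suggest the object near the crosswalk is probably
    # a bicycle; the AV's own classifier is far less certain.
    prior = [0.10, 0.70, 0.20]       # broadcast by the intelligent intersection
    likelihood = [0.40, 0.35, 0.25]  # from the AV's camera/LiDAR pipeline
    for cls, p in zip(CLASSES, fuse(prior, likelihood)):
        print(f"{cls}: {p:.3f}")     # bicycle dominates the posterior

Here the infrastructure prior resolves an ambiguity that the on-board sensors alone could not.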

Figure 1. Common intersection conflict scenarios

We refer to an intersection capable of providing all this functionality as an Intelligent Intersection. An Intelligent Intersection requires the following algorithms:

  1. analysis of intersection geometry to identify possible maneuvers, conflicts and blind zones;
  2. computation of blind zone activation likelihood, given a traffic pattern and signal timing;
  3. classification of conflicts and blind zones by their importance;
  4. computation of optimal and minimal viable sensor placements in the intersection to ensure the desired coverage of blind zones (one possible formulation is sketched after this list);
  5. interpretation of sensor readings to determine traffic presence and dynamics in the blind zones; and 
  6. prediction of signal phase duration for adaptive and actuated signals.
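
One plausible way to approach algorithm 4 is as a set-cover problem: each candidate sensor location covers some subset of the blind zones, and locations are picked greedily until every zone is covered. The sketch below is a heuristic illustration with hypothetical candidate locations and coverage sets; it is not necessarily the toolbox's actual formulation.

    # Greedy set-cover heuristic for sensor placement (hypothetical inputs).
    # An exact minimal cover is NP-hard; the greedy choice yields a
    # logarithmic-factor approximation, adequate for a handful of blind zones.
    def place_sensors(blind_zones, coverage):
        """coverage maps a candidate location to the set of zones it sees."""
        uncovered = set(blind_zones)
        chosen = []
        while uncovered:
            # pick the candidate that covers the most still-uncovered zones
            best = max(coverage, key=lambda loc: len(coverage[loc] & uncovered))
            gained = coverage[best] & uncovered
            if not gained:
                break  # remaining zones are invisible to every candidate
            chosen.append(best)
            uncovered -= gained
        return chosen, uncovered

    zones = {"bz1", "bz2", "bz3", "bz4"}
    candidates = {
        "NE signal pole": {"bz1", "bz2"},
        "SW signal pole": {"bz2", "bz3"},
        "mast arm": {"bz3", "bz4"},
    }
    print(place_sensors(zones, candidates))  # (['NE signal pole', 'mast arm'], set())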

Figure 2. Analysis of intersection configuration using Intelligent Intersection Toolbox.
Left: Center lines of guideways constructed automatically from OpenStreetMap data.
Right: Actual guideways, conflict zones, and a blind zone of one conflict zone.
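
As a simplified illustration of how conflict zones like those in Figure 2 can be computed, the sketch below approximates a conflict zone as the intersection of two guideways, each modeled as a center line buffered to half a lane width. It assumes the shapely geometry library, and the coordinates are invented for the example rather than taken from a real intersection.

    # Conflict zone as the overlap of two buffered guideway center lines.
    from shapely.geometry import LineString

    southbound_through = LineString([(0, 30), (0, -30)])  # meters, local frame
    eastbound_left_turn = LineString([(-30, 0), (-5, 0), (5, 8), (5, 30)])

    HALF_LANE = 1.8  # half of a ~3.6 m lane width
    conflict_zone = southbound_through.buffer(HALF_LANE).intersection(
        eastbound_left_turn.buffer(HALF_LANE))

    print(conflict_zone.is_empty)        # False: the guideways do cross
    print(round(conflict_zone.area, 1))  # conflict zone area in square meters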

Additionally, we must be able to quantify an intersection's safety and mobility performance by analyzing its design and network configuration (Figure 2). All these algorithms will be implemented in an open-source software suite, the Intelligent Intersection Toolbox, which we began developing in the course of this project [4]. The impacts of this development will include:

  • Cities will gain a tool for evaluating the performance of their signalized intersections and, in particular, for comparing potential improvements resulting from Vision Zero (VZ) plans with those provided by an Intelligent Intersection.
  • Caltrans and the DMV are unavoidably becoming more engaged in the regulation (i.e., the design, testing, and rules of deployment) of AVs in California. At most intersections, safe operation of AVs will require augmenting their capabilities with infrastructure-based sensing. Such sensing capability must be provided by Caltrans and local transportation authorities, both because they own and operate the intersections and because this capability will be provided to all AVs. This project is a step toward specifying what these sensing capabilities should be.
  • For AV makers, it is important to know which intersections have hidden dangers, such as blind zones. Knowledge of blind zones improves an AV's safety, and additional real-time information about the presence of agents in blind zones improves its efficiency.

References:

  1. C. Ziegler. A Google Self-Driving Car Caused a Crash for the First Time. The Verge, February 29, 2016. https://www.theverge.com/2016/2/29/11134344/google-self-driving-car-crash-report
  2. D. Z. Morris. Uber's Self-Driving Systems, Not Human Drivers, Missed at Least Six Red Lights in San Francisco. Fortune, February 26, 2017. http://fortune.com/2017/02/26/uber-self-driving-car-red-lights
  3. M. Cassidy. Tesla 'Autopilot' Car Hits Phoenix Police Motorcycle. USA Today, March 28, 2017. https://www.usatoday.com/story/tech/nation-now/2017/03/28/tesla-autopilot-car-hits-police-motorcycle/99719978
  4. Intelligent Intersection Toolbox, 2018. https://github.com/ucbtrans/intelligent_intersection