Edge Case of the Week: Red Lights

23rd Aug 2023 dRISK

For the “Edge Case of the Week,” dRISK would like to nominate a class of events that shouldn’t be edge cases at all. Red lights.

Surely, stopping at red lights should be among the first tasks any driver learns, whether human or autonomous. You’d think red lights would be trivial for AI; after all, even the simplest LEGO robots have color sensors. But the few ADAS and Level 2 autonomous systems that can stop at red lights are much less reliable than humans (just try it sometime on a hands-free system with your foot over the brake, or see the dozens of examples documented on social media). And even though Level 4 fully driverless systems have come a long way since the early days of blowing reds, they still run the occasional red light, as recently as a month ago in San Francisco.

Human driver running a red light, simulated in NVIDIA DRIVE Sim. Full testing of a huge range of red-light-running scenarios is possible in sensor-real simulation.

Moreover, as risky as it is that an autonomous vehicle might fail to stop at a red, what matters even more is how AVs behave when the lights themselves, or the human drivers around them, act unpredictably. Will the AV know how to handle a power outage, when all the stoplights are dark? What about unusual configurations, such as those found in the UK, where green and red lights can be lane-specific and unusually positioned, and where the consequences of mixing them up could be dire?
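
To make the question concrete, here is a minimal sketch (in Python, with hypothetical names and a hypothetical policy, not any vendor’s actual logic) of how a planner might map perceived signal states onto conservative behaviors, including the dark-signal case. The one grounded rule it encodes is that in many jurisdictions an unlit signal is treated as an all-way stop.

```python
from enum import Enum, auto

class SignalState(Enum):
    GREEN = auto()
    AMBER = auto()
    RED = auto()
    DARK = auto()      # no aspect lit, e.g. during a power outage
    UNKNOWN = auto()   # perception could not classify the signal head

def approach_action(state: SignalState) -> str:
    """Map a perceived signal state to a conservative approach behavior.

    Hypothetical policy sketch for illustration only.
    """
    if state is SignalState.GREEN:
        # A green light is not a guarantee: scan the crossing lanes
        # for red-light runners before committing to the junction.
        return "proceed, scanning the crossing lanes"
    if state is SignalState.AMBER:
        return "stop if it can be done safely, otherwise clear the junction"
    if state is SignalState.RED:
        return "stop at the stop line"
    # DARK or UNKNOWN: fall back to the most conservative behavior.
    # In many jurisdictions a dark signal is treated as an all-way stop.
    return "treat as an all-way stop and yield before entering"
```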

Or what about one of the edgiest cases: when a human driver runs a red and t-bones your vehicle at high speed? Last week a fully autonomous vehicle was hit by a red-light runner in San Francisco. It wasn’t the AV’s fault in a legal sense, and thankfully the AV had no passenger at the time. But might a human driver have braked hard enough, and soon enough, to avoid the crash altogether?

A conservative human driver approaching a light that has just changed from red to green knows to slow down in case of a red-light runner in the crossing lane. What’s more, humans have exquisite motion detection, honed over millions of years of evolution and connected to fast-response pathways that can trigger braking within half a second. Can AVs demonstrate the same capabilities? Another driver running a red is responsible for less than 2.5% of all human accidents, but for more than 5% of fatalities involving semi-autonomous systems. For fully driverless vehicles reliant on traditional LIDAR and teleoperation, a t-bone scenario can be particularly hard because of the very limited information about other vehicles approaching at high speed from the side and from behind occlusions.
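
That half second matters kinematically. As a back-of-the-envelope illustration (an assumption-laden sketch, not dRISK’s methodology), the check below computes whether a vehicle can stop short of the conflict point given a reaction time and a braking deceleration; the 8 m/s² deceleration and the example speeds and distances are illustrative values, not measured ones.

```python
def can_stop_before_conflict(speed_mps: float,
                             distance_m: float,
                             reaction_s: float = 0.5,
                             decel_mps2: float = 8.0) -> bool:
    """Rough kinematic check: reaction-time travel plus braking
    distance v^2 / (2a) must fit within the distance to the
    conflict point. All parameter values are illustrative."""
    stopping_m = speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)
    return stopping_m <= distance_m

# Ego at 13 m/s (~30 mph), conflict point 20 m ahead:
# a 0.5 s reaction needs 6.5 + 10.6 ≈ 17.1 m and just makes it,
# while a 1.0 s reaction needs 13.0 + 10.6 ≈ 23.6 m and does not.
print(can_stop_before_conflict(13.0, 20.0))                  # True
print(can_stop_before_conflict(13.0, 20.0, reaction_s=1.0))  # False
```

Doubling the reaction time turns an avoidable crash into an unavoidable one, which is why a system that fully exploits its sensors should, in principle, be able to beat the human half second.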

Autonomous vehicles could have superhuman responses if they took better advantage of their sensorium and compute. Even better, if trained with next-generation techniques, perhaps AVs could develop something approaching a conservative driver’s common sense. As AVs become a commercial reality, let’s insist that they handle everything that can happen around a red light not merely as well as the average human driver, but many times better.