Investigators from the National Highway Traffic Safety Administration and the National Transportation Safety Board are descending on Tempe, Ariz., to investigate the death of a woman struck Sunday night by an Uber self-driving car.
Given the NTSB’s well-deserved reputation for thoroughness, and given the amount of data that should be available from the witness, the physical evidence and the car’s data storage devices, there is little reason to doubt that NTSB will be able to identify any and all causes of the collision, and present a comprehensive report to the public—perhaps within a year.
While police, commentators and the press chase issues of fault and liability, safety advocates and the self-driving community will be focused squarely on how this collision could have occurred when Uber’s suite of sensors, hardware and software all exist specifically to avoid collisions like this one.
TechCrunch’s Devin Coldewey published an informative piece on Monday describing Uber’s self-driving hardware and its role in detecting pedestrians and vulnerable road users.
The Uber’s redundant, parallel suite of sensors includes radar, LIDAR and a camera array, all focused on sensing and seeing people and objects in the roadway, day or night.
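For readers who think in code, the redundancy argument can be made concrete with a small, purely hypothetical sketch. Nothing below is drawn from Uber’s software; the names (SensorReading, obstacle_in_path) and the simple “any sensor counts” fusion rule are invented for illustration. The point is only that with genuinely redundant sensors, a single weak or failed channel should not, by itself, explain a missed pedestrian.

```python
# Illustrative toy example only -- not Uber's system or architecture.
# With an OR-style fusion rule, an obstacle is flagged if ANY sensor reports it,
# so one degraded sensor alone should not defeat detection.
from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str          # e.g. "radar", "lidar", "camera"
    detected: bool     # did this sensor report an object in the path?
    confidence: float  # 0.0 .. 1.0

def obstacle_in_path(readings: list[SensorReading], threshold: float = 0.5) -> bool:
    """Return True if any single sensor reports an obstacle above the threshold."""
    return any(r.detected and r.confidence >= threshold for r in readings)

readings = [
    SensorReading("radar", detected=True, confidence=0.7),
    SensorReading("lidar", detected=True, confidence=0.9),
    SensorReading("camera", detected=False, confidence=0.2),  # e.g. too dark
]
print(obstacle_in_path(readings))  # True: redundancy covers the weak camera signal
```

Under a rule like this, darkness that blinds the camera still leaves radar and LIDAR to carry the detection, which is precisely why the failure to avoid Ms. Herzberg is so puzzling.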
So how could they all not have detected Elaine Herzberg walking at least 40 feet across an open street? By all accounts she was not cloaked in a high-tech, radar-evading, light-deflecting material designed for stealth; she was an ordinary person in ordinary clothing, doing what ordinary people do—crossing a street.
She was also pushing a bicycle laden with bags, which should have created a larger profile and presented materials that would have made her even easier for the car’s sensors to detect.
Unfortunately, there are numerous ways autonomous vehicle hardware and software can fail, alone or in combination.
It is important to note that there is no reason to believe any of the following occurred here; these are hypothetical discussion points, offered for illustration only.
First, sensors may not sense.
Ms. Herzberg was walking at night, and less light makes detection more difficult for the human eye and for camera sensors. Absence of light should be irrelevant to LIDAR and radar performance, but each sensor can separately be obstructed, dirty, soap-scummed or fly-specked.
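How might a system notice that one of its own sensors has gone quiet? Again as a hypothetical sketch rather than anything taken from Uber’s software (the function name and the numbers are invented), one common-sense approach is to compare a sensor’s recent output against its expected baseline and flag a large shortfall:

```python
# Hypothetical illustration: flag a sensor whose recent output falls far below
# its expected baseline, which can indicate obstruction, dirt or a dropped feed.
def sensor_looks_degraded(recent_returns: int, expected_returns: int,
                          min_ratio: float = 0.5) -> bool:
    """True if the sensor produced far fewer returns than expected."""
    if expected_returns <= 0:
        return True  # no baseline at all is itself suspicious
    return recent_returns / expected_returns < min_ratio

# A lidar that normally yields ~100,000 points per sweep but suddenly yields 20,000
print(sensor_looks_degraded(recent_returns=20_000, expected_returns=100_000))  # True
```

Whether Uber’s vehicles run any check of this kind, and whether such a check fired that night, is exactly the sort of question the investigation should answer.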
Video footage reviewed by police will only be as good as the sensor, aperture, light-gathering and resolving power of the camera and lens. In other words, the footage they have seen may not correspond to what a human could see that evening, or would see under similar lighting conditions.
Investigators will undoubtedly attempt to verify that all sensors were online, clean, calibrated and operational. NTSB may ultimately recommend additional sensors, or different kinds of sensors such as thermal or infrared cameras, to fill any detection gaps in similar operating domains.
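One way to think about such a gap, again as a speculative sketch with invented names rather than a description of Uber’s system or of NTSB methodology, is a cross-sensor consistency check: if one sensor persistently disagrees with the majority of its peers, either that sensor is failing or the conditions exceed what it can handle.

```python
# Hypothetical sketch: count how often each sensor disagrees with the majority
# of its peers across a window of frames; persistent disagreement suggests either
# a faulty sensor or conditions (darkness, glare) outside its effective range.
from collections import Counter

def disagreement_counts(frames: list[dict[str, bool]]) -> Counter:
    """frames: per-frame map of sensor name -> 'object detected?'."""
    disagreements = Counter()
    for frame in frames:
        votes = list(frame.values())
        majority = votes.count(True) > len(votes) / 2
        for name, detected in frame.items():
            if detected != majority:
                disagreements[name] += 1
    return disagreements

frames = [
    {"radar": True, "lidar": True, "camera": False},
    {"radar": True, "lidar": True, "camera": False},
    {"radar": True, "lidar": True, "camera": True},
]
print(disagreement_counts(frames))  # Counter({'camera': 2})
```

A pattern like the one above, with the camera repeatedly outvoted after dark, is the kind of evidence that could support a recommendation for thermal or infrared sensing in low-light operating domains.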