Too Much Trust in Technology Is Dangerous for Drivers (and Everyone Around Them)
Amid the ongoing debate about self-driving cars, there’s already a “lite” version in use in thousands of passenger cars and SUVs. With consumer names like “Autopilot,” “Pilot Assist,” and “Dynamic Radar Cruise Control,” these partial automation systems are designed to intervene in braking, acceleration, and steering, while offering road monitoring and feedback to drivers.
For the first time, the Insurance Institute for Highway Safety (IIHS) conducted tests on 14 widely available versions of this technology, and was unequivocal in its conclusion: “There’s no evidence that [partial automation] makes driving safer, and, in fact, it can create new risks by making it easier for the driver’s attention to wander.” Only one system, Lexus Teammate, received an overall “acceptable” grade from IIHS.
As it does with its well-known crash-test ratings, IIHS set up a testing protocol to compare the systems from GM, Ford, Tesla, Lexus, Genesis, BMW, Nissan, Mercedes-Benz, and Volvo. Basic performance testing assessed how well the systems maintained speed, following distance, and lane position under prescribed conditions. Most of the systems worked as designed during these sessions, conducted in clear weather and favorable light conditions.
But because these systems are also supposed to monitor whether drivers are in position to take control in an instant, IIHS devised creative ways to test their effectiveness, including attaching ankle weights to the steering wheel to simulate hands-on pressure and draping cheesecloth over the driver’s head to test the sensors that track eye activity. Testers even crafted a fake cellphone out of foam to see whether simulated surfing would trigger a driver alert.
According to IIHS, this is how the technology is supposed to work (a rough sketch of these safeguards in code follows the list):
Partially automated systems need to ensure that the driver's eyes are directed at the road and their hands are either on the steering wheel or ready to grab it. Escalating alerts and appropriate emergency procedures are required when the driver does not meet those conditions.
All automated lane changes should be initiated or confirmed by the driver.
When traffic ahead causes adaptive cruise control to bring the vehicle to a complete stop, it should not automatically resume if the driver is not looking at the road or if the vehicle has been stopped for too long.
The sustained lane centering function should encourage the driver to share control of the steering rather than switch off automatically whenever the driver adjusts the wheel. Switching off risks discouraging drivers from staying physically engaged in the driving task.
Partial driving automation should be designed to prevent drivers from using it when their seat belt is unfastened or when forward collision warning/automatic emergency braking or lane departure warning/lane departure prevention is disabled.
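To make those requirements concrete, here is a minimal sketch, in Python, of how such safeguard logic might be structured. Everything in it, the names, thresholds, and escalation steps, is an illustrative assumption; it is not drawn from any automaker’s actual implementation or from IIHS test code.

```python
# Hypothetical sketch of the safeguards IIHS describes. All names and
# thresholds are illustrative assumptions, not any manufacturer's code.

from dataclasses import dataclass

@dataclass
class DriverState:
    eyes_on_road: bool
    hands_on_wheel: bool
    seatbelt_fastened: bool

@dataclass
class VehicleState:
    aeb_enabled: bool             # automatic emergency braking on
    lane_departure_enabled: bool  # lane departure warning/prevention on
    stopped_seconds: float        # time at a complete stop behind traffic

def may_engage(driver: DriverState, vehicle: VehicleState) -> bool:
    """Block activation unless the seat belt is fastened and the
    crash-avoidance features are enabled."""
    return (driver.seatbelt_fastened
            and vehicle.aeb_enabled
            and vehicle.lane_departure_enabled)

def attention_alert_level(driver: DriverState,
                          inattentive_seconds: float) -> str:
    """Escalate alerts the longer the driver is looking away or hands-off."""
    if driver.eyes_on_road and driver.hands_on_wheel:
        return "none"
    if inattentive_seconds < 5:
        return "visual"      # e.g., dashboard icon
    if inattentive_seconds < 10:
        return "audible"     # chime on top of the icon
    return "emergency"       # slow the car, hazards on, lock out the system

MAX_AUTO_RESUME_STOP = 30.0  # seconds; an assumed threshold

def may_auto_resume(driver: DriverState, vehicle: VehicleState) -> bool:
    """Adaptive cruise should not pull away from a stop on its own if the
    driver is not watching the road or the stop has lasted too long."""
    return (driver.eyes_on_road
            and vehicle.stopped_seconds <= MAX_AUTO_RESUME_STOP)
```

The point the IIHS criteria imply is that these are separate gates: engagement checks, alert escalation, and auto-resume limits each stand alone, so a system can satisfy one and still fail another, which helps explain how most of the tested systems racked up multiple failing marks.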
A look at the IIHS Report Card shows a slew of bad marks in categories such as Attention Reminders and Emergency Procedures, with the report concluding that “most of the systems fail multiple safety feature requirements.”
Note that these partial automation systems are not the same as fully autonomous operation, though products like Tesla’s “Full Self-Driving Capability” rely on much of the same technology (the test included both Tesla systems, and each received an overall “poor” rating). Seeing how these systems can fail even in controlled testing conditions shows how challenging it is to deploy them safely in more complex urban environments.
Strong Towns has previously reported on the continuing issues with autonomous vehicles (AVs), including robo-taxi crashes in San Francisco (the GM technology used in Cruise AVs was also included in the IIHS test and received an overall “marginal” rating), and AVs failing to yield to emergency vehicles. The Washington Post documented a series of fatal crashes involving Tesla’s Full Self-Driving mode.
In his excellent book, Autonorama, author Peter Norton argues that self-driving cars fit a century-long pattern of promises from the automotive industry that complete safety and comfort are always just one technological advance away. Historical examples include building the Interstate Highway System, car features from safety glass to supplemental restraint systems, and now the prospect of completely autonomous vehicles buzzing about our cities in error-free bliss. Yet each utopian promise serves only to enhance our commitment to car dependency, and keeps us from addressing the broader question of the best mobility choices for the greatest number of people.
Strong Towns founder Chuck Marohn acknowledges that there may be some appropriate applications for driving automation “on the open roads when we’re trying to move people or materials at speed over distance.” But when it comes to our cities, he turns that history lesson into a forward-facing caution (and a Venn diagram you must see): “Automated vehicle technology will do nothing to make our streets better places to be and, if we continue to have blind faith in it, has the very real chance of setting our cities back another generation.”
Ben Abramson is a Staff Writer at Strong Towns. In his career as a travel journalist with The Washington Post and USA TODAY, Ben has visited many destinations that show how Americans were once world-class at building appealing, prosperous places at a human scale. He has also seen the worst of the suburban development pattern, and joined Strong Towns because of its unique way of framing the problems we can all see and intuit, and focusing on local, achievable solutions. A native of Washington, DC, Ben lives in Venice, Florida; summers in Atlantic Canada; and loves hiking, biking, kayaking, and beachcombing.