The Case for Self-Driving Cars

In February of 2006, Laura Gorman got into a car with her best friend, Jessica Rasdall. It was her last car ride ever.

According to a 2009 ABC News story, Laura and Jessica had just finished their restaurant shift and decided to go to a club. A few drinks and lots of dancing later, as Jessica sped down Florida's Interstate 275 with a blood alcohol level 1.5 times the legal limit, she lost control of the car. Only a mile from their dorm, the vehicle skidded off the highway and crashed into a tree. Laura died on the spot. Jessica survived, but needed facial reconstruction. In the years since the drunk-driving accident, the Gormans and the Rasdalls have not spoken.

Drunk-driving accidents are frequent in the United States. Driver-education company Aceable reports that driving under the influence (DUI) is the leading cause of fatal crashes, contributing to almost one-third of all road deaths. Such accidents are easily preventable at the individual level (don't drink and drive), yet the vast majority of fatal traffic incidents in the US stem from human error. Speeding, the second leading cause of road deaths, accounts for 31 percent of victims, and distracted driving (using your phone behind the wheel) is responsible for another 16 percent, according to Aceable. Add these up (roughly 33, 31, and 16 percent) and a whopping four out of five victims would still be alive today had it not been for avoidable mistakes. Surprisingly, laws against drunk and distracted driving do not work: despite heavy penalties, the past 50 years have seen a massive increase in the proportion of traffic accidents caused by human negligence. Jessica Cicchino, a vice president of research at the Insurance Institute for Highway Safety, notes that "laws against distracted driving do not reduce crashes."

So why do laws against distracted driving not reduce crashes? Why aren't human drivers deterred by the potential repercussions, legal and likely existential, of texting while driving? My view is that biology is the culprit. Humans simply aren't wired to focus for hours on end on something as dull as the Massachusetts Turnpike. In fact, as Nobel Prize-winning economist Daniel Kahneman pointed out, sustained conscious focus is hard for our brains to begin with, let alone focus on an insipid highway. The road is tedious and mostly uneventful; after all, accidents occur only once every 165,000 miles. That is why we play music and talk to passengers when we drive. The dopamine rush of a Facebook notification or a Snapchat from an ex will always be more thrilling than making sure you're driving in your own lane. Conscious policing is hard, practically impossible, and so humans will always relapse into error.

Regardless of how much legislation is passed, there will always be people who think they're sober enough to drive. By nature, humans are imperfect drivers: careless, rule-breaking, irresponsible, and wholly unfit to pilot vehicles. The question isn't whether we need non-human drivers now; we've always needed them. The only uncertainty in the driver equation has been whom to replace them with. Until recent years, we only had tools that assisted human drivers or alerted them to mistakes (e.g., cruise control and blind-spot warning lights). Today, fully autonomous, self-driving cars provide a viable alternative.

Self-driving cars (also known as Level Five cars) are vehicles that, equipped with sensors such as cameras, radar, and lidar, plus an intelligent decision-making program, can navigate between places without any active human intervention. They're hard to visualize: self-driving cars have no steering wheel and no pedals. They have no rear-view mirrors and no gearbox. They need not have headlights or rear lights, handbrakes or blinkers either. In fact, it is physically impossible for the humans in the car to influence its trajectory; we lose the notion of a driver altogether. With enough self-driving cars on the road, autonomous vehicles will communicate directly amongst themselves to coordinate traffic maneuvers. Lanes will become irrelevant. Speeding tickets will become archaic. And most importantly, we will have drivers that are perfect and error-free; drivers that will not drive under the influence or text and drive; drivers that are inhuman.
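To make the architecture concrete, here is a minimal sketch in Python of the perceive-plan-act loop that underlies such systems. Every name, number, and sensor model below is an invented stand-in for illustration; real software stacks fuse camera, radar, and lidar streams into rich world models, not a single distance estimate.

```python
# A toy perceive-plan-act loop: the control pattern at the heart of
# autonomous driving. All values and function names are illustrative.

import random

def perceive(radar_reading_m: float, lidar_reading_m: float) -> float:
    """Fuse two noisy range sensors into one distance estimate."""
    return (radar_reading_m + lidar_reading_m) / 2.0

def plan(distance_to_obstacle_m: float, speed_mps: float) -> str:
    """Brake if the stopping distance exceeds the available headway."""
    stopping_distance_m = speed_mps ** 2 / (2 * 6.0)  # assume ~6 m/s^2 braking
    return "BRAKE" if stopping_distance_m >= distance_to_obstacle_m else "CRUISE"

def act(maneuver: str) -> None:
    """Forward the chosen command to the (simulated) actuators."""
    print(f"actuator command: {maneuver}")

speed_mps = 25.0  # roughly 55 mph
for _ in range(5):
    true_distance_m = random.uniform(20, 120)        # simulated world
    radar = true_distance_m + random.gauss(0, 1.0)   # noisy sensor readings
    lidar = true_distance_m + random.gauss(0, 0.3)
    act(plan(perceive(radar, lidar), speed_mps))
```

The point of the pattern is that this loop runs many times a second, never tires, and never checks Snapchat.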

Of course, we are far from this reality. Level Five has not yet been achieved: Google has been developing the Google Car since 2009, and efforts will continue well into the next decade. Lyft, Ford, General Motors, and Tesla are working on self-driving iterations of their own. Moreover, the failures of autonomous vehicles have been highly publicized. In March 2018, a self-driving Uber Volvo struck and killed a pedestrian in Tempe, Arizona, resulting in the suspension of Level Five research and development at Uber. Yet prospects for the self-driving industry have never looked brighter: billions of dollars are pouring into Silicon Valley startups and well-established companies focused on autonomous vehicles. Most recently, in August, Toyota invested $500 million in Uber's self-driving outfit. Ford has committed to investing $4 billion in autonomous vehicles by 2023. In Las Vegas, Lyft has partnered with Aptiv; if you're lucky, you could be picked up by a self-driving car with a human safety operator on board.

Computers are also getting smarter by the day. As self-driving cars drive more miles, they generate more data to learn from. That data is fed back into the decision-making algorithms, allowing autonomous vehicles to learn from their mistakes. Waymo, Google's self-driving arm, for instance, has data on eight million fully autonomous miles, in addition to data on five billion miles its cars have driven in simulation. So even though our computer systems do not yet display the intellectual maturity they need to fully replace humans, it is only a matter of time until their decision-making consistently exceeds legal standards for safety. Our best drivers will then no longer be subject to biological constraints like faltering concentration. As machines, they will be perfect. So when functional self-driving comes around, it will be our moral responsibility to adopt it. We owe safety to others, and to ourselves.
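Here is a toy version of that feedback loop: driving logs hard cases, retraining on them raises the system's skill, and the improved system goes back on the road. The "skill" scalar and the update rule are invented for illustration; Waymo's real pipeline is, of course, far more elaborate.

```python
# A toy miles -> data -> retraining loop. All quantities are invented.

import random

def drive(skill: float, miles: int) -> int:
    """Simulate driving; return how many hard cases were logged."""
    return sum(1 for _ in range(miles) if random.random() > skill)

def retrain(skill: float, hard_cases: int) -> float:
    """Each logged hard case nudges skill upward (toy update rule)."""
    return min(0.999, skill + 0.0001 * hard_cases)

skill = 0.90  # fraction of situations handled correctly
for generation in range(5):
    logged = drive(skill, miles=1_000)
    skill = retrain(skill, logged)
    print(f"generation {generation}: {logged} hard cases, skill now {skill:.4f}")
```

Notice the virtuous cycle: as skill rises, fewer hard cases occur, and each remaining one is worth logging. Unlike a human driver, the fleet never forgets a lesson, and every car inherits what any one car learns.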

Nonetheless, skeptics have raised concerns about the effect of self-driving cars on jobs. Millions of truck and taxi drivers, chauffeurs, and highway patrol officers would find their employment at risk with the mass adoption of autonomous vehicles. However, self-driving is poised to contribute up to $7 trillion to the US economy, according to analysts at Intel and Strategy Analytics. Although the boon will primarily benefit the tech sector, a reinvigoration of several adjacent industries, like auto repair, is also expected. Since autonomous vehicles will likely be deployed, at least initially, as part of ride-hailing services, white-collar jobs in the service industry are predicted to rise. And a spike in ride-sharing would mean fewer vehicles on the road, which is likely to boost the construction industry: with less traffic to handle, we could rebuild our cities around humans rather than around parking lots and highways. Thus, our Luddite intuitions are likely wrong; self-driving vehicles will largely displace jobs into others at the same skill level, while creating new ones outright in the tech sector.

More serious challenges to functional self-driving are ethical ones. Should an autonomous vehicle swerve, crash, and kill its passenger, or run over a child who unexpectedly sprints into its path? Given that a collision is inevitable, should a car strike an 80-year-old or a teenager? And by extension, should a car always prioritize the safety of its passenger over that of pedestrians, or should buyers be given the choice? Resolving such quandaries raises complicated questions about human moral intuitions. And yet, philosophers like Nicholas Evans at UMass Lowell are working with engineers to build algorithms according to specific ethical frameworks. Though answers to such problems may demand deeper questioning of the self than of code, moral questions, like technical ones, will be resolved in due course.
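To give a flavor of what "building an ethical framework into an algorithm" might mean, here is a deliberately simplified sketch in which a framework is encoded as a weighted cost over candidate maneuvers. The scenario, risk numbers, and weights are all invented for illustration; neither Evans's work nor any production system reduces ethics to two numbers.

```python
# Illustrative only: an ethical framework as a weighted harm function.
# Risk figures are made up for the example.

CANDIDATES = {
    "swerve_into_barrier": {"passenger_risk": 0.60, "pedestrian_risk": 0.00},
    "brake_in_lane":       {"passenger_risk": 0.05, "pedestrian_risk": 0.80},
}

def choose(maneuvers: dict, w_passenger: float, w_pedestrian: float) -> str:
    """Pick the maneuver with the lowest weighted expected harm."""
    def cost(risks: dict) -> float:
        return (w_passenger * risks["passenger_risk"]
                + w_pedestrian * risks["pedestrian_risk"])
    return min(maneuvers, key=lambda name: cost(maneuvers[name]))

# A utilitarian weighting (all lives equal) sacrifices the passenger...
print(choose(CANDIDATES, w_passenger=1.0, w_pedestrian=1.0))  # swerve_into_barrier
# ...while a passenger-first weighting flips the decision.
print(choose(CANDIDATES, w_passenger=1.0, w_pedestrian=0.2))  # brake_in_lane
```

The code is trivial; the hard part, as the paragraph above suggests, is deciding which weights a society is willing to sign off on.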