Who Drives Your Car?

With all the hype around self-driving cars, from Google’s prototypes to Tesla’s new Autopilot feature, the journey toward making them accessible to the public is going to be a bumpy ride. Elon Musk, CEO of Tesla, claims that fully autonomous cars could become a reality within two years, as reported by Fortune. It is an admirable claim, but it overlooks the nuanced decisions drivers make every day. While driverless cars are not prone to human weaknesses such as intoxication or fatigue, they are not equipped to handle ethical and moral dilemmas that even we, as humans, find challenging.

Self-driving shifts decision-making from the instinctive, split-second reactions of a human driver to coded decisions generated by the vehicle’s autopilot software. For example, if a deer jumped into the path of a car, a driver might instinctively swerve to avoid it and, in the process, crash into another car or kill a pedestrian. That would be an unfortunate outcome, but hardly the driver’s fault; they were startled by the deer. Now pretend the car is self-driving, and instead of a deer there is a pedestrian rushing across the street. The choice is to hit the pedestrian or to swerve and risk hitting oncoming traffic, and it is a premeditated choice made by the programmers of the autopilot software. That raises not only the issue of accountability and responsibility, but also the ethical question of which option to choose when both options are bad.

Most drivers are selfish. Their objective is to get from point A to point B in the least amount of time possible, and when conflicts arise, a driver is more likely to prioritize their own interests over those of others. For example, while waiting for passersby at a crosswalk, a human driver might grow impatient after a few minutes and slowly inch forward, forcing pedestrians to let the car through. A self-driving car, by contrast, would be programmed to wait for as long as it detected a person moving in front of the car, testing its passenger’s patience.
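
To make the contrast concrete, the crosswalk behavior described above could, in principle, be coded as a simple rule: as long as the car’s sensors report a person in its path, the car stays put. The function name and sensor interface below are hypothetical, a minimal sketch rather than any manufacturer’s actual autopilot logic.

    # Hypothetical crosswalk rule: yield for as long as any pedestrian
    # is detected in the car's path, however long that takes.
    import time

    def wait_at_crosswalk(perception, poll_interval=0.1):
        # `perception` stands in for the car's sensor stack; it is assumed
        # to expose a pedestrian_in_path() check returning True or False.
        while perception.pedestrian_in_path():
            time.sleep(poll_interval)  # keep waiting; patience is not a factor
        # only once the path is clear may the car proceed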

The issue of self-preservation becomes prominent in scenarios where a driver faces a dilemma between risking harm to themselves and saving a crowd of people. A human driver would likely swerve to avoid personal harm. A self-driving car, however, must be pre-programmed to choose. It might subscribe to a utilitarian model of thinking: the greatest good for the greatest number of people. But few human passengers would support sacrificing their own lives to save others.
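
What a utilitarian rule might look like in code is disarmingly simple, and that is precisely the problem: the math treats the passenger as just another person. The maneuver names and harm estimates below are purely illustrative assumptions, a sketch of the idea rather than a real control system.

    # Hypothetical utilitarian rule: pick the maneuver expected to harm the
    # fewest people in total, even if those people are the car's occupants.

    def choose_maneuver(expected_harm):
        # expected_harm maps each maneuver to the estimated number of people hurt
        return min(expected_harm, key=expected_harm.get)

    # Illustrative numbers only: staying the course hits five pedestrians,
    # swerving into a barrier injures the one passenger.
    print(choose_maneuver({"stay_course": 5, "swerve_into_barrier": 1}))
    # prints "swerve_into_barrier" -- the passenger loses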

The trouble with self-driving cars is that these ethical dilemmas get pushed onto the car manufacturers. With no standard code of ethics dictating which choice a self-driving car should make when both options are bad, the moral terrain of self-driving cars becomes a slippery slope. The unresolved questions of accountability and responsibility for accidents also raise a red light against speeding forward through these issues. Perhaps one day we’ll be able to call a self-driving Uber, but in the meantime, we shouldn’t toss out our driver’s licenses just yet.
