Self-Driving Cars: Here There Be Dragons
Pixar presented us with a dystopian future. Why did we ignore it?
If Dang Dude What The Heck controlled the universe, every city would ban cars. I’ve written about that here. But unless a huge organizing push that no one knows about reaches its final stages soon, we will have cars for a long time. Unfortunately, I’ve never been able to keep my mouth shut, so this newsletter has more of my opinions about cars. In this case I’ll put forward a harm reduction argument. Not my favorite thing to do, but sometimes necessary. My argument runs as follows: self-driving cars should not exist.

If one – sorry for that phrasing, nothing more self-indulgently “high prose” than the impersonal pronoun – needs a harbinger for the arrival of “the future,” a fully automated self-driving car does a pretty good job. Practically every piece of science fiction at this point requires some type of autonomous travel technology. The self-driving trucks in Logan, the automated cars in Blade Runner, the robot vehicles in I, Robot, that weird train in Westworld. All self-driving. Science fiction’s twin genre, fantasy, has its own trope: a space on a map marked “Here There Be Dragons.” It marks the unknown, warning of hidden dangers and untold catastrophes. Keep that idea in mind for this article.

The obsession with non-human-controlled transportation didn’t appear out of the blue. For the past few years, various engineers and business owners have promised self-driving cars, usually sooner rather than later. Thousands of weirdos have tied themselves to the Elon Musk brand, part of which promises self-driving Teslas. These promises haven’t been completely empty, either. Some cars already offer various forms of self-driving, including automated parking and accident-avoidance technologies. Like many new quote-unquote “innovations,” the public at large has embraced them with open arms. The press, and many purchasers, have praised these new products in an almost obscene fashion. They have met almost no widespread regulatory or popular resistance.
This does not mean that self-driving technology has perfected itself. Far from it. This, if you haven’t noticed, is the part where I get to my argument. Many pressing problems surround this tech, enough to outweigh any positives self-driving cars might have: the outsourcing of moral decisions to corporations, serious security concerns, declining ownership rights, and increased polarization between those who do and don’t own self-driving vehicles. Let’s go through each of them.
Prepare yourself. This next sentence reads like a freshman-year philosophy student wrote it. That doesn’t mean it’s not true, just that it’s a little obvious. So here we go. Every conscious activity humans undertake requires some sort of moral choice. This includes driving a car. While a lane change, a right turn on red, or braking for a rabbit might not come up as a test question in a moral philosophy class, they nonetheless require the driver to make a decision with moral consequences. For example, switching lanes to pass a slow-moving car, no matter how well executed, requires the driver to decide that their need for speed outweighs the potentially dangerous effects of the passing maneuver. Like the extra gas expended during acceleration adding toxic fumes to the environment and worsening climate change. The same applies to braking for a rabbit crossing the road. Braking to save the life of a rabbit means you value the rabbit’s life over the safety of the person behind you, who might not see your sudden stop and ram into you, causing an accident. While these seem like outlandish situations, they exist in the realm of the possible, and therefore we must take them into consideration.
If regular, analogue drivers have to think about these things, then self-driving car manufacturers must think about them too. The designers and engineers of these cars have to program in every action a car could take in every potential situation. They have to make moral decisions without knowing the context in which those decisions will play out. This means that people who buy self-driving cars outsource their moral choices to companies like Toyota, Ford, or GM. While even the possibility that someone could outsource their moral decision-making to a soulless, profit-driven company deserves a book-length examination, I’ll address one big practical concern here. It revolves around utilitarianism. Utilitarianism, as described by John Stuart Mill, argues that the morally correct decision brings about the most good for the most people. Most people feel comfortable with this definition of morality, at least in broad terms – disregarding any blithering Ayn Rand followers and many elected officials, of course. Most individuals, for instance, when presented with a scenario where they must choose between letting one or four people die, choose to let the single person die and save the four. If you have watched The Good Place, or just paid attention at any point in the last ten years, you know that philosophers call this the “Trolley Problem.” However, if presented with a slightly different scenario – sacrifice yourself to save four people, or let those four die – most people end up saving themselves. Human instinct runs toward self-preservation.
The question then becomes: how do self-driving cars solve the trolley problem? Do we let corporations program a moral course of action for us? Codifying utilitarianism, or any moral code, into a car seems like dangerous territory to me. Companies should not have that sort of standing over our actions. Not enough questions have been asked about whether it is even moral, or good, to let others make these decisions for us.
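To make the abstraction concrete, here is a deliberately naive toy sketch of what “codifying utilitarianism” into a car could look like. Every name and number in it is hypothetical – no manufacturer has published code like this – and that is exactly the point: someone at a company picks the weights, not the driver.

```python
# Toy sketch: a hard-coded utilitarian calculus. All names and weights
# are hypothetical illustrations, not any real manufacturer's code.
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible maneuver and who it endangers."""
    name: str
    occupants_at_risk: int   # people inside the car
    bystanders_at_risk: int  # people outside the car

# The "moral constants" -- chosen by an engineer, not by you.
OCCUPANT_WEIGHT = 1.0
BYSTANDER_WEIGHT = 1.0

def harm(outcome: Outcome) -> float:
    """Total weighted harm for one maneuver."""
    return (OCCUPANT_WEIGHT * outcome.occupants_at_risk
            + BYSTANDER_WEIGHT * outcome.bystanders_at_risk)

def choose(outcomes: list[Outcome]) -> Outcome:
    """Pick whichever maneuver minimizes weighted harm."""
    return min(outcomes, key=harm)

scenario = [
    Outcome("stay in lane", occupants_at_risk=0, bystanders_at_risk=4),
    Outcome("swerve into barrier", occupants_at_risk=1, bystanders_at_risk=0),
]

print(choose(scenario).name)  # "swerve into barrier"
```

Notice that the trolley problem got “solved” in two lines of constants. Quietly bump `OCCUPANT_WEIGHT` to 5.0 and the car protects its owner at the expense of four strangers, and no buyer would ever know the difference.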
Moral issues are not the only thing plaguing self-driving cars. Security risks also abound. Over the last few years the world has watched hackers tap into massive private and public databases, accessing the personal information of millions, if not billions, of people. Companies like Sony and Target have weathered major data breaches. To suggest that cars wouldn’t face the same level of attack smacks of the ludicrous. Concerns around hacking only grow as we inch closer to a world of complete interconnectivity. At the risk of coming off like Andy Rooney: when refrigerators can talk, we need to worry. This interconnectivity – at one point dubbed the “Internet of Things” by our Silicon Valley overlords – would surely include personal vehicles connected to the internet. At the very least, the navigation systems these cars use would require some sort of hackable connection. I am not going out on a limb by suggesting that hackers could access the millions of cars produced by a company like GM.
Other security concerns exist as well. All software and hardware platforms eventually need upgrades and new parts. As evidenced by multiple product failures, these upgrades do not always go smoothly. Just look at the Galaxy Note 7. And while product recalls occur semi-regularly in the current market, a regular person can replace a broken muffler far more easily than they can re-code a self-driving car. Having to replace a windshield wiper is an annoyance, but not a life-threatening one, unlike, say, a bug in the code of a self-driving car.
A third concern I have with self-driving cars comes from declining ownership rights. While this may sound weird coming from Dang Dude, What The Heck, a proudly socialist newsletter, it does have some merit. Our current economic system has long enshrined private ownership of property as the ultimate totem of participation. The government has long held, while not always enforcing it, that, within reason, an individual can do whatever they want with what they own. That ideal has weakened in recent years, though, especially when it comes to cars. As engineers pack more and more advanced computer systems into cars, legislators have passed a growing amount of legislation designed to make it illegal for car owners to “tinker.” This trend would only accelerate with self-driving cars. While I certainly don’t think private ownership marks the be-all and end-all of rights, I do think tinkering is a wonderful hobby and a fine way to build skills. It certainly should not be illegal.
The fourth and final reason I do not like self-driving cars: they widen the gap between the rich and the poor, certainly in the short term and likely in the long term. Self-driving cars, like all new technology, will carry prohibitive costs when they first arrive on the market. And yet much of the marketing and lobbying casts them as a potential savior of humanity. In a subtle but real way, this marks people who can’t afford a self-driving car as drains on society. The same evolution happened with cell phones. At some point it became so common to have one that not having one became a sign that something was off about that person.
The U.S.’s current fascination with new technology and “disruption” has mostly been treated as a net positive. But with the recent controversies surrounding other “disrupters” like Theranos, Soylent, and Uber, it is high time we took a longer look at new “disruptive” technologies and made some hard decisions about where we take them. The question can’t just be “can we do it,” but “should we do it.” Sometimes dragons really do be here.