The Privacy Implications of Self-Driving Cars
By: Ross Sempek
The virtues of altruism are hard to argue with. The missions of saving lives, protecting the environment, and enabling human mobility are admirable bases for worldviews, and even for business ventures. Some argue that profiting from these endeavors sullies the foundation of their inherent goodness. But we all have to make money. I see nothing wrong with choosing to make a buck while helping people in the process. However, when this magnanimous approach is used as a veneer to gloss over ethically questionable practices, consumers need to remain vigilant and skeptical. It is for this reason that I want to examine the privacy implications of autonomous vehicles.
Waymo, a Google subsidiary, has been at the helm of this new tech, and its stated reasons for introducing this novelty into our world are to save lives and to mobilize the blind and elderly. Their reasoning? Robots can’t drink and drive, they don’t get distracted, and they don’t cause accidents through carelessness. One can also extrapolate that self-driving cars will save millions on unwanted insurance fees, reduce our unslakable thirst for gasoline, and rid cities of oppressive pollution. Sounds nice enough, right?
But as with all technology, our world will not suddenly be inundated with self-driving cars, thus engendering a utopian existence. It will be a gradual (and seemingly inevitable) shift toward this futuristic milieu. OK, so this altruistic mission waged by Waymo is a long game. But if the eventual availability of autonomous vehicles will only supplant taxis, then their altruism is moot. Replacing “profit-gobbling” cabbies with robots engenders no improvement to road safety beyond this superficial description. Certainly, it will remove one element of human error, but other human drivers will continue to operate vehicles under the influence of substances, cause accidents, and inevitably take the lives of innocents. Even in the distant future when you can buy your own autonomous car, truck, or SUV, surely the ideals of democracy will not force all drivers to adopt this novel technology. So apart from a farcical dystopian future in which private enterprise dictates your ability to be mobile, people will still be able to control their own cars. Um…so I have a question, and I’m just trying to understand here: what’s the real reason for this paradigm shift that nobody asked for?
One reason I can imagine is Google’s aggressive pursuit of users’ information. This is, after all, their raison d’être, and it has spurred the new economy of surveillance capitalism, which has been adopted by virtually every other company in existence. So while I pick on Google, it’s only because their business model inspired this new landscape from which everyone else borrows. Admittedly, Waymo is not the only AI ride service on the market, and established automobile companies are investing in this technology as well. If, as the above-cited Wired article suggests, “the idea of car ownership goes kaput,” then it would make sense for bottom-line chasers to hedge their bets with a firm toehold in the emerging market of self-driving cars.
Google already has access to your queries and results if you use their search engine. They have access to all of your internet activity if you use their web browser. They know where you’ve been and where you plan to go if you use their maps service, the content of your emails if you use Gmail, the information contained in your documents if you use Google Drive, and your gameplay data if you use Stadia. But apparently that’s not enough. No, they want to unleash fleets of mobile surveillance units that will record video of everything surrounding the vehicle up to 900 feet in all directions. They will record audio of their cars’ surroundings in order to hear emergency vehicles and react accordingly. In the event of a collision, Waymo is obliged to phone the police and roll down the windows or unlock the doors in order to communicate with law enforcement. And underlying all of this is the apprehension of data: Waymo “collect[s] information about you from third parties, including but not limited to identity verification services and publicly available sources.” Imagine experiencing a collision in which you bear no responsibility, but having your information nonetheless available to law enforcement. It would be akin to being a passenger during a human-induced accident, but having the cops run a check on your criminal history anyway. Furthermore, this description of potential police interactions makes no mention of your constitutional rights.
All of this is purportedly necessary to ensure the safe operation of AI vehicles. To me, it ironically smacks of more moving parts. It’s naive to think that technology is a panacea for all that ails society. You can never remove the human element from anything that humans create, and trading the spurious promise of a higher quality of life for a precipitous loss of privacy is a bargain unequivocally lopsided in favor of private enterprise.
Now, you may want to interject here and say: “Pump the brakes, Ross. Your expectation of privacy is at its lowest when out-of-doors, and that’s exactly where all of this will occur.” To which I’ll counter: that may be true, but in my opinion the approach Google has taken exhibits a gross exploitation of our current privacy regulations. They’re changing the game, and our expectation of privacy needs to adapt alongside Waymo and company’s assumption that we will acquiesce to their unparalleled reach for control. Indeed, in addition to privacy, I’m also concerned about human autonomy. By accepting the trajectory of this technology, we’re literally handing over the wheel to a company that profits off of privacy violations.
As if this weren’t enough, Google’s chummy relationship with the US government dovetails with my privacy concerns. Autonomous vehicles have been on the radar of the Defense Advanced Research Projects Agency (DARPA) since 2004, when the agency initiated the Grand Challenge, a contest that pitted tech-savvy teams against one another to crown a champion of self-driving cars operating in a wild landscape. What followed three years later was the Urban Challenge. With millions of dollars at stake, this branch of the military incentivized teams to make self-driving cars that could safely navigate the treacherous areas of a busy city. In a continuation of a narrative that has been active since the 1960s, DARPA and its academic cohorts nurture the types of technology that promise to enable behavioral prediction and societal control. The seductive, sparkling whiz-bang approach to rebranding militarized tech as consumer must-haves precludes any discussion of the ethics that drive this progress. This is why I’m writing this post, and it’s exactly why you should question the motives of billionaire altruists.
Ross Sempek is a recent MLIS graduate as well as a volunteer for the Multnomah County Library System in the beautiful state of Oregon. As a makerspace program assistant, he facilitates a weekly gaming club for local teens. He comes from a blue-collar family that values art, literature, and an even consideration of all worldviews. This informs his passion for intellectual freedom, which he considers to be the bedrock for blooming to one’s fullest potential. It defines this country’s unique freedoms and allows an unfettered fulfillment of one’s purpose in life. When he is not actively championing librarianship, he loves lounging with his cat, cycling, and doing crossword puzzles; he’s even written a handful of puzzles himself.