By: Lisa Hoover
Technology and privacy have been making headlines again recently, and the ways technology invites us to give up our privacy are expanding rapidly – and becoming more novel all the time.
A March opinion piece in The New York Times really blew my mind; cardiologist Dr. Eric Topol participated in a two-week experiment that used a smartphone app to track the food he ate, what he drank, any medications he took, and how much he slept and exercised.
So far, this might not sound that different from the average fitness app, but it doesn’t stop there. He also wore a sensor that monitored blood glucose and sent in a stool sample for analysis of his gut microbiome.
“All of my data, amassed with similar input from more than a thousand other people, was analyzed by artificial intelligence to create a personalized diet algorithm. The point was to find out what kind of food I should be eating to live a longer and healthier life,” Topol said.
“It turns out, despite decades of diet fads and government-issued food pyramids, we know surprisingly little about the science of nutrition,” he explained. The experiment defied his expectations as to which foods were healthy for him. Topol argues that science is now telling us there is no such thing as a universal diet – no single way of eating that is perfect for everyone.
To this end, some companies are selling DNA kits to tell you what foods you should eat, although Topol argues that a truly personalized diet would require a lot of information and number crunching about each person and would need to consider things beyond DNA like lifestyle, medical history, and the environment.
Meanwhile, San Francisco recently made a big splash in the news by banning facial recognition technology. Kate Conger reports that the Board of Supervisors banned the use of the technology by “the police and other agencies,” making it the first major American city to block the use of such tools in criminal investigations.
On the other hand, Deutsche Welle reports that a school in Hangzhou has started using facial recognition to learn and track all sorts of information about its students. Students’ faces are used to track everything from when they arrive at school to what they eat for lunch and which library books they borrow. Echoing the Topol article, the school argues that tracking what students eat can identify those who are not eating healthily. It also argues that keeping track of library book borrowing will help it make collection development decisions around what is most interesting to the students.
The system also keeps track of whether students have arrived in their classrooms on time; teachers get smartphone messages telling them which students were late. The system was apparently also designed to track facial expressions to determine whether students were concentrating or bored, but that feature has been turned off (for now).
Hitting a little closer to home, BuzzFeed recently reported that the New York State Department of Education has instructed Lockport schools to delay use of facial recognition technology. The school district was slated to become the first in the US to “pilot a facial recognition system on its students and faculty,” according to the article.
The school was planning a soft launch in early June to “troubleshoot the system, train district officials on its use, and discuss proper procedures with local law enforcement in the event of an alert triggered by the facial recognition tech,” the article said.
The district funded the system through the New York Smart Schools Bond Act, which BuzzFeed says was intended to help schools pay for instructional technologies. The system reportedly cost $1.4 million and is intended “as an early warning system that informs staff of threats including guns or individuals who have been identified as not allowed in our buildings,” according to a FAQ page distributed to parents. Lockport plans to keep the footage for 60 days, after which it will be erased.
The state Department of Education has temporarily blocked the plan after the New York Civil Liberties Union wrote letters asking for the project to be stopped. State Assembly Member Monica Wallace has also introduced a bill that would ban use of the technology for a year in order to further study the issue, according to BuzzFeed.
Although the Hangzhou school says its data is used only for internal purposes – in contrast to the Lockport system, which will involve law enforcement – both systems raise serious privacy concerns in my eyes. First, how is the data being protected? Under what conditions, if any, will the schools release the data to third parties (like law enforcement)? In the case of Lockport’s system, is there any double checking before the system alerts police? How much data are they giving students’ parents, and under what circumstances? These concerns are particularly pressing when they involve minors, who may not yet have the sophistication or knowledge to truly consent to sharing their data.
In addition to privacy concerns regarding the Lockport system, I am also concerned by the use of the technology to alert police, given that we know facial recognition technology is far from perfect, especially with regard to minorities. How reliable is the particular system Lockport wants to use? Given the tensions between law enforcement and minority communities in the US right now, I can imagine the flaws in facial recognition technology leading to increased hostility, at the very least. This particularly concerns me when we are dealing with minors, especially in a school setting; while security is important, students also deserve to be able to learn without having the police show up to interrogate them due to a misidentification by technology.
Privacy issues are especially important with regard to library materials. I have written before regarding whether or not better collection development warrants collecting more patron data. Librarians have a professional responsibility to protect patron privacy, and the more data we collect – especially to share with third parties – the harder that is.
On the other hand, as I mentioned in my discussion of OCLC’s Wise, it’s becoming increasingly difficult to hold a hard line against personalization and collection of information. In a piece on algorithms and democracy, Aniek van den Brandt describes an “algorithmic culture,” which I think is a fantastic phrase. It originated with Ted Striphas, whom she quotes as saying “what one sees in Amazon, and in its kin Google, Facebook, Twitter, Netflix and many others, is the enfolding of human thought, conduct, organization and expression into the logic of big data and large-scale computation, a move that alters how the category culture has long been practiced, experienced and understood.”
Van den Brandt summarizes this as the phenomenon by which “human beings began delegating the work of culture to computational products more and more. The sorting, classifying and hierarchizing of people and things is no longer done by ourselves, but by an algorithm,” which she argues in turn affects democracy. We see this more and more as big tech companies increasingly turn to complex algorithms that filter the information we see. Google’s filter “bubbles” and Amazon’s surprisingly complex star rating system are just two examples.
We are increasingly surrendering our own privacy. There’s always a benefit (or an alleged one, anyway); better health, better security, more control over what our kids are doing. But it’s easy to lose sight of the potential costs. Any time your data is collected, there is potential for it to be shared or stolen. When law enforcement is involved, there is the additional risk of stigmatizing – or worse, punishing – someone who is innocent.
From a library perspective, I also fear the chilling effect on the 1st Amendment. Are users less likely to use our materials if their information – especially their face – is tied to them? In the broader educational context, what message does it send to our students if they cannot even be trusted to pick out their own lunches after a certain age? And how can we expect them to transition to the real adult world after that level of monitoring? And, by habituating them to that level of control, do we desensitize them to it, opening the world to greater state control in the future?
What role can libraries play? Well, we can continue to advocate for privacy protections for our patrons and for society in general. We can push back when our parent institutions, particularly schools, seek changes that would require more collection of patron data. And we can educate our patrons. We can make sure patrons are aware of the risks of sharing their data, and the downsides of all the convenience created by artificial intelligence and other new technologies that harvest our personal data. Above all, we can teach them to think critically about how their data is being collected and used. We also need to keep abreast of technological changes and consider how they may impact library technology and our patrons.
The world continues to change quickly, and I would argue that librarians are uniquely situated to consider the implications of technology changes on our users and society. I think we have an ethical responsibility as keepers of our patrons’ data and educators of the next generation of digital citizens to make this a priority.
Alba, D. (2019). The first public schools in the US will start using facial recognition next week. BuzzFeed News. Retrieved June 3, 2019, from https://www.buzzfeednews.com/article/daveyalba/lockport-schools-facial-recognition-pilot-aegis
van den Brandt, A. (2019). The algorithmic public sphere and democracy. Diggit Magazine. Retrieved May 28, 2019, from https://www.diggitmagazine.com/papers/algorithmic-public-sphere
Conger, K. et al. (2019). San Francisco bans facial recognition technology. The New York Times. Retrieved May 28, 2019, from https://www.nytimes.com/2019/05/14/us/facial-recognition-ban-san-francisco.html
Deutsche Welle. (2019). Total surveillance in high school. DW.com. Retrieved May 28, 2019, from https://www.dw.com/en/total-surveillance-in-highschool/av-48589592
Topol, E. (2019). The A.I. diet. The New York Times. Retrieved May 28, 2019, from https://www.nytimes.com/2019/03/02/opinion/sunday/diet-artificial-intelligence-diabetes.html
Lisa Hoover is a Public Services Librarian at Clarkson University and an Adjunct Professor in criminal justice at SUNY Canton. In addition to her MLS, Lisa holds a JD and an MA in political science. She began her career as an editor and then manager for a local news organization, adjunct teaching in her “spare time.” She teaches courses in criminal procedure, criminal law and constitutional law. She is passionate about 1st Amendment issues. She recently began her career as a librarian, starting at Clarkson University in June 2017 teaching information literacy sessions and offering reference services. Lisa and her husband Lee live in Norwood, New York with their cats Hercules, Pandora and Nyx and pug-mix Alexstrasza (Alex). Find her on Twitter @LisaHoover01.