In April, Google quietly rolled out a policy expansion that allows U.S. citizens to request the removal of personal information from search results. Covered information includes phone numbers, email addresses, physical addresses, and handwritten signatures, as well as non-consensual explicit or intimate personal images, involuntary fake pornography, personal content on websites with exploitative removal practices, select personally identifiable information (PII) or doxxing content, images of minors, and irrelevant pornography appearing in search results for a personal name. When I first saw this news a few weeks ago, I was (and frankly, still am) surprised at the lack of response and news coverage. This is HUGE!
To be clear, this is the removal of certain information from search results, not the deletion of the information itself. I think this is always a good point to reiterate on this topic: as librarians, we can appreciate how we find ways around search obstacles to locate information. It is also worth pointing out that many of the major social media platforms, like Facebook, Twitter, and Instagram, have mechanisms in place to review and remove malicious content.
“Privacy and online safety go hand in hand. And when you’re using the internet, it’s important to have control over how your sensitive, personally identifiable information can be found.” Michelle Chang, Global Policy Lead for Search (Google), 2022
A similar framework has been in place in the European Union since 2014. Google’s Transparency Report tracks the number of delisting requests as well as approvals (the approval rate has hovered around 44-49% since I first took note of the report in 2016; the statistics are routinely updated) and also offers some tools to view requests by country. What is fascinating to me is the process these requests go through for removal. It has to be handled on a case-by-case basis, likely through an intensive manual review. Further, “Determining whether content is in the public interest is complex and may mean considering many diverse factors, including—but not limited to—whether the content relates to the requester’s professional life, a past crime, political office, position in public life, or whether the content is self-authored content, consists of government documents, or is journalistic in nature.” (EU Google Transparency Report, linked above) Reviewers look at factors like: a person’s role in public life; where the information comes from; how old the content is; the effect on Google’s users; truth or falsehood; and sensitive data. How this is interpreted by the individual reviewer is unknown, and I have always been interested in bias in decision-making in these types of scenarios.
The April announcement also states: “We’ll also evaluate if the content appears as part of the public record on the sites of government or official sources. In such cases, we won’t make removals.” I found this tremendously interesting as well, since the definition of a public record varies from state to state (a 1989 murder case in California has remained in my mind as an example of what led to changes there in releasing certain types of information), as do the exemptions that exist and who can make a request, adding more layers of complexity. Degrees of accessibility to public records vary widely between U.S. states, with some states making it easier to request and receive documents/information than others (more information on individual state freedom of information laws can be found here).
What I would like to see now is a U.S. Transparency Report from Google as these requests are reviewed and completed, as exists for the EU. I am curious whether our overall approval rates will be lower, as the general consensus in recent history has been that the U.S. favors the availability of information over the privacy rights of the individual (a 2020 Forbes article, “The Privacy Mindset of the EU vs. the US,” gave a good overview).
Virginia Dressler is the Digital Projects Librarian at Kent State University. Her specialty areas are project management and digitization, working primarily with the university’s unique collections. She holds a Master of Library and Information Science from Kent State University (2007), a Master of Arts in Art Gallery and Museum Studies from the University of Leeds (2003), and a certificate in advanced librarianship (digital libraries) from Kent State University (2014). Her research areas include privacy in digital collections and the Right to be Forgotten. She is the author of Framing Privacy in Digital Collections with Ethical Decision Making (Morgan & Claypool, 2018).