By: Kate Lechtenberg
Every spring, I look forward to the day when the Office for Intellectual Freedom releases its annual “Top 10 Most Challenged Books.” What questions, issues, and topics sparked conversations for communities, schools, and the nation? Which books became the most recent proxies for our national debates, concerns, and preoccupations? As in every year, the Top 10 Most Challenged Books of 2017 brings my questions into stark relief. Suicide, sex and sexuality, race relations, gender identity, religion, and politics are at the center of this year’s challenges, making it clear once again that these books often represent the unresolved tensions and ongoing debates that plague/motivate/inspire/challenge us as readers, citizens, parents, educators, and humans.
So how does this list, this snapshot of our literary and civic psyche, come about? Recently, I sat down to talk with Kristin Pekoll, Assistant Director of the Office for Intellectual Freedom to hear more about how the OIF compiles the list and the questions that OIF staff wrestle with along the way.
The nitty gritty: Where do the numbers come from?
Kate: There’s a little information about how you gather challenge reports on the Top 10 “Resources and Graphics” page, but could you tell me a little more about your process of collecting data on book challenges?
Kristin: Every time a challenge is reported through the online form, through a phone call or an email, or if I see it on social media or in the press – anything even hinting at a challenge – we will capture as many details as possible and put it in our LexisNexis database. Even if it’s not something that we’ve been asked for support on, we put it on the list. Usually once a month, we go through those entries, and as an office we decide whether it’s within the scope of our definition of a challenge: Is it in the U.S.? Did it happen in a school library or curriculum? Did it happen in a public or academic library? Did the issue involve an attempt to remove or restrict access to a library service? We generally don’t track commercial decisions or selection decisions in libraries, although self-censorship is an issue.
So the first column in the spreadsheet is whether it falls within that scope definition. The second column is how many items the challenge impacts. So if it’s a challenge to a reading list that has 30 different items, we record 30. If it’s a challenge to a program or a display, and not a specific resource, we record 0. That’s how we get those two numbers: when we say on the infographic that there were 354 challenges tracked in 2017, that includes challenges to social media, programming, and databases. But when it says 491 materials were challenged or censored, that reflects the physical materials.
Kate: And does that 491 number include duplicates? I mean, if a given title is challenged ten times in ten different places, is that one material or ten?
Kristin: That’s ten items. And we thought about that many, many times. The idea is that it’s significant because we’re not just interested in the book or title itself – it’s about how many different items were challenged in different places.
Preparing the list: “A labor of love”
Kate: So tell me about the choice to report the list as a “Top 10” list. Do you see the numbers as important, or is the format more of a hook?
Kristin: Right, it’s kind of a tradition, and it’s how we’ve always promoted and talked about banned books. I was looking back at some of the older Censorship Datelines, and there have been some amazing cases of censorship that never get talked about because they were not on the Top 10. You’d have to know to research a certain book if it wasn’t on the Top 10, so this format lets us talk about different books each year. Sometimes when there’s a tie, it’s an office call which books get included on the list: do we want to feature a new book, one that hasn’t been on the list in the past?
Kate: So maybe it’s more accurate to call the Top 10 list a representation of the state of book challenges, rather than a strict statistical tally?
Kristin: Yes, definitely. It’s a labor of love. My background is in libraries. I’ve been here almost four years, but I’m not a statistician. I’m pretty thorough and I try to be as comprehensive as possible, and I look at the data again and again and again. When we review it as a staff, we look at it a thousand times. But the data we report is not about professional statistical reporting. It really is a snapshot of what comes in that we try to report out. We do the best that we can with the data that we have to make it both accurate and interesting, and relatable to the average person, not just librarians.
For example, on the question about why materials are challenged, we used to have a huge data set that doesn’t make sense anymore, now that we’ve changed the way we collect the information. So now we report reasons for challenging as a Wordle that’s more of a visual representation of the reasons reported, not a direct count of reasons.
Reporting numbers of challenges: Culture and confidentiality
Kate: I notice that you don’t report how many times each book on the Top 10 was challenged. Can you talk about that decision?
Kristin: The reason is twofold. First, it’s a cultural shift in the office, because in the past, the office had been very secretive about its numbers. That was something new when Jamie [LaRue, Director of the Office for Intellectual Freedom] started; he wanted more transparency about how we record and report the data.
But also, we have to consider whether the cases behind the list were reported confidentially. Some books in the 8, 9, and 10 spots might have been challenged only twice. So if one of those cases was reported in confidence and we publish the count, someone somewhere will know that it was reported. Whose confidentiality are we betraying if we report those numbers? Confidentiality also leads us to leave some books off the list if there’s a tie or if the numbers are low. We may not want to bring up a certain case in the press, because someone may ask for more information and we won’t betray a librarian’s trust.
Unreported challenges: Estimating and looking for patterns
Kate: I’m always really interested in the statistic that says “OIF estimates that 82-97% of challenges remain unreported.” How do you arrive at that estimate?
Kristin: There’s a blog post from last year that has a link to a webinar where we talk about unreported challenges in more detail. There were three different independent studies on this several years ago: one was from the University of Missouri School of Journalism, one through the Oregon Intellectual Freedom Clearinghouse, and one from the Texas ACLU. In those studies, they did their own separate requests for challenges, whether it was through FOIA [Freedom of Information Act requests] or their own methods of asking libraries about reporting. When we compared their findings with what gets reported to ALA, that’s where we got the 82-97% unreported estimate.
Kate: Do you have any sense of what types of challenges go unreported? Can you generalize, or do you feel comfortable hypothesizing about what kinds of “everyday” censorship might go unreported?
Kristin: I have a hard time repeating what I hear without research to back it up, because I see so many things that go unreported. But there are a few things that come to mind. There are still people who don’t even know that OIF exists. Those are the ones that pop up on Think Tank or LM_Net when someone talks about a challenge, someone responds, “Did you talk to OIF?” and the librarian says, “What? I didn’t know about that!” So that’s an awareness thing, even though we talk a lot about why we should report challenges.
Then, there are situations where librarians don’t know they should be reporting. I was at a conference recently talking to a branch manager, and the way their system is set up, any time he receives a challenge, he completes the paperwork and forwards it to the collection development department. He assumed that they were doing the reporting.
I also hear about situations where the librarian has personal values that conflict with the book, so they don’t mind that the book is being removed – that’s a big one. For example, recently Mike Pence’s daughter wrote a book about a day in the life of their bunny, and then a Hollywood exec donated a copy of John Oliver’s parody book, A Day in the Life of Marlon Bundo, to every elementary school in Indiana. Librarians are talking about it on their listservs, and some of them don’t want to add it, some are donating it to their public libraries’ collections, and some public libraries are putting it into adult nonfiction under “parody.”
Kate: Would that book meet most elementary school selection criteria? Or since it’s a parody, is the book actually more for adults?
Kristin: I’ve pulled reviews, and there aren’t a lot of professional reviews on it. I actually wish I could solicit professional reviews in cases like this. There’s one from SLJ and one from Kirkus. They say it’s obviously a parody… it’s got a stink bug that looks a lot like Mike Pence. But the reviews say it can stand alone as its own children’s picture book. It’s just like the penguins [in frequently challenged And Tango Makes Three]. It’s about two male bunnies who fall in love, so it definitely has a political point of view. But it’s an adorable children’s book with gorgeous images, and it really depends on the selection policy.
Kate: And it sounds like a lot of the conversation was less about the book and more about the political controversy around including it.
Kristin: Yes, so in our guidance for librarians we ask, “What does your selection policy say?” So yes, it’s common to hear about self-censorship or about librarians who disagree with a book on a personal basis, and those cases aren’t reported.
Retained, Removed, Restricted: Resolving challenges
Kate: Do you have any stats on the results of challenges? Whether a book or program was retained, removed, limited, etc.?
Kristin: Yes, that’s something we’re working on more. I did a report for Jamie at Midwinter, and I found that about 120 materials or programs were removed, cancelled, or restricted, so that was about one-third of all the challenges in 2017. In the past we’ve thought it was closer to 10%, so we were all a little surprised by that. This is complicated, because with the new focus on recording challenges beyond books, it can be hard to say if a display was taken down earlier than it normally would have been.
Kate: And I suppose that even if you know that one-third were removed, cancelled, or restricted, that doesn’t necessarily mean that the other two-thirds were retained?
Kristin: Right, because there’s a lot that we don’t know. The number of removals or restrictions could be higher. From what I understand, before I started they didn’t do as much follow-up. Now, I try to run a report about once a month and email people to find out how the case was resolved. Usually within a day or two, people will reply saying, “oh yeah, we retained that book,” and then I can update our database. We started using our new LexisNexis database at the end of 2016. If we looked at the data from ten years ago, we’d notice a strong increase in resolutions to the cases because our reporting system has improved.
Did They Follow the Policy?
Kate: Do you keep stats on whether the institution followed policies throughout a challenge process?
Kristin: I’ve added a new column in the spreadsheet, and if there’s a definitive answer, I do mark it. I would estimate that about 20% are starting to note whether they are following the policy when they report the challenge, and of those, I’d say about 50% do follow their own policies and 50% don’t. That might be a good question for me to ask when I email to follow up. We try to keep the initial challenge reporting form as brief as possible, because the longer the form, the more of a barrier it is to reporting. So maybe in follow-up we can gather more info on policies. Also in my follow-ups, I’m starting to ask whether there is a publicly available document that refers to the challenge – board minutes, media reports, etc. That gives us a way to gather more info.
Final thoughts: Reflections on diversity, hope, and cloning Kristin
Kate: What other stats or details would you like to collect if you could?
Kristin: In 2016 we noticed the issue of diversity in the books on the Top 10 list; nine out of ten books had diverse content. I would love to collect more info about the content of the books and their authors, including race, class, gender, and sexuality. On the other hand, I’m a little hesitant because we don’t want to suggest that diverse content causes challenges – it might just be correlation.
Kate: In the end, does compiling this data leave you hopeful or despairing? I guess it’s not just “data” to you–these are people you work with and talk to every day, right?
Kristin: Right… the data doesn’t do much for me either way. The numbers don’t surprise me. What upsets me more than anything is when I hear from librarians who are struggling. I got an email from the librarian in a town that recently went through a very public challenge, and she has resigned her position. I work a lot, and there are so many days where I feel like I’m never gonna do enough for those librarians who lose their jobs or who choose not to work in that hostile environment. There’s never going to be enough support for librarians. There are never enough ways that I can communicate what we can offer people. How can I get to every collection development policy? How many librarians are there in the United States? How am I going to reach them all? How am I going to reach all the administrators? I’m talking with Oregon now about doing a webinar that’s going to be specifically for school boards and administrators. And then I have those days when I see librarians online who support removing books that they don’t agree with, and I think, “Who are these librarians, and why did you become a librarian if you don’t believe in free speech?”
Kate: I feel like we need to clone you… to help get the word out.
Kristin: Maybe! On the plus side, there are so many people working for intellectual freedom. There’s an ALA Emerging Leaders group that is going to work on a project about protecting intellectual freedom in school libraries, particularly for LGBTQ materials. That’s hopeful too.
Kate: Thanks, Kristin, for sharing your process in this important “labor of love!” We appreciate everything you do, and I hope others will continue the conversation in the comments.
Kate Lechtenberg is a doctoral student in Language, Literacy, and Culture in the University of Iowa’s College of Education. After working in public schools for fourteen years as a high school English teacher and school librarian, her doctoral research now focuses on text selection, multicultural literature, educational standards, and equity initiatives. Kate teaches a young adult literature course in the College of Education and a school librarian course on print and digital collection management in the School of Library and Information Science. She is also a member of the AASL Standards Implementation Task Force. Find her on Twitter @katelechtenberg.