With so much of the world’s information available online, some ask “who needs librarians?” Users at the University of Minnesota Libraries know the answer is “everyone.”
Researchers from the Pew Internet & American Life Project report that, after email, the most popular internet activity across all age groups is the use of search engines. Even for the Millennial generation (defined as ages 18-33), a higher percentage of people are searching online than are participating in social networking, watching videos, buying products, and listening to music.
Our personal experience bears this statistic out—most of us search online for a wide variety of information dozens of times each day. With all this experience, one might assume we have become expert searchers. But are we expert researchers?
Certainly we have a great deal of success when searching for something specific—what librarians refer to as “known-item searching.” Using Google, we don’t even need to know exact terminology or understand Boolean search strategies. The Google autocomplete feature offers suggestions for what it thinks you might be looking for. Type “chemical symbol for ir” and you will see options to search for iron, iridium, iron oxide, or iron iii nitrate. Google also corrects your spelling: search for “Tolkein” and it will present results for “Tolkien” while also giving you the option to search on the word as entered, in case your misspelling was deliberate. But what if your information needs are more complex than the price of an airline ticket or the rules of chess? What if you have a question but aren’t sure where to start looking for the answer, or know there isn’t necessarily a single answer? This is the essence of research: the search process is where the learning happens; the journey is as important as the destination.
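Under the hood, autocomplete of this sort boils down to prefix matching against a ranked term list. A minimal sketch of the idea (the vocabulary below is invented for illustration; real systems also weight suggestions by query popularity):

```python
import bisect

# Hypothetical completion vocabulary, kept sorted so prefix ranges
# can be located with binary search.
TERMS = sorted([
    "iridium", "iron", "iron oxide", "iron iii nitrate", "iris",
])

def autocomplete(prefix, limit=4):
    """Return up to `limit` terms beginning with `prefix`."""
    lo = bisect.bisect_left(TERMS, prefix)
    # "\uffff" sorts after any ordinary character, so this marks the
    # end of the block of terms sharing the prefix.
    hi = bisect.bisect_right(TERMS, prefix + "\uffff")
    return TERMS[lo:hi][:limit]

print(autocomplete("ir"))
```

A production system would rank these candidates by how often they are searched rather than alphabetically, but the prefix lookup is the core of the feature.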
Long before Google, researchers and librarians worked closely together throughout this kind of research, navigating the various printed abstracts, indexes, directories, encyclopedias, and other reference sources traditionally available in libraries. John Butler, associate university librarian for information technology, describes this kind of librarian-mediated search and retrieval process from his earlier days as director of the University’s Science & Engineering Library. “A student or faculty member would start by looking up particular keywords in a multi-volume, subject-specific reference work like Engineering Index, Science Citation Index, or Chemical Abstracts. Bibliographic details for the relevant articles would be transcribed, and then the library’s catalog (print or, later, online) would need to be consulted to determine the availability of the required journals. Available volumes would be retrieved—either by the researcher or a librarian if the stacks were closed to patrons or if the volumes were held by another library. Once in the hands of the researcher, the articles would be consulted, as would the list of references and journal table of contents, which could lead to new searches and requests for journal volumes.”
Eventually, publishers of these reference sources began making them available online—first in CD-ROM format and then on the internet. To be marketable to libraries and researchers, developers of these information retrieval systems were interested in designing effective interfaces that reflected the actual research behaviors of users. They relied on the work of people like Marcia J. Bates, professor in UCLA’s Department of Information Studies, who in 1989 published “The Design of Browsing and Berrypicking Techniques for the Online Search Interface.”
Dr. Bates describes an information seeking behavior more complex than the “query in/answer out” model common at the time. She outlines a process that reflected the librarian-mediated searching described above, but is also familiar to many of us searching online today: you have an idea in your head of what you need, you try searching on a word or phrase you think describes what you need, you find something and analyze it a bit, and if it’s not on target you think, “the way they’re using this phrase here makes me realize I’m looking for something a little different.” So you learn more about what it is you really want by looking at your search results, you refine your search criteria, and then find some useful information while uncovering new leads to follow up on. Bates writes, “In other words, the query is satisfied not by a single final retrieved set, but by a series of selections of individual references and bits of information at each stage of the ever-modifying search. A bit-at-a-time retrieval of this sort is called berrypicking.”
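Bates’s berrypicking model can be caricatured in a few lines of code: each round of results both yields items worth keeping and suggests how to reformulate the next query. Everything here—the toy corpus, the `search` and `refine` functions—is invented purely to illustrate the loop:

```python
def berrypick(query, search, refine, max_rounds=5):
    """Accumulate results across a series of ever-modifying searches."""
    picked = []
    for _ in range(max_rounds):
        results = search(query)
        picked.extend(r for r in results if r not in picked)
        new_query = refine(query, results)  # learn from what was found
        if new_query == query:              # no new leads: stop
            break
        query = new_query
    return picked

# Toy illustration: each query surfaces a lead that reshapes the search.
CORPUS = {
    "online search": ["query reformulation"],
    "query reformulation": ["berrypicking model"],
    "berrypicking model": [],
}

def search(q):
    return CORPUS.get(q, [])

def refine(q, results):
    return results[0] if results else q

print(berrypick("online search", search, refine))
```

The point is the control flow, not the toy data: the query is satisfied by the accumulated `picked` list, not by any single result set—exactly the behavior Bates describes.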
So if we’ve had a clear understanding of the iterative and serendipitous nature of how people search for information since at least 1989, those information retrieval systems must be perfected by now, right? As powerful as Google may seem, serious researchers know we still have a long way to go.
Cody Hanson is one of many librarians at the University of Minnesota working to support users throughout their research processes. As the Libraries’ web architect and user experience analyst, he is at the center of the efforts to meet the needs of users accustomed to search success in Google and Amazon but whose skills don’t always translate into success in the library search environment. Hanson is co-chair of the Libraries’ “Discoverability” group, originally charged to recommend ways to make relevant resources easier to find, especially within the user’s workflow.
Key among the group’s findings is that users “expect discovery and delivery to coincide.” In other words, when users conduct a search, they expect to find not just a reference to what they’re searching for but a direct link to the complete resource. However, as vast as Google’s search universe may be, users run into problems accessing the books, articles, and other resources available through libraries. If Google finds these resources at all, users are often frustrated by the barriers between an article’s citation and the full text as licensed by their institution.
Hanson is quick to dismiss any criticism of users who search these commercial services first. “We all use them every day and they’re amazing. People are not using Google because they’re lazy, or because it’s ubiquitous, they’re using Google because it works. It works for me, many times a day, just like it does for you, I’m sure.” Hanson continues, “There’s nothing wrong with Google searching, but the good stuff is simply not available to Google.”
Many users also start with Amazon, which Hanson says may be a better example of the kind of searching and browsing that people expect in a library environment. “Our staff frequently get calls from users looking for particular books they found on Amazon,” he reports. “The way Amazon supports library-like activities is great. I get to a product page by searching for a particular item, and then am able to back out to the category and browse other related items. That kind of sorting and sifting and ranking would be incredibly useful in academic research, too.”
The Libraries are therefore challenged to develop tools that let users succeed with the methods they learn from the likes of Google and Amazon while also accounting for the complexities of academic libraries, namely the diversity of publisher systems, the varying needs of novice and expert researchers, and the differences among disciplines. For example, how do you develop a system that is a good fit for incoming freshmen and also serves the sophisticated research needs of faculty and graduate students? Can the same system offer both nuance for humanists and precision for scientists? How do we serve research teams working across disciplines and multiple universities?
The Discoverability group has a vision for achieving these goals. But beyond simple information provision, Hanson believes the Libraries must help with information filtering. “Instead of serving up scads of information indiscriminately, we can design the tools to let the researcher hold back the irrelevant.”
One example of this is a new tool rolled out across campus this past summer, the Library Course Pages. Dynamically created for all courses at the University, each Library Course Page (LCP) brings together resources tailored to a specific course. Content is selected and organized by librarians in collaboration with faculty and instructors, with some pages including general resources useful for a discipline or major and others created in close consultation with an individual faculty member.
Andy Howe, instructor in the U’s College of Education and Human Development, worked with librarian Laurel Haycock on an LCP for his course this spring. Howe reports that his collaboration with Haycock led to an “improved course syllabus, course resources, and methodology.” Haycock has also reached beyond the LCP to connect with students within the course’s Facebook group. “All of the students say they love that there is a librarian helping them. Laurel puts up tutorials, answers questions, gives great insight on library tools, and more,” says Howe.
The Human Touch
While Hanson and his colleagues recommend continued improvements to the usability and effectiveness of search systems, they also note the continued value of human interaction within these systems. Those of us who rely on suggestions from friends and “others like us” on Amazon, Netflix, and Facebook would agree with the group’s finding that “discovery increasingly happens through recommending.” This makes it all the more important that the Libraries find ways to push relevant content to users and allow users to share content with others.
The Libraries have been exploring two different strategies for generating recommendations. The first shows students the journals and databases that are most frequently used by others in their degree program. The second uses a system similar to Amazon’s “Customers Who Bought This Item Also Bought” recommendations to suggest articles frequently read by researchers who viewed the current article. In both cases, great care is taken to ensure that users’ privacy is protected, and individual users’ reading habits are not recorded.
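One privacy-preserving way to build such “also viewed” suggestions—an illustrative sketch, not the Libraries’ actual implementation—is to store only aggregate article-pair counts from anonymized sessions, so no individual’s reading history is ever retained:

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(sessions):
    """Count how often each pair of articles appears in the same session.

    Sessions are plain lists of article IDs with no user identifiers;
    once counted, the sessions themselves can be discarded.
    """
    pair_counts = Counter()
    for articles in sessions:
        for a, b in combinations(sorted(set(articles)), 2):
            pair_counts[(a, b)] += 1
    return pair_counts

def recommend(article, pair_counts, limit=3):
    """Rank other articles by how often they co-occur with `article`."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a == article:
            scores[b] += n
        elif b == article:
            scores[a] += n
    return [item for item, _ in scores.most_common(limit)]

# Hypothetical anonymized sessions (article IDs only, no user IDs).
sessions = [["A1", "A2"], ["A1", "A2", "A3"], ["A1", "A3"], ["A2", "A3"]]
counts = build_cooccurrence(sessions)
print(recommend("A1", counts))
```

Because only the pair tallies survive, the system can answer “readers of this article also read…” without ever being able to say what any one person read.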
Although systems like these are valuable for harnessing the power of the crowd, one-on-one consultation is still in demand. Statistics gathered by reference staff indicate that, while the number of reference transactions lasting fewer than 5 minutes has declined over the past three years, those lasting anywhere from 6 minutes to over 30 minutes have remained steady. In other words, fewer users need reference staff to answer the quick-and-easy questions (no doubt many are answering these questions for themselves online), but librarians are still in demand for the meatier questions. It’s clear that, while Google may be king among searchers, behind every great researcher is a great library.