AI in academic libraries: the future of higher education?

2023-11-27

Artificial intelligence (AI) has been a hot topic for quite some time, but until relatively recently, it still had the patina of science fiction in the minds of many. That was before the advent of ChatGPT and AI-powered image generators turned machine learning and large language models into household terms; in 2023, all it takes to witness AI technologies in action firsthand is a web browser.

In previous blog posts we have looked at various ways academic libraries use AI to improve their services, from collection management to content indexing. This time around we'll explore the impact of these emerging technologies on academic librarianship, and how higher-education institutions are preparing the next generation of library professionals and educators to navigate the practical applications and ethical implications of AI tools.

Artificial intelligence has entered the chat

Depending on how you feel about the use of artificial intelligence in academic settings, you have OpenAI to either thank or blame for pushing the technology into the mainstream. Incredible as it may seem, it was only a year ago (on November 30, 2022, to be precise) that the US-based AI research organization launched ChatGPT, its large language model–based chatbot.

We'll assume you know the basics of how ChatGPT works, from an end-user perspective; the simplest summary is that it generates text based on user prompts. It's a bit of an understatement to say that ChatGPT was a success. By January 2023, it had become what was then the fastest-growing consumer software application in history, and just five weeks after the chatbot's launch, the Wall Street Journal reported that OpenAI was "in talks to sell existing shares in a tender offer that would value the company at around $29 billion".
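
If you're curious what "generating text based on user prompts" looks like behind the scenes, here is a minimal sketch of calling a large language model programmatically. It assumes OpenAI's Python SDK and an API key stored in the OPENAI_API_KEY environment variable; the model name and the example prompt are illustrative assumptions, not details drawn from this article.

# A minimal sketch of prompting a large language model for text generation.
# Assumes the OpenAI Python SDK is installed (pip install openai) and that an
# API key is available in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {
            "role": "user",
            "content": "Suggest three databases for a literature review on AI in academic libraries.",
        }
    ],
)

print(response.choices[0].message.content)  # the generated reply

Most students and librarians, of course, interact with ChatGPT through its web interface rather than through code, but the underlying exchange is the same: a prompt goes in, generated text comes out.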

Teachers and librarians had concerns

As the use of generative AI tools grew exponentially, so too did questions about their potential for misuse.

On this very blog we have looked at potential data-security risks posed by the use of ChatGPT in libraries, and explored the ways that AI can contribute to the spread of misinformation online.

Educators and administrators at every level, from K-12 teachers to academic librarians, expressed ethical concerns about the use of artificial intelligence by students, faculty members and researchers. ChatGPT, after all, soon proved to be too unreliable a source of information to play a significant role in student learning.

On the other hand, a growing number of voices within academic librarianship not only acknowledge that generative artificial intelligence is here to stay, but also see this as an opportunity rather than a crisis.

Training for academic librarians

A recent Inside Higher Ed article quotes R. David Lankes, the Virginia and Charles Bowden Professor of Librarianship at the University of Texas at Austin.

“This does change things, but in a very good way,” Lankes said of the emergence of AI technology. “Librarians, every decade or so, are getting good at dealing with an existential crisis of ‘Do we need librarians?’ But with this one they’ve been very open to embrace, discuss and analyze this.”

The article goes on to note that, after receiving funding from the federal Institute of Museum and Library Services in November 2022 (the same month that OpenAI launched ChatGPT, remember?), the School of Information at UT Austin (known informally as the iSchool) launched a pilot project to train graduate students to work with librarians on AI and data science.

Building AI literacy

Participants in the program go into high schools to teach students about artificial intelligence, and also assist librarians with research projects, including helping them use ChatGPT to get better results.

In a news post on the iSchool website, Lankes said:

One of the complaints I often hear from librarians and library science students is that technically-oriented faculty don’t understand or have experience in librarianship. In this project, rather than ‘skilling up’ library-oriented doctoral students, we will embed data-oriented students in library settings so they learn the context, values, and core strengths of librarianship. Given that libraries are one of the few accessible institutions for the general public to engage with new information technologies (in schools, in their communities, or in academia) with trained information professionals present to guide their use of these technologies, it is critical to ensure that future librarians receive the best education in AI and data science possible.

Moving toward more transparent AI

As we noted in a previous blog post, a few months back the Biden-Harris Administration in the US secured voluntary commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI to help move toward safe, secure and transparent development of AI technology.  

According to a White House factsheet issued in July, these commitments include the following:

  • The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.

  • The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.

  • The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.

Shaping AI's potential to transform education

Biden followed this up on October 30 by issuing an Executive Order that, among other things, "establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world".

As part of this Executive Order, Biden called on relevant decision-makers to "shape AI’s potential to transform education by creating resources to support educators deploying AI-enabled educational tools, such as personalized tutoring in schools."

Societal changes call for a policy response

This echoes a May 2023 report from the US Department of Education's Office of Educational Technology, which argued that "AI is moving fast and heralding societal changes that require a national policy response."

According to the report, policies are urgently needed to implement the following:

  1. leverage automation to advance learning outcomes while protecting human decision making and judgment;

  2. interrogate the underlying data quality in AI models to ensure fair and unbiased pattern recognition and decision making in educational applications, based on accurate information appropriate to the pedagogical situation;

  3. enable examination of how particular AI technologies, as part of larger edtech or educational systems, may increase or undermine equity for students; and

  4. take steps to safeguard and advance equity, including providing for human checks and balances and limiting any AI systems and tools that undermine equity.

AI policies are taking shape

Policies regarding AI are taking shape all over the world. At the beginning of November, 28 countries attending the AI Safety Summit in the UK, including the United States and China, together with the European Union, issued an agreement known as the Bletchley Declaration, calling for international co-operation to manage the challenges and risks of the new technology. Last week, Reuters reported that France, Germany and Italy had reached an agreement on how AI should be regulated.

When it comes to college and university libraries, it's up to individual institutions to set their own policies around the use of artificial intelligence.

As the above-cited Inside Higher Ed article points out, there are currently no blanket guidelines on AI from any governing library body.

University libraries can't afford to be left behind

The article quotes Leo Lo, president-elect of the Association of College and Research Libraries (a division of the American Library Association), who said his organization is looking to incorporate artificial intelligence into the “gold standard” of its Framework for Information Literacy for Higher Education.

Lo makes the case that, as AI evolves, librarians can't afford to be left behind:

With all the lawsuits out there with copyright, data privacy, it’s all things we [as librarians] care about, so it makes sense to be a bit more cautious. At the same time, we can’t wait until something is perfect to use it. Look at the internet: it’s not perfect, but we can use it in a way to help us. I feel the same with AI tools.

Find out how academic libraries all over the world use PressReader to provide students, faculty and other users with a world of information at their fingertips. Click here to learn more.
