Libraries can fight AI-generated misinformation with media literacy education

2023-08-07

The rapid advancement of artificial intelligence has been an unavoidable topic of conversation for the past several years, and for very good reasons.

The use — and potential misuse — of AI has implications for just about every sector, from tech and academia to the travel and tourism industry.

In fact, some have pegged AI as the future of the hospitality industry, with the technology enhancing smart hotel rooms and even improving the hotel employee experience.

The downside is that the same tools that make all of those wonderful things possible can also be employed by bad actors engaging in disinformation campaigns and spreading hate speech.

Fortunately, we can all learn ways to check the facts and counter disinformation — and libraries have a major role to play.

AI can't tell fact from fiction

You can't believe everything you read, and that is especially true of AI-generated content.

Sometimes this is because the content is being prompted or disseminated by malign actors intent on spreading misleading information that supports their political leanings. (We'll delve into that later on.) Most of the time, however, the blame lies with the AI model itself.

As Melissa Heikkilä put it in an MIT Technology Review article, large language models like GPT-3.5 and GPT-4 (which form the basis of OpenAI's ChatGPT chatbot) are incapable of discerning fact from fiction.

The magic — and danger — of these large language models lies in the illusion of correctness. The sentences they produce look right — they use the right kinds of words in the correct order. But the AI doesn’t know what any of it means. These models work by predicting the most likely next word in a sentence. They haven’t a clue whether something is correct or false, and they confidently present information as true even when it is not. 
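
To see why, it helps to look at a radically simplified version of the same idea. The Python sketch below (a toy illustration, not how GPT-4 actually works) builds a tiny "bigram" model that completes a prompt with whichever word most often followed the previous one in its training text. Notice that it has no concept of truth: if falsehoods appear in the training data, the model reproduces them just as fluently as facts.

    # Toy next-word predictor: it picks the statistically likeliest
    # continuation, with no notion of whether the result is true.
    from collections import Counter, defaultdict

    corpus = (
        "the moon orbits the earth . "
        "the moon is made of rock . "
        "the moon is made of cheese . "  # falsehoods in the data get learned too
    ).split()

    # Count which word follows which (a "bigram" model).
    followers = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        followers[prev][nxt] += 1

    def complete(prompt: str, steps: int = 6) -> str:
        words = prompt.split()
        for _ in range(steps):
            candidates = followers.get(words[-1])
            if not candidates:
                break
            # Greedy choice: always take the most likely next word.
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(complete("the moon is"))  # fluent output, truth not guaranteed

Real large language models replace the word counts with neural networks trained on enormous corpora, but the core mechanism is the same: predict the likeliest next token, not the truthful one.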

Safe, secure and transparent

Around the world, governments are waking up to the threat posed to human rights by the unchecked use of emerging technologies such as artificial intelligence.

In the US, for example, the Biden-Harris Administration has secured voluntary commitments from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI to help move toward safe, secure and transparent development of AI technology.  

According to a recent White House factsheet, these commitments include the following:

  • The companies commit to developing robust technical mechanisms to ensure that users know when content is AI generated, such as a watermarking system. This action enables creativity with AI to flourish but reduces the dangers of fraud and deception.

  • The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use. This report will cover both security risks and societal risks, such as the effects on fairness and bias.

  • The companies commit to prioritizing research on the societal risks that AI systems can pose, including on avoiding harmful bias and discrimination, and protecting privacy. The track record of AI shows the insidiousness and prevalence of these dangers, and the companies commit to rolling out AI that mitigates them.
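
What might the watermarking system mentioned in the first commitment look like under the hood? One approach proposed by researchers (sketched below in Python as a toy, not any company's actual scheme) has the generator quietly favor a pseudo-random "green list" of words derived from each preceding word; a checker can then test whether a suspect text uses green words far more often than the roughly 50% expected by chance.

    # Toy statistical text watermark check. Real research systems work on
    # model tokens rather than whole words; this is an illustration only.
    import hashlib

    def is_green(prev_word: str, word: str) -> bool:
        # Deterministically assign about half of all words to a "green list"
        # seeded by the preceding word.
        digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
        return digest[0] % 2 == 0

    def green_fraction(text: str) -> float:
        words = text.lower().split()
        hits = sum(is_green(p, w) for p, w in zip(words, words[1:]))
        return hits / max(len(words) - 1, 1)

    # Ordinary text should score near 0.5; text from a generator that
    # preferred green words will score well above it.
    print(green_fraction("the quick brown fox jumps over the lazy dog"))

In practice, watermarks like this can weaken when text is heavily paraphrased or translated, so they are one signal among several rather than a guarantee.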

A potential threat to democracy

In Ireland, meanwhile, Housing Minister Darragh O’Brien, who is also responsible for electoral reform, wrote to the country's newly established Electoral Commission to express his concerns about the impact that AI could have on the spread of online misinformation or "fake news" during election campaigns.

According to an article in the Independent, O'Brien said:

In the hands of bad faith political parties or nefarious activists AI could be weaponized against the pillars of our democracy. In addition to that AI exponentially expands the capacity of malicious state or non-State actors in directly attacking our democratic processes. We must be on guard against these new challenges.

Noting that one of the key functions of the commission is to tackle disinformation and educate the public about the proliferation of fake news and conspiracy theories, O'Brien requested that the Electoral Commission draft a work plan explaining the potential dangers of unregulated AI in order to inform citizens about the threat of “democracy by algorithm diktat”.

“Research and education of citizens is critical to standing up to the twisting and growing threat of AI,” the minister said.

Librarians have an important role to play

At PressReader, we are strong believers in the notion that librarians have an important role to play in fighting for media literacy and combating disinformation.

Wherever we get our info — be it the local paper, TV, online content or social media — the news we consume can shape our beliefs, attitudes and perceptions. A democratic society cannot function without a population that can discern which sources of information are truthful, accurate and unbiased.

As long-time journalist Alan Miller, founder of the News Literacy Project, once astutely noted, “We’ve lost any sense of a common narrative, of a shared reality. We not only can’t agree on what the facts are, we can’t even agree on what a fact is.”

The good news is that libraries are ideally positioned to support media literacy in an age of misinformation by equipping patrons with fact-checking skills and media know-how through workshops and other resources. The first challenge, however, is being able to tell the difference between AI-generated content and that created by a human being.

AI as a fact-checking tool?

There may come a day when the best identifier of AI-generated text is AI itself.

On the Fagen Wassani Technologies blog, Anna Singh argues that AI-powered fact-checking tools have the potential to "quickly analyze vast amounts of data, identify patterns, and determine the veracity of claims made in news articles, social media posts, and other forms of content."

The key word is potential. Singh acknowledges that the current generation of AI tools has its limitations:

AI algorithms are only as good as the data they are trained on, and there is a risk that biases in the training data can be perpetuated by the AI system. Furthermore, AI-powered fact-checking tools may struggle to understand the nuances and complexities of human language, particularly when it comes to sarcasm, humor, or cultural references.

To overcome these challenges, Singh says, it is essential that AI fact-checkers are developed and refined in collaboration with their human counterparts.

Even the reigning champion of generative AI has proven fallible. In January of this year, OpenAI announced with great fanfare that it had trained a "classifier" that would be able to distinguish between text written by a human and text written by AIs from a variety of providers (and not just its own ChatGPT).

Six months later, OpenAI admitted defeat in an updated blog post:

As of July 20, 2023, the AI classifier is no longer available due to its low rate of accuracy. We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.

An inquisitive attitude can counter disinformation

Until fully automated fact-checking tools can keep up with the rapid pace of development of generative AI bots like ChatGPT, human readers will need to develop their critical thinking skills. One need not have deep technical expertise in AI techniques or computational linguistics to tell the difference between "real" and "fake" text — and to identify false information contained in both.

Libraries should consider strengthening their media-literacy and digital-literacy offerings by providing patrons with information on distinguishing AI copy from genuine writing.

There are a number of online tools that patrons can use to detect AI-generated text, including Copyleaks, Sapling and Winston. As each generation of large language models grows more sophisticated, however, the bots get better at fooling these apps.
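
Many of these detectors build on a simple statistical heuristic: machine-generated text tends to be more predictable, word by word, than human writing. The Python sketch below illustrates the general idea (it is not how Copyleaks, Sapling or Winston actually work) by scoring a passage's "perplexity" under the small open-source GPT-2 model; lower scores mean the text was easier for the model to predict, which can hint at machine authorship.

    # Perplexity-based heuristic for spotting machine-like text.
    # Requires: pip install torch transformers
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            # Passing the input ids as labels makes the model return the
            # average cross-entropy loss over the sequence.
            loss = model(enc.input_ids, labels=enc.input_ids).loss
        return torch.exp(loss).item()

    # Lower perplexity suggests more predictable, possibly machine-written text.
    print(perplexity("The library offers media literacy workshops every month."))

Note that there is no reliable threshold: polished human prose can score low, and newer models produce less predictable text, which is exactly why these apps keep losing ground.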

In an article for the BBC Future Now blog, freelance journalist Alex O'Brien notes the following:

Here is the real challenge for humans as AI-produced writing spreads: we probably cannot rely on tech to spot it. A skeptical, inquisitive attitude toward information, which routinely stress-tests its veracity, is therefore important.... The war on disinformation has already shown us that automated tools alone do not suffice, and we need humans in the loop. 

O'Brien also offers a few tips on bot-spotting, including:

  • Verification: "Can you verify and check the sources? Can you check the evidence — both written and visual?" O'Brien suggests cross-checking and looking for supporting material from other reputable sources.

  • Examine the text: Take a close look at spelling, grammar and punctuation. O'Brien writes: "If the spelling and grammar is not appropriate for the publication or the author writing it, ask: why?" If the copy quotes people or institutions that do not seem to exist, that's a dead giveaway, as are outdated references. As O'Brien notes, AI is often still limited in terms of what information it can access, and it may not be up-to-date when it comes to current news.

  • Check the tone: Often, AI-generated text simply doesn't read as if a person wrote it. Giveaways might include stilted linguistic patterns or abrupt changes in tone or voice.

The website Real or Fake Text is a fun way to test your own ability to tell human writing from that created by AI models.

Libraries can provide trusted media content

Critical thinking and media literacy skills are essential in a functioning, democratic society. That’s because they support strong institutions, enable societies to hold those in power accountable, and help to reduce inequalities.

In addition to helping patrons differentiate between human writing and text generated by artificial intelligence, librarians can also provide free access to trusted journalism through platforms like PressReader, which give readers digital editions of newspapers and magazines.

It's important for readers to engage with ideas and info from across the political spectrum, and it's equally crucial for them to be able to find the truth in a world rife with misinformation.

By introducing them to resources like PressReader, librarians can help counter disinformation by opening patrons’ eyes to the wide range of thoughts, ideas and perspectives to be found in genuine journalistic content.  

PressReader provides searchable, up-to-date editorial content from around the globe. Click here to learn how we can help serve the needs of your local communities.
