These online platforms rely on trained professionals who manually evaluate stories and claims.
Snopes - Launched in 1994, Snopes is one of the oldest and most trusted fact-checking websites. It covers a wide range of topics, from viral news stories to old urban legends.
FactCheck.org - Operated by the Annenberg Public Policy Center, FactCheck.org takes a closer look at the accuracy and truth behind political statements and advertisements in the U.S.
PolitiFact - PolitiFact's fact-checkers manually research statements made by politicians and rate them on the Truth-O-Meter, a scale that runs from True to “Pants on Fire”.
These tools use a combination of machine learning, natural language processing (NLP), and other algorithms to help pinpoint false information.
ClaimBuster - ClaimBuster scans large amounts of text, such as political speeches, to flag statements that need fact-checking (see the sketch after this list). It is still a work in progress and is aimed mainly at developers, journalists, and other professionals.
Full Fact - Full Fact uses AI to automatically identify false claims in real time. It is generally used to flag statements in speeches by prominent public figures.
Originality.AI Fact Checking Tool - Originality.AI’s fact-checking tool uses AI, machine learning, and NLP to fact-check the statements in your text; you can paste the text in or upload a document. As with all of Originality.AI’s tools, it is among the most robust and reliable automated fact-checkers currently available.
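For readers curious about what automated claim detection actually looks like, the sketch below shows how a developer might send a passage of text to a claim-scoring service such as ClaimBuster and print the sentences most worth fact-checking. The endpoint URL, API-key header, and response format shown here are assumptions for illustration only; consult the tool’s own API documentation before relying on it.

```python
# A minimal sketch of calling an automated claim-detection service to surface
# "check-worthy" sentences in a transcript. The endpoint URL, header name, and
# response fields below are ASSUMPTIONS for illustration; check the service's
# own API documentation for the real interface and to obtain an API key.

from urllib.parse import quote

import requests

API_URL = "https://idir.uta.edu/claimbuster/api/v2/score/text/"  # assumed endpoint
API_KEY = "your-api-key-here"                                    # assumed credential


def checkworthy_sentences(text: str, threshold: float = 0.5) -> list[tuple[float, str]]:
    """Return (score, sentence) pairs whose claim-worthiness meets the threshold."""
    response = requests.get(
        API_URL + quote(text),
        headers={"x-api-key": API_KEY},  # assumed header name
        timeout=30,
    )
    response.raise_for_status()
    results = response.json().get("results", [])  # assumed response shape
    flagged = [
        (item.get("score", 0.0), item.get("text", ""))
        for item in results
        if item.get("score", 0.0) >= threshold
    ]
    return sorted(flagged, reverse=True)


if __name__ == "__main__":
    speech = (
        "Unemployment fell to its lowest level in fifty years. "
        "We all want a better future for our children."
    )
    for score, sentence in checkworthy_sentences(speech):
        print(f"{score:.2f}  {sentence}")
```

Note that a tool like this only flags which statements deserve checking; verifying those statements still requires the lateral reading described below.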
If you cannot take AI-cited sources at face value, and neither you nor the AI's developers can determine where the information comes from, how are you going to assess the validity of what the AI is telling you? Here you should use the most important method of analysis available to you: lateral reading. Lateral reading means leaving the AI output and consulting other sources to evaluate what the AI has provided in response to your prompt. You can think of this as “tabbed reading”: moving laterally away from the AI information to sources in other tabs rather than proceeding “vertically” down the page of AI output alone.
Lateral reading can (and should) be applied to all online sources, but you will find fewer pieces of information to assess when working with AI. While you can typically evaluate an online source by searching for its publication, funding organization, author, or title, none of these details are available when assessing AI output. As a result, it is critical to read several sources outside the AI tool to determine whether credible, non-AI sources can confirm the information the tool returned.
With AI, instead of asking “who’s behind this information?” we have to ask “who can confirm this information?” In the video above, lateral reading is applied to an online source with an organization name, logo, URL, and authors whose identities and motivations can be researched and fact-checked through other sources. AI content has none of these identifiers; AI output is a composite of multiple unidentifiable sources. This means you must examine the factual claims in AI content and judge the validity of the claims themselves rather than the source of the claims.
Here's how to fact-check something you got from ChatGPT or a similar tool: identify the specific factual claims in the response, then read laterally to see whether credible, non-AI sources confirm or contradict them.
For an example of this in action, check out the videos below, which show these lateral reading strategies in practice.
The first video has information on fact-checking AI-generated text and links:
And the second video has advice on fact-checking AI-generated citations and scholarly sources:
But just checking specific claims isn't all we need to do. Click the "next" button below to learn about critical thinking beyond fact-checking.
Information taken from the University Libraries at the University of Maryland.