Evaluating a source's credibility and suitability is typically an invisible process. To make this process visible, I-Know has adapted a series of Thinking Routines into editable lessons and activities. These Thinking Routines were developed by Project Zero, a research center at the Harvard Graduate School of Education, and have been adapted to meet the needs of SLO 2: Evaluate. They help build students' skills and confidence in evaluating the credibility and suitability of resources for their information need, and can be used across disciplines. They are designed as scaffolded practice activities, not a summative assessment of this learning objective.
The Red Light, Yellow Light Thinking Routine was developed by Project Zero, a research center at the Harvard Graduate School of Education. This activity provides students with practice finding generalizations, bold claims, and gaps in evidence within a resource or argument, and practice pausing to ask questions when those moments arise.
Applications: Students can use this activity to evaluate many different kinds of resources in various disciplines. Project Zero suggests using this routine to evaluate news resources, political speeches, math proofs that might have weaknesses, and popular science resources.
The Claim, Support, Question Thinking Routine was developed by Project Zero, a research center at the Harvard Graduate School of Education. This activity provides students with practice identifying claims, examining claims with evidence, and asking questions to find gaps in evidence.
Applications: Students can use this activity to evaluate many different kinds of resources in various disciplines. Examples include a piece of text, a poem, an artwork, a speech, an advertisement, or a social media post.
This activity allows students to critically evaluate AI-generated research responses within Canvas, without needing direct access to AI tools. It uses Stimulus Questions in a Canvas New Quiz, which present AI-generated responses alongside related quiz questions.
The quiz content and example questions are available below as a PDF export from Canvas.
To use this activity in your course, contact Eric Cosio to copy the quiz into your Canvas course. It can be used as-is or modified as needed.
This assignment can be used alone or as a follow-up to the Verifying Sources Gathered with AI Research Tools Quiz, providing hands-on practice comparing AI-generated research results with library search tools. The full text of the example search prompts used in MS CoPilot Chat and Perplexity AI for the Verifying Sources quiz is provided for comparison.
Students will:
Students can record their answers directly on the worksheet and submit it to a Canvas Assignment.
This assignment can be used alone or as a follow-up to the Verifying Sources Gathered with AI Research Tools Quiz. The activity helps students compare results from an AI research tool versus a conventional search tool (Quick Search, Google Scholar, or another library database), using a research question of their choice. For the AI component, they can choose among ChatGPT, MS CoPilot Chat, and Perplexity AI; as of March 2025, the free versions of all three can perform web searches and retrieve external sources in response to a query.
The activity has three parts:
Part 1: Prompt the AI tool with an open-ended AI research query
Part 2: Create a structured AI prompt that explicitly requests scholarly and peer-reviewed sources
Part 3: Conduct a search on their topic using a conventional search tool (Quick Search or Google Scholar) with relevant keywords
For each part, they will evaluate how and whether the criteria of their queries were met (date range, source type, scholarly status, full-text availability, etc.), identify at least one relevant source, and evaluate its suitability.
The assignment ends with a short reflection essay on their experience using both AI and conventional searches for their research question.
If you would like help creating or adapting activities that reflect the I-Know Student Learning Objectives (SLOs) for your own course, contact Eric Cosio at eric.cosio@tamucc.edu.
The SIFT Method was created by Mike Caulfield as a way for students to look beyond the webpage to determine whether the resource in front of them is credible. SIFT stands for:
Stop
Investigate the source
Find better coverage
Trace claims, quotes, and media to the original context
Professor Chimene Burnett from TAMU-CC has created a lesson for students to apply the SIFT method in class.
The Civic Online Reasoning (COR) Curriculum was developed by the Digital Inquiry Group as a free resource for K-12 teachers to teach evaluation skills to their students. While the curriculum is aimed at younger students, its activity ideas and concepts may offer some inspiration. To access lessons from this curriculum, you need to create a free account.