I-Know Faculty Guide: Activity Ideas for SLO 2: Evaluate

Practicing SLO 2: Adaptable Lesson Plans, Project Zero

Evaluating a source's credibility and suitability is typically an invisible process. To make this process visible, I-Know has adapted a series of Thinking Routines into editable lessons and activities. These Thinking Routines were developed by Project Zero, a research center at the Harvard Graduate School of Education, and have been adapted to meet the needs of SLO 2: Evaluate. These routines help build students' skills and confidence in evaluating the credibility and suitability of resources for their information need, and they can be used across disciplines. They are designed as scaffolded practice activities, not a summative assessment of this learning objective.

Red Light, Yellow Light

The Red Light, Yellow Light Thinking Routine was developed by Project Zero, a research center at the Harvard Graduate School of Education. This activity provides students with practice finding generalizations, bold claims, and gaps in evidence within a resource or argument, and practice pausing to ask questions when those moments arise. 

Applications: Students can use this activity to evaluate many different kinds of resources in various disciplines. Project Zero suggests using this routine to evaluate news resources, political speeches, math proofs that might have weaknesses, and popular science resources.

Claim, Support, Question

The Claim, Support, Question Thinking Routine was developed by Project Zero, a research center at the Harvard Graduate School of Education. This activity provides students with practice identifying claims, examining claims with evidence, and asking questions to find gaps in evidence.

Applications: Students can use this activity to evaluate many different kinds of resources in various disciplines. Examples include a piece of text, a poem, an artwork, a speech, an advertisement, or a social media post.

AI Activity Ideas

Using an AI Prompt as a Canvas Stimulus Quiz 

This activity allows students to critically evaluate AI-generated research responses within Canvas, without needing direct access to AI tools. It uses Stimulus Questions in a Canvas New Quiz, which present AI-generated responses alongside related quiz questions.

How It Works:

  • The quiz contains one or more Stimulus Questions, each displaying the full text of an AI-generated response.
  • This example includes responses from Perplexity AI and MS Copilot, chosen for their ability to provide linked sources. Other tools like ChatGPT (with web browsing), Google Gemini, and Liner AI could also be used.
  • AI response text is pasted directly into the Stimulus for accessibility (instead of screenshots). The AI tool used, prompt, and query date are included.
  • Hyperlinks to sources should remain active, though quiz questions in this example do not require clicking external links. If using clickable links, ensure they are still accessible.
  • Quiz questions follow each Stimulus, prompting students to evaluate:
    • Source format issues (e.g., web pages instead of peer-reviewed articles).
    • Date range mismatches (e.g., sources outside a specified time frame).
    • Misalignment between citations and AI-generated claims (e.g., sources that do not accurately support AI statements).
  • Example questions are designed so answers can be found within the Stimulus. If evaluating a source article itself, instructors can embed the article as a Canvas Course Document (as demonstrated in Question #3).

The quiz content and example questions are available below as a PDF export from Canvas.

To use this activity, contact Eric Cosio to have the quiz copied into your Canvas course. It can be used as-is or modified as needed.

Comparison: AI Research Results & Quick Search

This assignment can be used alone or as a follow-up to the Verifying Sources Gathered with AI Research Tools Quiz, providing hands-on practice comparing AI-generated research results with library search tools. The full text of the example search prompts used in MS Copilot Chat and Perplexity AI for the Verifying Sources quiz is provided for comparison.

Students will:

  • Search for sources using the Bell Library Quick Search or Google Scholar
  • Identify a relevant journal article and book, citing them in APA
  • Compare AI-generated responses to library search results
  • Reflect on search strategies and source reliability

Students can record their answers directly on the worksheet and submit it to a Canvas Assignment.

Searching and Evaluating Scholarly Sources: AI vs. Conventional Research Tools

This assignment can be used alone or as a follow-up to the Verifying Sources Gathered with AI Research Tools Quiz. The activity helps students compare results from an AI research tool with those from a conventional search tool (Quick Search, Google Scholar, or another library database), using a research question of their choice. For the AI component, they can choose among ChatGPT, MS Copilot Chat, and Perplexity AI; as of March 2025, the free versions of all three can perform web searches and retrieve external sources in response to a query.

How It Works

The activity has three parts: 

Part 1: Prompt the AI tool with an open-ended research query

Part 2: Create a structured AI prompt that explicitly requests scholarly and peer-reviewed sources

Part 3: Conduct a search on their topic using a conventional search tool (Quick Search or Google Scholar) with relevant keywords

For each part, students will evaluate whether and how the criteria of their query were met (date range, source type, scholarly status, full-text availability, etc.), identify at least one relevant source, and evaluate its suitability.

The assignment ends with a short reflection essay on the student's experience using both AI and conventional searches for their research question.

I-Know Consultation

If you would like help creating or adapting activities that reflect the I-Know Student Learning Objectives (SLOs) for your own course, contact Eric Cosio at eric.cosio@tamucc.edu.

The SIFT Method and Lesson Idea

The SIFT Method was created by Mike Caulfield as a way for students to look beyond the webpage to determine whether the resource in front of them is credible. SIFT stands for:

  • S: Stop
  • I: Investigate the Source
  • F: Find Better Coverage
  • T: Trace Claims, Quotes, and Media to the Original Context

Professor Chimene Burnett from TAMU-CC has created a lesson for students to apply the SIFT method in class.

Civic Online Reasoning

The Civic Online Reasoning (COR) Curriculum was developed by the Digital Inquiry Group as a free resource for K-12 teachers to teach evaluation skills to their students. While the curriculum targets younger students, its activity ideas and concepts may offer inspiration. To access lessons from this curriculum, you need to create a free account.