Strategies for Pedagogical Evaluation

  1. Define Learning Objectives: Determine how the AI tool can complement or enhance the achievement of course learning objectives.
  2. Trial and Pilot Testing: Conduct a trial or pilot test of the AI tool with a small group of students or colleagues. Gather feedback on its effectiveness and usability.
  3. Learning Analytics: Assess the tool’s ability to provide valuable learning analytics and insights for instructors and students. Analytics can help identify areas for improvement and measure learning outcomes (a minimal analytics sketch follows this list).
  4. Feedback and Assessment: Collect feedback from students who used the AI tool and assess its impact on their learning experience and outcomes.
  5. Integration with Curriculum: Ensure the AI tool can be integrated seamlessly into the course curriculum without disrupting the overall flow of the course.
  6. Comparison with Traditional Methods: Compare the AI tool’s effectiveness with traditional teaching methods to gauge its added value (see the comparison sketch after this list).
  7. Support for Multimodal Learning: Verify that the AI tool supports multimodal learning, allowing students to engage with content in various formats such as text, audio, video, and interactive elements.
  8. Long-Term Viability: Assess the long-term viability of the AI tool, considering its potential for future updates and scalability.
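
For the learning-analytics strategy (item 3), the following is a minimal sketch of how an instructor might summarize a tool’s exported data. It assumes the AI tool can export per-student usage logs as a CSV; the file name and the columns student_id, minutes_active, and quiz_score are hypothetical placeholders, not any specific tool’s schema.

```python
import pandas as pd

# Load the tool's exported analytics (file name and columns are assumptions).
df = pd.read_csv("ai_tool_analytics.csv")

# Summarize engagement and performance per student.
summary = df.groupby("student_id").agg(
    total_minutes=("minutes_active", "sum"),
    mean_quiz_score=("quiz_score", "mean"),
)

# Flag students in the bottom quartile of engagement so instructors
# can follow up before outcomes suffer.
cutoff = summary["total_minutes"].quantile(0.25)
low_engagement = summary[summary["total_minutes"] < cutoff]

print(summary.describe())
print(low_engagement)
```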
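
For the comparison strategy (item 6), one simple quantitative check is a two-sample test on final assessment scores from a section that used the AI tool versus a traditionally taught section. The scores below are placeholder values for illustration, not real data.

```python
from scipy import stats

# Placeholder scores from a hypothetical pilot (not real data).
ai_section = [78, 85, 92, 74, 88, 81, 95, 79]
traditional_section = [72, 80, 85, 70, 83, 77, 88, 75]

# Welch's two-sample t-test: does the mean score differ between sections?
t_stat, p_value = stats.ttest_ind(ai_section, traditional_section, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With a pilot this small, the p-value should be paired with an effect size and the qualitative feedback gathered in strategies 2 and 4 rather than read on its own.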

License

SUNY FACT2 Guide to Optimizing AI in Higher Education, copyright © Faculty Advisory Council On Teaching and Technology (FACT2), is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.