Students can use AI as a study tool, tutor, brainstorming partner, and much more. Teachers must consider how students should interact with AI tools and AI-driven resources. To this end, teachers need to be aware of the many ways in which AI can be used and become familiar with the idea that AI can serve as a valuable co-teacher. Here are a few examples of how teachers can adopt AI as a co-teacher.
For instance, teachers can use generative AI
to create tailor-made learning materials and feedback, e.g. to support personalized learning paths, to help students with little prior knowledge close knowledge gaps, or to motivate particularly talented students with extra tasks,
to create challenge-based, game-based, and real-world learning environments to motivate students to engage more with the contents,
or to provide real-time feedback and recommendations based on personal learning progress.
Embracing generative AI as a learning partner for students includes designing and training chatbots and integrating them into courses as ‘learning buddies’, tutors, feedback providers, and more.
At a more advanced level of AI proficiency, analytical and predictive AI tools can be deployed to analyse student engagement patterns and to predict at-risk students or student success.
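To make the idea of predicting at-risk students concrete, the following minimal sketch flags students from engagement data using a simple rule-based score. All field names and thresholds are illustrative assumptions; real predictive systems are trained on historical data, validated, and subject to the regulatory constraints discussed later in this text.

```python
# Rule-based sketch of flagging at-risk students from engagement data.
# Field names and thresholds are invented for illustration only.

def risk_score(record):
    """Combine a few engagement signals into a rough 0..1 risk score."""
    score = 0.0
    if record["logins_per_week"] < 2:
        score += 0.4
    if record["assignments_submitted"] / record["assignments_due"] < 0.5:
        score += 0.4
    if record["forum_posts"] == 0:
        score += 0.2
    return score

def flag_at_risk(records, threshold=0.5):
    """Return the students whose risk score reaches the threshold."""
    return [r["student"] for r in records if risk_score(r) >= threshold]

students = [
    {"student": "A", "logins_per_week": 5, "assignments_submitted": 4,
     "assignments_due": 4, "forum_posts": 3},
    {"student": "B", "logins_per_week": 1, "assignments_submitted": 1,
     "assignments_due": 4, "forum_posts": 0},
]
print(flag_at_risk(students))  # → ['B']
```

A deployed tool would replace the hand-set thresholds with a statistically trained and evaluated model; the sketch only shows the kind of input data and output such tools work with.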
AI can help tailor learning paths to individual student needs, enabling personalizing feedback, learning processes, and learning materials. This allows students to grasp complex topics at their own pace, catering to different learning speeds and styles. While teachers are often accustomed to designing interaction with students for a uniform group of students, AI provides many opportunities to adapt teaching and learning processes to diverse needs and interests.
Generative AI can be used for multiple purposes and in many different ways in courses and, thus, may take very different roles for students. As a teacher, you should be very clear about both.
AI can take very different roles (see Mollick & Mollick 2023):
Mentor: Provides frequent, immediate, and adaptive quality feedback to support student learning.
Tutor: Uses questioning techniques, collaborative problem-solving, and personalized instruction to guide students.
Coach: Helps students understand learning processes, identify gaps in their knowledge, and develop effective study techniques.
Teammate: Contributes specific knowledge or skills to a team, offers social support, and presents diverse perspectives.
Simulator: Facilitates skill and knowledge transfer by creating contexts of application, AI-based agents, and settings with distributed roles.
Tool: Assists in carrying out developmental, analytical, evaluative, design, creative tasks.
There are numerous didactical scenarios for integrating AI into a course. Below is a – highly selective – list of teaching concepts with some practical indications that may be useful when preparing a course.
| Pedagogical Approach | Implementation |
|---|---|
| Problem-based Learning (PBL) | PBL is a practice-oriented teaching and learning method in which students work autonomously on authentic problems in small groups in a self-directed manner, while teachers supervise them as tutors. AI can assist in the development of problem-solving skills through: - Search for information about problem structure and manifestations. - Discussing and evaluating problem-solving techniques. - Evaluating pros and cons of problem-solving approaches. - Providing expert knowledge for specific tasks and challenges. |
| Project-oriented Learning (POL) | POL fosters collaboration to develop and implement solutions to real-world problems. AI may: - Act as a virtual collaboration partner. - Suggest project team formations based on expertise. - Act as a coach to improve collaboration quality. - Examine the progress and quality of project work. |
| Constructive Feedback & Coaching | Constructive feedback reinforces positive learning achievements and reformulates errors as areas for improvement. AI can: - Provide personalized feedback. - Assist students in self-assessment. - Reinforce and motivate students. |
| Discourse-oriented Learning | Acquisition of knowledge can be achieved through formal or informal exchange of perspectives. AI can: - Use chatbots to represent different perspectives in discussions. - Act as a Socratic dialogue partner. - Let students test their opinions against a virtual sparring partner. |
| Linking Theory and Practical Application | Integrating theory with practice enhances understanding. AI can: - Visualize linkages between theoretical concepts and practical challenges. - Explore AI’s applicability in real-world settings. |
| Self-directed Learning | AI helps learners by: - Identifying learning needs. - Organizing individual learning paths. - Searching and evaluating learning resources. - Documenting and reflecting on progress. |
| Game-based Learning (GBL) | AI can take roles in game-based learning as: - Game co-developer. - Game character. - Teammate. - Consultant for game strategy. - Referee. |
| Critical Thinking | AI requires critical evaluation due to potential inaccuracies. AI can: - Encourage students to verify factual accuracy. - Support reflections on AI-generated outputs. - Foster awareness of AI’s influence on perceptions and work processes. |
Table 1: Didactic scenarios of teaching with AI
Whichever teaching scenario you choose, using AI in the teaching setting often requires more time than expected: teachers must allocate time for explaining the purpose and roles of AI, the rules of usage, and the assessment criteria; for providing instructions on smart prompting and prompt evaluation; and for fostering critical reflection and evaluation of AI outputs. As we know from previous experience, the time required to clarify AI-related questions with students is often underestimated.
These activities cannot be done casually, and more time instructing students on using AI and evaluating outputs should be scheduled when designing a course. As the number of teaching units usually remains unchanged and working with AI is an add-on, teachers may need to adjust the course timeline to ensure sufficient time for AI-related activities.
AI may affect learner behaviour in many ways. When assessing the potential implications, consider how AI can positively impact student learning and how to mitigate possible drawbacks. To better anticipate changes, it often helps to complete assignments that require AI use yourself, or to discuss the task with colleagues to identify potential challenges and improvements.
If students are encouraged or required to use AI for assignments and examinations, teachers should establish criteria to assess the results when part of them is AI-generated. Two options are particularly relevant: First, define assessment criteria for correct, responsible, and successful use of AI itself; and second, define assessment criteria for independent parts of students’ work, e.g. critical reflection of the quality of outputs and using outputs for other tasks.
AI can do parts of the work we have done ourselves before. Generative AI in particular often produces false or misleading output. Therefore, students need to learn how to evaluate this output critically. This can be done with or without the help of AI.
Students may be assigned
to evaluate outputs regarding factual accuracy, consistency, and academic integrity in the classroom without the help of digital media
to reason and reflect on the relevance and impact of AI-generated outputs for particular groups in terms of diversity, inclusion, and sustainability
to reflect on the risks that arise when AI-generated outputs are adopted without critical assessment
to reflect on the ways the use of AI is shaping our results, how we perceive reality, and the way we work and collaborate with others
To develop students’ ability of critical thinking, some teaching methods have proven to be particularly helpful and effective. One common element of these methods is to test the output against other sources of information and knowledge.
| Method | Approach | Implementation |
|---|---|---|
| Observe-Question-Compare (OQC) | Analyzing an AI output by examining its details and comparing its information with an authoritative source | Observe: Identify and examine the features of the AI output, even if they feel self-evident. For instance, ask: “How many different aspects are addressed in the output?” Question: Critically evaluate every aspect of the output by asking questions such as: Is the AI-generated information true and accurate? Is it relevant? Is it fair? Compare: Let students compare AI output with other credible sources of information, e.g., textbooks, journal articles, descriptions of good practices, professional association information, or market research data. |
| Review-Evaluate-Reprompt (RER) | Defining the quality standards for a specific task and evaluating AI outputs based on those criteria. | Review Criteria: Define (also in collaboration with your students) what constitutes a quality, desirable, or “good” output. This could include specificity, context, organization, clarity, usability, format, limitations, and other factors. Evaluate: What are the gaps between the defined criteria and the AI-generated output? What has the AI added, deleted, or modified? What advice does it offer for improvement, based on the feedback it provides? What might AI not be able to improve, given its inherent limitations? Refine the Prompt: Use the identified criteria to craft more detailed or specific instructions for the AI. Regenerate the output with a refined prompt, or decide to stop prompting and make the necessary changes manually. |
| Ideas-Connections-Extensions (ICE) | Generating ideas, making connections between them, and extending them into new applications. | Ideas: Generate initial ideas on the questions to be answered or problems to be solved using AI. For example, what potential solutions could help reduce plastic waste in the community? Connections: Identify and explore connections between these ideas and the insights gained from AI. For example, how can these solutions be integrated with existing recycling programs or community initiatives? Extensions: Extend these ideas into broader applications using AI for further research. For example, how can these solutions be scaled up or adapted for other environmental issues? |
Table 2: Three Approaches to Critical Thinking (from Paulson 2024, with minor adaptations)
Imagine a co-teacher who is available to students for feedback 24/7. Teachers may use AI as a co-teacher that provides real-time feedback on assignments, quizzes, and essays, allowing students to identify areas for improvement immediately, without waiting for personalized feedback. This instant feedback loop promotes continuous improvement and engagement.
AI can be used to design assignments, develop examination methods, and refine assessment criteria.
AI could also assist teachers in grading multiple-choice exams, providing rubric-based feedback on essays, and assessing assignments and other tasks. By streamlining these processes, AI can save instructors significant time, enabling them to focus more on improving assessment quality and giving effective feedback.
Furthermore, AI-driven adaptive assessments can adjust the difficulty level of tasks and assessments based on students’ responses. This approach offers a personalized evaluation experience and provides a more accurate picture of students’ knowledge, skills, and competencies.
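The adaptive principle described above can be sketched in a few lines: the difficulty of the next task rises after a correct answer and falls after an incorrect one. This staircase rule is a deliberately simple illustration; real adaptive tests (e.g. those based on item response theory) estimate student ability statistically.

```python
# Staircase sketch of adaptive assessment: difficulty in 1..5 moves up
# after a correct answer and down after an incorrect one.

def next_difficulty(current, correct, lowest=1, highest=5):
    """Adjust the difficulty level of the next item, clamped to the scale."""
    step = 1 if correct else -1
    return min(highest, max(lowest, current + step))

# Simulate one student's run through six items, starting at medium difficulty.
difficulty = 3
answers = [True, True, False, True, False, False]
trajectory = []
for correct in answers:
    difficulty = next_difficulty(difficulty, correct)
    trajectory.append(difficulty)
print(trajectory)  # → [4, 5, 4, 5, 4, 3]
```

The trajectory itself is informative: a student who oscillates around level 4–5 demonstrates a different competence profile than one who settles at level 2, which is what gives adaptive assessment its diagnostic value.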
However, despite its advantages, there are serious limitations to using AI for assessment and grading.
In the European Union, the AI Act plays a critical role in regulating the use of AI in teaching, particularly for testing and assessment purposes.
The AI Act explicitly identifies high-risk systems in the education sector (European Commission, 2024). These include AI applications related to admissions, performance assessment, educational standards evaluation, and monitoring during examinations. Performance assessment also encompasses systems designed to control the learning process.
AI applications explicitly intended for assessing examination papers are deemed particularly critical. Even the use of general-purpose AI for examination assessments is subject to stringent risk management requirements, including transparency, thorough documentation, and human oversight.
As a result, using AI for high-stakes decisions, such as course admissions or automated examination assessments, is discouraged for practical and ethical reasons. Universities must carefully evaluate whether deploying high-risk AI applications aligns with their responsibilities and regulatory obligations.
If AI is included as a means of learning and of working on assignments and assessments, existing assessments are highly likely to need revision.
When revising your assessments, develop a clear system of criteria that ensures careful
distinction between different skill levels of AI usage according to your learning outcomes,
identification of different quality levels of practical work with AI,
identification of different compliance levels with standards and criteria of academic integrity and good scientific practice.
When students must use AI for examinations (written or oral), think about
how your examination questions and examination tasks must be designed to capture independent knowledge, skills, and competencies,
how students must document the AI-generated and independent parts of assessments, and how to explain this requirement to them,
and how AI could assist you in correction and grading in a legally compliant way.
Teachers realize that AI enables and requires assessments focused on skill mastery rather than memorization, tracking students’ competencies and skills in real-time. This shift encourages students to demonstrate understanding through the application and elaboration of personal approaches and results, rather than relying solely on traditional testing.
Hence, teachers need to develop further assessment methods that meet three main objectives:
Second, design assignments and examinations so that the use of AI is documented in a way that makes it possible to distinguish between AI-generated and independent parts of the performance.
Third, develop examination methods that require students to practically demonstrate the knowledge and skills they have acquired, as well as to explain the solutions they have developed.
Teachers are also increasing the oral or performative components of examinations in order to make individual learning progress and performance more visible. Here, it is very helpful to ask students to explain in more detail how they arrived at their results and to justify the choices they made.
Teachers should bear in mind, however, that increased oral examinations also lead to more time being spent on examinations.
Ultimately, it is clear that teachers need to refine three key elements of assessments when using AI in courses:
examination methods
assessment criteria
time required for implementing refined examinations
With generative AI tools capable of generating text, images, and code, academic institutions need to re-emphasize the relevance of original work. This shift necessitates the development of new academic integrity policies focused on requiring students to disclose AI use, to quote text generated by AI properly, and to explain their own and AI-generated components of their work.
Teachers need to decide on the appropriate documentation to implement in their courses to clearly distinguish between student work and AI-generated content, thus ensuring fair assessments.
Provide students with clear instructions on how to document and cite their work with AI. This is important for students to learn how to use AI in compliance with standards and rules of academic integrity. It is also important to be able to distinguish between AI-generated outputs and personal achievement in student work and assessments.
Many universities have developed specific templates for acknowledgment statements, requiring students to document their use of AI by explaining, as minimal requirements,
which AI tools they used (name and version),
the purpose for using one or more AI tools (e.g., generating ideas for study and assessments, paraphrasing and summarising sources, translating or optimizing text, developing presentations, getting feedback on ideas and drafts, etc.),
the prompts, follow-up prompts, and other inputs provided to the tool,
and how they adopted the AI output in their assignments, examinations, or theses.
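The minimal acknowledgment fields listed above can be captured in a simple structure. The following sketch shows one way to collect them and render a declaration statement; the field names and the tool name are assumptions to be adapted to your institution's own template.

```python
# Illustrative record for the minimal AI-use acknowledgment fields:
# tool and version, purpose, prompts given, and how output was adopted.
# Field names and the example tool name are invented, not a standard.

from dataclasses import dataclass, field

@dataclass
class AIUseDeclaration:
    tool: str
    version: str
    purpose: str
    prompts: list = field(default_factory=list)
    adoption: str = ""

    def as_statement(self) -> str:
        """Render the declaration as a plain-text acknowledgment statement."""
        return "\n".join([
            f"AI tool used: {self.tool} ({self.version})",
            f"Purpose: {self.purpose}",
            "Prompts: " + "; ".join(self.prompts),
            f"How output was adopted: {self.adoption}",
        ])

decl = AIUseDeclaration(
    tool="ExampleChat", version="4.0",
    purpose="paraphrasing and summarising sources",
    prompts=["Summarise the attached article in 200 words"],
    adoption="Summary was checked against the source and revised manually",
)
print(decl.as_statement())
```

Whether such a record is collected via a form, a template document, or a learning platform matters less than that the same minimal fields are documented consistently across the course.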
Many universities require students to provide declaration statements acknowledging any permitted use of AI tools and technologies.
Teachers are strongly advised to communicate these requirements to students at the beginning of a course and to revisit their relevance during the course.
Reproducibility, understood as the ‘ability of independent investigators to draw the same conclusions from an experiment by following the documentation shared by the original investigators’ (Gundersen 2021) is a key criterion of scientific work in various sciences such as mathematical, physical, and engineering sciences. In other sciences such as the social sciences and the humanities, the natural science concept of reproducibility is less practical due to the properties of data, research methodologies, and research paradigms (Moody et al. 2022).
Since AI-generated outputs often lack explainability, it is important to raise students’ awareness of the relevance of making results, and the way they were developed, comprehensible for others. This also supports students’ ability to double-check which parts they have produced independently and which are AI-generated. Overall, reflecting on the reproducibility or comprehensibility of results raises awareness that students have to take responsibility for the entire work and its results.
Classroom reproducibility refers to the ‘ability of an instructor to easily regenerate the results and conclusions of a student report from the submitted materials’ (Bean 2023). This implies that students submit appropriately organized materials, enabling the teacher to evaluate both the final results and the process by which they were developed (ibid.).
One approach gaining importance when using AI in the classroom is to require students not only to submit the final work but also to document the entire work process, including the use of AI in relevant work phases and its impact on the process.
Designing courses is a great opportunity to think about how teaching with AI should work out. The most important question is: How well does AI support students in achieving the learning objectives? Several universities have already included relevant questions in their course evaluations.
However, standard course evaluation systems may not capture all relevant topics, especially when using AI for the first time in a course and some trial and error can be assumed. In this situation, teachers may develop and apply relevant methods and criteria on their own.
Regarding the main evaluation topics, ask at least how helpful students find instructions and explanations on
the purpose and the method of AI use,
the required documentation;
and ask for information on
their tool-related experiences,
the perceived impact of AI on their learning activities and assessments,
and on the students’ ability to distinguish between self-generated and AI-generated results.
It is often useful to combine standardized and non-standardized methods and to ask relevant questions at the end of a teaching unit where AI was used, while reflection on specific topics may require longer user experience and a greater variety of experiences.
Some AI tools may be more suitable for your course than others due to technical reasons.
In addition, teachers have to select appropriate AI tools for educational purposes by applying legal, didactic, social and ethical criteria, such as:
Compliance with data protection regulations
Alignment of the AI tool with learning objectives
Equal accessibility for all students: Every student must be able to use the permitted or prescribed AI application under the same conditions.
Documentability of AI outputs
Potential of the AI tool to foster creativity in teaching and learning
Often it is advantageous to specify which AI tools students must use, rather than allowing them to decide for themselves. Teachers should decide:
Which AI applications are permitted or prohibited for a certain task or assignment
Whether specific AI applications are mandatory
How students must document their use of AI, including purpose, form, and extent.