Teachers are obliged to use AI in compliance with the following regulations and to inform students in their courses that these regulations are binding for all users of AI.

In all courses, teachers and students:

1. commit to the data-protection-compliant use of AI and AI products

2. are responsible, within the scope of teaching, learning, and research activities, for independently checking any legal restrictions on the data (input) before processing it with AI

3. are particularly obliged to

  • not use copyright-protected data in prompts

  • not use prompts or parts of prompts owned by third parties

  • not insert into AI applications sensitive and/or personal data, or data that violates privacy protection rights; this also includes company-related data such as emails from companies or individuals, internal company drafts, etc. (see the sketch after this list)

  • not use AI applications for discriminatory, offensive, or illegal purposes

4. when using AI applications for teaching, learning, and research activities, use only data that does not infringe upon the rights of others
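
To make the obligation in point 3 more concrete, the following is a minimal, hypothetical sketch of a pre-submission check: a short script that scans prompt text for obvious personal data (email addresses, phone numbers) before the prompt is handed to any AI tool. The patterns and the `screen_prompt` helper are illustrative assumptions, not part of any institutional regulation, and such pattern matching is no substitute for a careful legal and ethical review.

```python
import re

# Illustrative patterns for obvious personal data; real screening would
# need far more robust detection (names, addresses, ID numbers, ...).
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone number": re.compile(r"\+?\d[\d\s/-]{7,}\d"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return warnings for anything that looks like personal data."""
    warnings = []
    for label, pattern in PII_PATTERNS.items():
        for match in pattern.finditer(prompt):
            warnings.append(f"possible {label}: {match.group(0)!r}")
    return warnings

if __name__ == "__main__":
    prompt = "Summarize this email from jane.doe@example.com (tel. +43 660 1234567)."
    for warning in screen_prompt(prompt):
        print("WARNING:", warning)
```

A prompt that produces warnings should be anonymized or rewritten before it is submitted to an AI application.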

5.2 Putting responsible and ethical use into practice

Many universities have created institutional statements on the ethical and responsible use of AI. Since developing compliance skills takes time and learning is an ongoing process, teachers are advised to explain such regulations in every course, even at the risk of being repetitive.

Regardless of institutional policies, the following topics should be discussed with students where appropriate (cf. Shalevska 2024, European Commission 2022):

  • Transparency and oversight: As many generative AI tools are developed and owned by corporations, it may be important to consider how the tools are trained and what safeguards are in place to protect users from inaccurate information or harmful interactions.

  • Privacy, data governance, and safety: These aspects concern two levels of teaching action.

    • The first addresses the need to evaluate the policies of AI providers and the features of their tools: How is user data or copyrighted material used, stored, or shared? Who has access to user data? Which personal data (e.g. submitted when signing up as a user) is used as training data or for other purposes?

    • The second addresses AI users employing data: Which data am I allowed to use, e.g. as a part of a prompt? Which legal provisions and ethical considerations must be observed?

  • Diversity, non-discrimination, and fairness: AI systems may show algorithmic biases that perpetuate or exacerbate social inequalities. Do AI tools demonstrate unfair or discriminatory bias in their algorithms or outputs? Are diverse groups characterized and represented in a non-discriminatory way in AI outputs? Are AI tools universally accessible?

  • Societal and political impact: What safeguards are in place to prevent AI from being used to spread inaccurate or discriminatory content? What is the impact on specific social groups if we directly follow the suggestions or decisions generated by AI? How does AI affect the work, autonomy, and societal participation of different social groups? Is AI used to violate political rights by distorting information or slanting it in favor of a particular view?

  • Environmental impact: Training AI tools on large data sets consumes ever more energy. What is the environmental impact of this energy use? (A rough estimate is sketched below.)
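
To give students a feel for the scale involved, a back-of-envelope calculation can help. The sketch below multiplies an assumed number of accelerators by an assumed power draw and training time; every figure is a hypothetical placeholder, not a measurement of any real system.

```python
# Rough, purely illustrative estimate of the energy used by one training
# run. Every input below is an assumed value, not a measured figure.
num_accelerators = 1_000   # GPUs/TPUs running in parallel (assumption)
power_per_unit_kw = 0.4    # average draw per accelerator in kW (assumption)
training_days = 30         # wall-clock training time (assumption)

energy_mwh = num_accelerators * power_per_unit_kw * training_days * 24 / 1_000
print(f"Estimated training energy: {energy_mwh:,.0f} MWh")
# -> Estimated training energy: 288 MWh, excluding cooling and other
#    data-center overhead, which adds a further share on top.
```

Even under these modest assumptions, a single training run consumes as much electricity as dozens of households use in a year, which makes the environmental question above concrete for students.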