
Ethical Considerations and Limitations
From time-saving tools for quick and accurate grading of assignments, quizzes, and even essays to AI-powered, 24/7 student support and tutoring systems, AI offers the capability to create a personalized learning experience for each student through tailored educational content, learning pace, and preferences. However, with generative AI’s exciting creative possibilities come concerns and ethical considerations. Before making any determinations about whether and how to integrate AI into teaching and learning, it is important to consider its implications and limitations.
- Academic Integrity and Plagiarism: The ability of generative AI to produce written content raises concerns about academic integrity and the originality of submitted work. Some disciplines are better suited to using AI/Gen AI than others. Take steps to communicate your expectations around the use of AI by:
  - Adding the self-paced Academic Integrity Module to your syllabus to educate students on the University’s Academic Integrity Policy and plagiarism.
  - Discussing the ethical considerations and limitations of using AI/Gen AI. Have students sign an honor code or statement of understanding.
  - Utilizing plagiarism and AI-generated content detection tools to help identify instances of academic dishonesty.
- Inherent Bias and Gender Diversity: Generative AI models can inherit biases present in their training data, potentially perpetuating inequalities and discrimination in educational content and decision-making. It is important to note that ChatGPT, for example, is not governed by ethical principles and cannot distinguish between right and wrong or true and false. The tool only collects information from the databases and texts it processes on the internet, so it also learns any cognitive biases found in that information. It is therefore essential to critically analyze the results it provides and compare them with other sources of information.
- False Information and Misinformation: A well-known phenomenon in large language models called ‘hallucination’ occurs when a system provides an answer that is factually incorrect, irrelevant, or nonsensical due to limitations in its training data and architecture. Users of GenAI should understand the potential for inaccuracies in AI-generated content and learn how to research and verify factual information.
- Data Privacy and Security: It is the responsibility of Seton Hall faculty, staff, and students to safeguard confidential information. Users of GenAI tools need to be aware that sharing sensitive or confidential information with third-party AI tools may expose that data to potential security risks and unauthorized access. Members of the Seton Hall community must adhere to the University’s data security policy when managing, using, accessing, storing, or transmitting University data.
- Accessibility and Equity: Institutions should ensure that AI systems comply with accessibility standards and do not inadvertently disadvantage certain groups of students.

