Seton Hall University

Panel Discussion on AI, Text Modeling, and ChatGPT: Challenges and Strategies  

The emergence of ChatGPT, the most recent (and seemingly most effective) AI text modeling program, has raised concerns about the impact of these tools on higher education, especially the potential for academic dishonesty. At the same time, some educators are considering the ways such tools may in fact become part of the instructional toolkit, used to improve learning, research, and writing. Articles addressing the various concerns and strategies have appeared in periodicals as wide-ranging as Inside Higher Education and the New York Times, with some commentators advocating a ban on the use of text modeling programs while others are already strategizing ways to use ChatGPT in the classroom. A Princeton undergraduate has even reportedly built a program to detect AI-produced writing.

On January 20, from 10 to 11 a.m., a group of Seton Hall faculty members will participate in a panel discussion on the challenges and opportunities of AI, ChatGPT, and text modeling more generally. The speakers include Kelly Shea, Ph.D., Associate Professor of English and Director of First-Year Writing; James R. Daniel, Ph.D., Assistant Professor of English and Director of Developmental Writing and Assessment; Jo Renee Formicola, Ph.D., Professor of Political Science; Elizabeth McCrea, Ph.D., Professor of Management; and Juan Rios, DSW, LCSW, Professor of Sociology. They will address the pros and cons of programs such as ChatGPT, as well as faculty concerns about the challenges to academic honesty.
 
As Daniel observes, "ChatGPT, and other AI programs, is a critical concern for college writing courses because it has the ability to generate human-like text, which could potentially be used to plagiarize written assignments. This technology can make detecting plagiarism more difficult as the text generated by AI programs may be more sophisticated and nuanced than typical plagiarized content. Additionally, the use of ChatGPT in writing assignments may undermine the educational goals of college writing courses, which are typically focused on developing students' critical thinking, research, and writing skills. Accordingly, it's important for instructors to be aware of the potential for ChatGPT to be used in committing academic dishonesty."

According to Formicola, "I think the members of this panel should also look at 'text modeling' as a potential threat to diverse research, creative thinking, and the development of inventive solutions to the increasing and complex aspects of life. In many ways text modeling is a step to 'singularity.' That is, it advances and attempts to establish specific agendas and processes to solve social, scientific, political, or moral questions that can be advanced and/or manipulated. In essence, the growing pervasive presence and influence that text modelers are gaining can serve to control the choices and focus of what students, and others, choose to study at any given time. Therefore, it is the role of the teacher and the responsibility of the student to maintain their intellectual integrity by interacting with each other on specific projects that reflect their own interests and concerns in a way that provides both with the satisfaction of academic accomplishment and self-confidence."

This is likely to be the first of many conversations about this issue as instructors learn more about text modeling tools, their capabilities, and their potential impact on teaching and learning.

Click here to access the calendar event and Teams link.
