3 Pros and Cons in Higher Education

There are many pros and cons related to generative AI, and all of them must be weighed as we move forward with policy development. Given the magnitude and variable nature of AI, there is unlikely to be a one-size-fits-all solution to the application and adaptation of generative AI in higher education instruction (Piscia et al., 2023). However, there are still many important points to consider concerning generative AI.

It seems impossible, and inadvisable, not to consider the interconnection of ethics and equity across domains of higher education (Currie, 2023; Hutson et al., 2022; Nguyen et al., 2023). One cannot overstate the significance of privacy, security, safety, surveillance, or accountability. The integration of AI into medicine and healthcare, financial systems, security systems, and smart city technologies represents real-world situations in which machine malfunction or bad actors can result in loss of life, loss of access to essential services, or loss of resources (Ayling & Chapman, 2022; Currie, 2023). Many of the challenges surrounding higher education, however, involve barriers to equitable access to higher education and to educational services and resources. Therefore, any way in which AI may undermine equity should be treated as a significant ethical concern. It is worth noting, however, that AI also has the potential to improve or enhance accessibility and inclusivity (Çerasi & Balcioğlu, 2023). AI likewise has the potential to enhance teaching and learning (du Boulay, 2022; Perkins, 2023; Sabzalieva & Valentini, 2023; Sullivan et al., 2023) in ways that can improve or increase equity. This suggests that higher education may have an obligation to integrate AI into its operations as much from an equity and ethics perspective as from its experiential learning, workforce development, and industry obligation to adequately prepare students for real-world work.

In considering the ethics of AI in higher education, it may be most useful to approach the situation through different stakeholder groups, namely students, instructors, and the institutions themselves (du Boulay, 2022; Holmes et al., 2023; Irfan et al., 2023; Miron et al., 2023; Ungerer & Slade, 2022), as well as through external groups such as industry collaborators and the communities in which those institutions operate. Within the institution, as noted above, AI has the potential to affect non-academic elements that cannot be ignored. Furthermore, the impact on educational elements can vary across programs, disciplines, and modalities, such as in-person instruction versus distance-based education (Holmes et al., 2023). Some researchers have expressed concern about how AI can compromise the autonomy of both students and instructors (du Boulay, 2022).

What does this all mean for educators? If we are to believe the experts as well as our own recent experiences, many issues need to be addressed. The current version of artificial intelligence seems to be just the beginning. The emergence of AI has been described as the dawn of a new era, a virtual big bang if you will. That is the world for which our students need to be prepared.

It is important to acknowledge and consider the positive aspects of the learner’s experience regarding the use of generative AI in higher education. In many cases, generative AI may improve the experiences of our students both in the classroom and in their assigned work by introducing new methods of teaching and assessment (Piscia et al., 2023). As they experience these tools in the classroom, students learn and strengthen skills for their future endeavors and for new realities both in the classroom and in the workforce.

The inclusion of current and emerging technology is as imperative in education as it is a driver of progress and change in society. Fluency with generative AI tools will increase digital literacy and technology application for learners (Piscia et al., 2023). Additionally, students may be drawn to the inclusion of these tools in instruction, increasing the perceived relevance of classwork and participation (Piscia et al., 2023).

The application of generative AI by instructors can also strengthen instruction, personalize learning opportunities, increase the adaptability of instruction and learning, and strengthen accessibility for all learners (Piscia et al., 2023; Shonubi, 2023). Together, these opportunities increase inclusion in the classroom for all learners. AI tools can also be applied to the creation and/or modification of instructional objectives, pedagogy, assignments, and assessments.

Further, generative AI can be used to automate administrative tasks to improve workflows, decrease human transcription errors, and decrease processing times in many areas. Additionally, the application of generative AI in this way has the potential to decrease administrative costs and streamline administrative tasks (Parasuraman & Manzey, 2010; Piscia et al., 2023; Shonubi, 2023). Generative AI has significant potential across a variety of higher education settings, instructional and learning environments in particular.

Additionally, institutions could be preparing students now for professions that may be reduced or eliminated by the presence of generative AI in the future workforce. The human aspect of interacting with generative AI must not only be considered but also studied as we move forward with this new tool at our disposal (Piscia et al., 2023; Shonubi, 2023).

Equally, it is imperative to consider the negative aspects of generative AI. The current lack of regulation and the inconsistent accuracy of output are shortcomings that cannot be ignored. Generative AI is an evolving tool that needs to be carefully evaluated prior to its use.

In terms of academic integrity, educators will need to adapt teaching practices to ensure that AI supports the learning process without reducing students’ cognitive abilities, while preserving their access to prerequisite skills and the social aspects of teacher-student and peer learning relationships (UNESCO, 2022). In addition to concerns about cheating and fraud, other ethical concerns for academic integrity include the reliability of AI in producing trustworthy and accurate results. For instance, ChatGPT has already been documented to fabricate information and to adamantly defend these fabrications (Knight, 2022). These cases are often referred to as hallucinations, because the chatbot presents its responses as though they are correct. Because ChatGPT produces approximately 4.5 billion words a day (Vincent, 2021), a steady flow of questionable information has the potential to degrade the quality of information available on the web. Celeste Kidd and Abeba Birhane (2023) argue that repeated exposure to AI (through chatbots and search engines in daily life, in addition to deliberate engagement with tools such as ChatGPT) conditions people to believe in the efficacy and “honesty” of AI. They contend that AI’s habit of issuing declarative statements without expression, nuance, or caveat continues the process of convincing people to “trust” the AI. Use of unmonitored AI tools may result in a decline of critical thinking and may negatively impact content-area learning, retention, writing development, creativity, and application (Miller, 2023).

Artificial intelligence is rich in potential but cannot be counted on to be accurate or representative. Both shortcomings are of concern for our students. We do not want students to believe and/or use misinformation, and we do not want the information presented to them to be based on misleading data. There is also a potential mental health concern that arises when dealing with chatbots. Chatbots have come closer to sounding human, which can have a psychological impact on students as they build relationships with bots that may not respond humanely or with the student’s best interest in mind (D’Agostino, 2023). Despite these challenges for academic integrity and student learning, students will need to use these tools, and educators have an obligation to instruct them in AI literacy, ethics, and awareness. Part of higher education’s obligation in this regard is to include disciplines outside the STEM fields in researching and contributing to our knowledge of AI development and capabilities (UNESCO, 2022).

Interacting with generative AI tools may increase anxiety, addiction, social isolation, depression, and paranoia (Piscia et al., 2023). Studies of the impact of interacting with AI systems are in progress and are shaping our understanding of the tool’s potential effects on individuals and on society, but a deeper, more complete understanding will only develop in the coming years.

While it is critical to consider the ethical impact of AI like ChatGPT on academic integrity and academic dishonesty, other aspects of higher education with ethical components will also be affected. AI has been integrated into processes in human resources, financial aid, the student experience, diversity, equity, inclusion, and belonging (DEIB), and institutional effectiveness. In many cases, AI integration in these domains is meant to enhance decision-making and assist in data analysis (du Boulay, 2022; Holmes et al., 2023; Naik et al., 2022; Nguyen et al., 2023). One must also consider the ethical impacts of integrating AI into educational platforms, such as expert systems, intelligent tutors/agents, and personalized learning systems/environments (PLS/E), from teaching and learning perspectives (du Boulay, 2022; Hutson et al., 2022; Ungerer & Slade, 2022).

These applications necessitate that we consider how a variety of unsurfaced biases (language bias, culture bias, implicit bias) can potentially affect the AI outputs we may obtain and utilize within these various departments (Ayling & Chapman, 2021; Hutson et al., 2022; Nguyen et al., 2023; Ungerer & Slade, 2022). Consequently, some of the previously mentioned concerns around data privacy and security, consent, accessibility, and labor and economy will be reflected in the microcosm of higher education (Irfan et al., 2023; Nguyen et al., 2023; Ungerer & Slade, 2022).

Information security is a crucial factor to consider when adopting generative AI tools (Piscia et al., 2023). It is important to evaluate the information required to use generative AI tools, the confidentiality of completed queries, and the potential for data breaches. Additionally, data sharing between the tool and private entities must also be evaluated.

Other potential cons of generative AI concern the ethics of the tool itself and the role it might play in education settings. It is important to determine whether the application of generative AI is considered plagiarism or cheating, and what the requirements will be for modifying outputs and citing the tool. Institutions will have to develop strong course policies to mitigate the potential for misuse (Piscia et al., 2023; Shonubi, 2023).

Unfortunately, there is no single correct answer as we consider the future of generative AI in higher education. However, it is important to weigh both the positive and negative aspects of the tool as we work together to develop the policies that will guide its application for students and professionals for years to come.

License


SUNY FACT2 Guide to Optimizing AI in Higher Education Copyright © by Faculty Advisory Council On Teaching and Technology (FACT2) is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.