2 AI in Society: Ethical and Legal Issues

AI began to affect society and the employment landscape before the introduction of ChatGPT. Customer service chatbots and self-checkout have become commonplace, and some jobs, particularly in the customer service and manufacturing sectors, have been replaced by AI. Many resources, articles, and subject matter experts assert that AI is a powerful tool, not a passing fad, and that generative AI will continue to impact employment and the economy. Experts suggest that AI will continue to take on specific tasks but will not fully replace workers (Hawley, 2023). Although there are widespread concerns that AI will replace human work and interaction, AI is not perfect: while it can perform some tasks well, it cannot replace the human-to-human experience required by fields such as teaching and nursing.

Although AI may be used to replace task-based, entry-level jobs, new jobs will likely be created to design and manage it. Individuals and businesses must plan for the growth of AI and anticipate how they will be affected (Marr, 2023). From a higher education standpoint, this creates an opportunity to re-evaluate educational pathways in order to prepare students for a world in which AI is prevalent (Abdous, 2023). How can we prepare students to use AI effectively and to develop the skills required to work with and manage AI at a higher level?

Artificial intelligence has been used in education in one way or another for years: search engines, personal assistants on phones, assistive technology that increases accessibility, and other tools all use some form of applied AI. However, the recent widespread availability of generative AI tools across higher education raises ethical concerns for teaching and learning, research, and instructional delivery. Implicit bias and representation (Chopra, 2023), equitable access to AI technologies (Zeide, 2019), AI literacy education (Calhoun, 2023), copyright and fair use issues (De Vynck, 2023), academic integrity, authenticity, and fraud (Weiser & Schweber, 2023; Knight, 2022), environmental concerns (Ludvigsen, 2023; DeGeurin, 2023), and ensuring the development of students’ cognitive abilities (UNESCO, 2022) all represent ethical challenges for higher education as AI integrates further into the curriculum, the classroom, and our work and personal lives.

Data sets play a critical role in machine learning: any AI built on an artificial neural network (including the models behind ChatGPT) must be trained on them, and the characteristics of those sets critically shape the AI’s behavior. It is therefore vital to maintain transparency about training data and to use data sets that promote the ideals we value, such as mitigating unwanted biases that lead to lack of representation or other harms (Gebru et al., 2022). Biases have already been identified in AI systems used in healthcare (Adamson & Avery, 2018; Estreich, 2019) as well as in auto-captioning (Tatman, 2017). The EEOC, DOJ, CFPB, and FTC issued a joint statement warning that the use of AI “has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes” (Chopra et al., 2023). The FTC is investigating whether OpenAI misused people’s private information in training its language models and violated consumer protection laws (Zakrzewski, 2023). Industry leader Sam Altman, CEO of OpenAI (the developer of ChatGPT), recently testified at a Senate hearing on artificial intelligence, expressing both concerns and hopes for AI. He warned of the need for vigilance around the 2024 elections and suggested several categories for our attention, including privacy, child safety, accuracy, cybersecurity, disinformation, and economic impacts (Altman, 2023).

Furthermore, lawsuits have been filed alleging everything from copyright violations to data privacy breaches to fair use disputes (De Vynck, 2023). In addition to these legal challenges, labor concerns factor into this conversation: low-wage, uncontracted workers, largely in the Global South, have been used to train AI away from violent and disturbing content (Perrigo, 2023). Because AI is transforming labor and the economy through automation, higher education must respond to AI’s potential to displace workers in many industries by fostering in our students the unique attributes and capabilities that humans bring to the labor market (Aoun, 2017).

Environmental factors add to the list of ethical concerns, especially in terms of energy and water consumption. For instance, ChatGPT is estimated to use as much electricity as 175,000 people (Ludvigsen, 2023), and training the engine behind ChatGPT (GPT-3) consumed roughly 185,000 gallons (700,000 liters) of water, with each use of ChatGPT consuming roughly a one-liter bottle of water (DeGeurin, 2023). In November 2022, New York became the first state to enact a temporary ban on crypto-mining permits at fossil fuel plants (Ferré-Sadurní & Ashford, 2022). The New York State Legislature is attempting to lead the way in addressing the environmental costs of new technologies, and academia must likewise be cognizant of the environmental costs of generative AI.

License
SUNY FACT2 Guide to Optimizing AI in Higher Education Copyright © by Faculty Advisory Council On Teaching and Technology (FACT2) is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.