Appendix C: Glossary
The glossary below is a living document meant to inform a nuanced, evolving understanding as AI usage grows and new tools are developed. It should reflect and support the interests and usage within the SUNY System. All members of the community are invited to suggest new entries.
For suggestions, changes, or additions, please contact Lynn Aaron at lynn.aaron@sunyrockland.edu or Abby Adams at aadams5@albany.edu.
Artificial Intelligence (AI)
Artificial intelligence leverages computers and machines to attempt to mimic the problem-solving and decision-making capabilities of the human mind.
Tasks may require human abilities such as perception, reasoning, problem solving, and understanding natural language. Large collections of data, as well as new experiences, are used by algorithms to find patterns and use them to take actions or make predictions/provide insights (IBM, 2023).
AI Forensics
This refers to the use of forensic techniques to identify if text was AI-generated and then the source of the AI product. Once the source is known, it can be checked for accuracy and credits. This can ultimately reveal the bias in the training data set (Martineau, 2023).
Algorithm
A set of step-by-step directions for solving a problem or accomplishing a specific task (Berkman Klein Center, 2019).
As an example, here’s a simple computer algorithm for finding the highest number in a list:
- Start with the first number in the list and remember it as the current most significant number.
- Compare the current largest number with the next number in the list.
- If the next number is larger than the current most significant number, update the current largest number to be the next number.
- Repeat steps 2 and 3 for all the numbers in the list.
- When you reach the end of the list, the current largest number will be the largest number in the list.
Deep Learning
A subset of machine learning using a neural network with at least three layers
“Deep learning distinguishes itself from classical machine learning by the type of data that it works with and the methods in which it learns.
Machine learning algorithms leverage structured, labeled data to make predictions—meaning that specific features are defined from the input data for the model and organized into tables… it generally goes through some pre-processing to organize it into a structured format.
Deep learning eliminates some of data pre-processing that is typically involved with machine learning. These algorithms can ingest and process unstructured data, like text and images, and it automates feature extraction, removing some of the dependency on human experts. For example, let’s say that we had a set of photos of different pets, and we wanted to categorize them by “cat”, “dog”, “hamster”, et cetera. Deep learning algorithms can determine which features (e.g. ears) are most important to distinguish each animal from another. In machine learning, this hierarchy of features is established manually by a human expert” (IBM, n.d.-b).
Generative AI
Generative AI is a type of AI system capable of generating text, images, or other media in response to prompts. It uses its collection of data and experiences to generate new content. Generative AI is different from General AI (see below) (Benefits and Limitations, 2023).
General AI / Artificial General Intelligence (AGI)
General AI refers to the development of AI systems that possess human-level intelligence across a broad range of tasks and domains. AGI aims to create machines that can understand, learn, and perform complex cognitive functions that mimic human intelligence. This is in comparison to the specific, task-focused output of Generative AI (Mock, 2023).
Hallucinations
Since generative AI is based on statistical patterns, it may not always produce accurate or meaningful results. “Hallucinations” refers to computer-generated information that does not correspond to objective reality (Mair, 2023)(Alkasissi & McFarlane, 2023).
Large Language Model (LLM)
A deep learning algorithm that can recognize, summarize, translate, predict and generate text and other forms of content based on knowledge gained from massive datasets (Lee, 2023)
Machine Learning
A subfield of AI where a computer imitates human learning using data and algorithms to gradually improve its accuracy without additional programming changes or corrections (IBM, n.d.-a)(Brown, 2021)
Neural Network
Mathematical models for programming inspired by the human brain, primarily for problem solving and pattern recognition. These can be fairly simple or include multiple internal layers meant to increase learning capacity, efficiency, and accuracy (Zwass, 2023)
Prompt
Prompts are the requests/information we provide to AI to let it know what we’re looking for. They may be snippets of text, streams of speech, or blocks of pixels in a still image or video. The importance of an effective prompt has generated a new job – Prompt Engineer. (Martineau, 2023)(Popli, 2023)(Shieh, 2023).
Prompt Injection Attack
A prompt injection attack crafts a prompt that causes the AI tool to provide output that has been forbidden by its training (Selvi, 2022).
Puppeteering
Puppeteering refers to the manipulation of full-body images to perform actions and behaviors determined by AI (like a puppeteer). It is also known as full body deepfakes. For example, the image of someone who has two left feet when it comes to dancing could be made to perform as if they were a talented dancer (Jaiman, 2022).