
Integrity Beyond Academics: Does Virtue Recede as AI Moves to the Foreground?

By Emma Cohen

When I first opened ChatGPT for an English project on AI, I thought it was exciting, but maybe not life-changing. You may similarly be wondering: what’s the big deal about AI? We already carry around portable AI voices in our iPhones, but Siri is rarely helpful beyond a quick text or Google search. In reality, AI is much more present in our lives than it seems at first glance. Consequently, its ethical implications spread far beyond the scope of a phone call.


When a tool like ChatGPT is introduced to us with little warning, we are naturally drawn to its potential to enhance our lives. For those of us who would rather delve into a YA novel than study the inner workings of a computer, ChatGPT’s astounding fluency and lifelike sense of humor take us by surprise. In that moment of shock, we lose our instinct to assess the danger in front of us. As middle and high schoolers, we have all been cautioned against exploiting AI in academics. Still, the question stands: is plagiarism really the greatest danger of AI development?


As humans age, we are trained in the basic principles of our world. We learn to distinguish right from wrong, to interact with others, and to act with intention. As AI personalities become increasingly authentic, we risk losing the essential distinction between reality and fantasy. At the core of humanity is the ability to empathize: to put ourselves in someone else’s shoes and act accordingly. Robots operate to meet a single goal; that is, to fulfill their programmed function. Systems like ChatGPT are loosely modeled on the human brain, stripped of its uniquely human intricacy. Unlike our brains, with their ability to consider paradoxes and form nuanced ideas, AI merely predicts the most likely next word in a chain of words. This process cannot replace the complexity of human thought and writing, but it can hinder our ability to fulfill our role in society by interfering with our intellectual growth.


How do we respond to a device like this: a human-esque yet completely mathematical creation? At the moment, ChatGPT performs based on a predetermined set of information. What happens when it gains access to the myriad ideas across the internet? How do we stop it from spewing out biased, inaccurate, and nonsensical information? And how do we stop ourselves from believing it? You may believe that, as a human with the ability to think critically, you can easily discern fact from fiction. But our innate sense of truth only takes us so far.


If you plan to take AP Psychology, you will learn about the Milgram experiment: a test of people’s willingness to obey an authority figure when told to act against their own morals. Every participant agreed to administer some level of electric shock to a person they believed to be real, and 65% delivered the maximum, potentially fatal voltage when instructed to by the experimenter. This illustrates the malleability of human choices and values, especially when confronted by a powerful figure.


How will people respond to this new, ‘all-knowing’ authority figure with access to nearly unlimited information? Does AI’s apparent lack of personal bias mean we can trust it? The issue is that the technology lacks metacognition: an understanding of what it does and doesn’t know. According to neuroscientist Anil Seth, AI will never be able to replace human roles because of this fault. AI fills in gaps with “fabulations,” the most logical next series of words. But our world is not purely logical. It’s easy to copy and paste a homework assignment into a chatbot and justify it because you’re still ‘learning’ the answers to questions. If the point of education were simply to get to the answer, this would be a feasible solution, but the process of learning has value in itself.


AI undermines our development as moral, intellectual, and critical thinkers. We can acknowledge AI’s impressive ability to clarify topics without overlooking its inability to form nuanced ideas. We must hold on to our own beliefs as technology develops in the coming years and prioritize the quality and purpose of education over convenience. To return to the question of AI’s greatest danger: cheating is certainly a concern. In the bigger picture, however, AI threatens our ability to empathize, our personal convictions, and our role in society. Sure, increasingly human-like technology offers an ‘easier’ life, but at what cost?

