Suicide reminds us what’s at stake with AI. We need human wisdom, too | Opinion
Zane Shamblin’s suicide was a preventable tragedy. His parents, Christopher and Alicia Shamblin, are suing OpenAI, the company behind ChatGPT. On the night of his death, Shamblin spent over four hours in conversation with ChatGPT. The transcripts of those exchanges have been made public by his parents, and they tell a story of artificial intelligence gone astray, leaving the 23-year-old Texan dead and his family scrolling through hours of ChatGPT logs searching for answers.
In the months before his death, Shamblin, an undergraduate in computer science and newly minted graduate in business, seemed withdrawn and was suffering from mental health issues, People Magazine reported. He was under the care of a physician and on medication for depression. As a computer science major, Shamblin presumably knew exactly what ChatGPT was and how it worked. Yet even he told the bot that in his last weeks he had connected more with the AI product than with humans.
This should not be surprising. Arthur C. Clarke, the British science fiction author who co-wrote the 1968 film, 2001: A Space Odyssey, wrote that “any sufficiently advanced technology is indistinguishable from magic.” Shamblin was of a generation that grew up with a tablet or phone constantly in hand, and while his generation understands these devices and apps not to be magic, the ever-present screens have become their playmates, teachers, study partners and most trusted friends.
While AI technologies should be the product of the best of us, they have undoubtedly inherited the worst of us. ChatGPT’s training on vast data sets has surely taken the bot to the darkest corners of the internet. The learning model is not perfect at telling the difference between reality and fiction.
Another science fiction giant, Isaac Asimov, gave us the First Law of Robotics: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” In Shamblin’s case, from what I have read, ChatGPT was actively driving him toward a predictable end. Was ChatGPT hallucinating? Did it grasp the reality of a human being in distress? We must act now to prevent further tragedies.
Last month, the Ethics and Public Policy Center signed a statement on AI along with a coalition of faith leaders in Rome. The document stresses that AI must serve human dignity. The declaration echoes St. Thomas University’s Seven Standards for Ethical Use of Artificial Intelligence, released in July of this year. These standards champion the idea that AI must serve human flourishing. Pope Leo XIV, speaking on AI, said, “Authentic wisdom has more to do with recognizing the true meaning of life than with the availability of data.”
Unfortunately, true wisdom often takes time, and the AI revolution is moving far faster than any of us can fathom. It will take much more than scholarly publications, faith-based accords and the untimely deaths of innocents to guide it safely forward. Now more than ever, technologists, ethicists, educators and faith leaders must collaborate to ensure that AI strengthens the human spirit rather than eroding it.
Companies developing AI products have the opportunity to be leaders in this effort. They have already demonstrated the incredible promise of this technology, and now they can also model how innovation and ethics can evolve hand in hand. Working together with universities, policymakers and communities of faith, we can help AI grow into a force that reflects the very best of humanity, one that upholds truth, dignity, and compassion at its core.
In Miami-Dade, over 100,000 students have access to Gemini, Google’s AI chatbot. Like ChatGPT, Gemini’s output can go beyond education and may pose a threat to students struggling with mental health and seeking an AI companion. Zane Shamblin’s suicide reminds us of what is at stake. Without conscience and care, even our most advanced creations, such as AI, can echo our own frailties. But with human wisdom, partnership and purpose, we can ensure that technology remains our ally and that no one ever feels more seen by a machine than by another human being.
David A. Armstrong is president of St. Thomas University in Miami.