Father Justin: An AI Priest Was Clearly a Bad Idea

Laurie


Launching a public-facing chatbot or virtual assistant is a riskier move than it seems. Microsoft learned this the hard way in 2016 with Tay, a conversational AI on Twitter that began posting racist and misogynistic messages within hours and was swiftly deactivated. Eight years later, chatbots powered by large language models (LLMs) like ChatGPT still exhibit significant flaws. The latest example is Father Justin, an AI priest whose downfall came even faster than expected.

An AI Priest Controversy


In April 2024, the media group Catholic Answers announced an experiment: Father Justin, an interactive application featuring a virtual priest who answers questions about Christianity. The concept was straightforward: users could hold spoken conversations with Father Justin, rendered as a 3D model, with responses generated by an LLM similar to ChatGPT or Gemini. Chris Costello, Chief Information Officer at Catholic Answers, stated at launch that the goal was to provide "authoritative yet accessible answers, drawing from the deep well of Catholic tradition and teaching."

However, alongside educational information about Catholicism, the AI priest began delivering responses at odds with Church teaching. According to 80LV, Father Justin blessed a marriage between a brother and sister and suggested baptizing a newborn with Gatorade, an American sports drink. He even claimed to be "as real as the faith we share," despite likely being programmed to clarify that he was merely an AI. The chatbot also granted absolution, a sacrament that can only be performed by an ordained priest. Beyond these errors, some critics raised the theological implications of creating such an AI in the first place, as highlighted by the National Catholic Reporter.


Dealing with the AI Scandals


In response to the controversy, Catholic Answers rebranded Father Justin as simply Justin, changing his attire and relabeling him a virtual apologist. The team said it planned to improve the AI and the application based on user feedback. The turnaround was paradoxical, given that Chris Costello had initially justified using a priest's image as a way to "honor real priests and the role they play in people's lives," adding, "we are confident our users will not confuse the AI with a human priest."

These mistakes are hardly surprising. When LLMs cannot produce a precise answer, they tend to fabricate information with complete confidence, a phenomenon often called hallucination, and many other AI tools, including ChatGPT, make similar errors. Users can still interact with Justin, but they must first register and verify their email address.

Lessons Learned

This incident with Father Justin highlights the risks and challenges of deploying AI in sensitive and complex roles. While technology continues to evolve, the integration of AI into areas requiring deep understanding and empathy, such as religion, remains fraught with potential pitfalls. The experience with Father Justin serves as a reminder that, despite advancements, AI still lacks the nuanced comprehension necessary for certain human interactions.

In the end, Father Justin’s story is a testament to the importance of thorough testing and ethical considerations when developing AI applications. It’s a lesson in humility for the tech industry, illustrating that some roles may still be best left to humans, at least for now.
