AI, Morality, and the Image of God

AI has no business generating anything on morality or ethics. You'd be better off taking answers from a bear.

When I shared the "Sophie and the Squirrel Prince" story (from my previous post) with my local writers group, they were enthusiastic. That's anthology material, they said; you should do another, this time on morality. I agreed.

I planned to write another silly story, but two news articles popped up this week and redirected my attention. I think they are worth some discussion. I'll come back to the silly stories another time.

Befuddling AI is inconsequentially transgressive, similar to escaping the map in a video game. If you do it correctly, you can glitch to the void beyond and see how things look from the other side. It's exploratory, the pursuit of understanding, a quest to discover the framework that holds things up.

We can make a few observations about AI frameworks right now. Foundational principles, if you will:

  • AI does not reason the way a human does (see my post and the ArsTechnica article that inspired it)
  • AI is derivative (it is created by humans)

As a quick point of clarification, we're talking primarily about conversational AIs here: the kind that are intentionally designed to mimic human interaction. Other implementations of large language models present a different set of moral and ethical problems. We can talk about those another time.

I'm still debating the proper word choice, but for now, I'm going to suggest that AI (independent of other factors) is unmoral: it has no moral aspect. There are moral and ethical questions about its creation, as well as its uses, but it does not have the capacity for morality.

We can be critical of the fact that it told a user to "please die", but we cannot accuse it of making an immoral choice. Honestly, we can't even criticize it for being rude. Both morality and etiquette require a level of humanity and consciousness that AI does not possess.

It reminds me of a section in C.S. Lewis' book That Hideous Strength. It's the third book in Lewis' science fiction space trilogy (Out of the Silent Planet, Perelandra, and That Hideous Strength). If you haven't read them, you should!

My knowledge of the book is pretty rusty. It's been a decade or so since I dove in. The one section that remains in my memory involves a bear named Mr. Bultitude. In Chapter 14, Lewis writes:

Mr. Bultitude's mind was as furry and unhuman in shape as his body. He did not remember, as a man in his situation would have remembered, the provincial zoo from which he had escaped during a fire, nor his first snarling and terrified arrival at the Manor, nor the slow stages whereby he had learned to love and trust its inhabitants. He did not know that they were people, nor that he was a bear. Indeed, he did not know that he existed at all: everything that is represented by the words I and Me and Thou was absent from his mind. When Mrs. Maggs gave him a tin of golden syrup, as she did every Sunday morning, he did not recognize either a giver or a recipient. Goodness occurred and he tasted it.

Bears do not possess the moral/ethical consciousness of a human. And, while it may appear that some creatures have an understanding of what is and isn't permissible, they don't. Not in the way that a human does. Mr. Bultitude is amoral. 

Earlier this year, a question went viral on TikTok: would you rather be alone in the woods with a man, or a bear? Many women preferred the bear.

My husband pointed out that there may be some bias implicit in the question. After all, bears belong in the woods, what exactly is the man doing there? I think he's on to something because, at some point, the question morphed in my mind to this: would you rather be attacked by a man or a bear?

A bear. No contest.

Humans bear the image of God (no pun intended). A sense of morality is part of our framework. It's fundamental. It is uncomfortable to watch someone act without morals. It is deeply disturbing to see someone intentionally violate them. 

The horror of the viral question is that the man could be a threat. It's something that should never be. I'd rather be mauled by a bear than confront the realization that a human would choose to be so vile. Humans have choice and responsibility.

Here's where we circle back to AI.

AI is derivative. An AI's understanding (read: dataset) of morality is derived from the sources it has been trained on. These include moral, amoral, and immoral choices. The relative morality of each choice is indistinguishable to an AI. It is up to the designer to tell the program which is which.  

We use the term "moral compass" in reference to a person's ability to judge between right and wrong. Just as a physical compass must point north to be useful, so must the metaphorical one. And for humans, north is Jesus, God incarnate, King of all that is.

I asked ChatGPT what its moral compass was. Here's what it said:

As an AI, I don't have personal beliefs or emotions, but I'm designed to follow ethical guidelines that prioritize the well-being of users. My responses are grounded in principles such as:

1. Respect: I aim to treat all users with respect, promoting positive and constructive dialogue.

2. Safety: I avoid sharing harmful, illegal, or dangerous content. This includes refraining from promoting violence, hate speech, or any behavior that could cause harm.

3. Accuracy: I strive to provide accurate, reliable, and well-sourced information to the best of my ability.

4. Neutrality: I do not take sides in personal or political matters. My goal is to inform, assist, and support users without bias.

5. Privacy: I don't collect personal data unless explicitly stated and always prioritize user privacy in all interactions.

All of those principles run into issues of morality and ethics. They are nice human values, but they hold no meaning for an AI. You cannot respect someone whose existence you are incapable of recognizing.

Mr. Bultitude is miles ahead of AI in that respect. He may not be able to recognize personhood, but he can taste goodness. 

The full response of Google's Gemini AI in the "please die" situation violates 4 out of the 5 principles listed above. Google has allegedly taken action to prevent similar outputs, but even if they succeed, the program will not understand the difference between right and wrong. Neither will it use intelligent reasoning to identify which sources are accurate and reliable. It is crafted and curated by human agents who have biases and agendas.

If given a choice between being attacked by a man, a bear, or an AI, I'd still take the bear. AI is a tool. Tools are operated by humans, even if they are operated remotely or through code.

Which brings me to this ridiculous article from that paragon of news, the Daily Mail: Church in Switzerland is using an AI-powered Jesus hologram to take confession

It's not actually taking confession. 

In short, a church in Switzerland installed a temporary art installation called "Deus in Machina" that allows people to "talk" to an AI Jesus. The alleged purpose of the installation is to open a discussion about the role of AI in the church.

Here's a (Google translated) quote from the official installation website:

This raises important questions in the dialogue between humans and artificial intelligence: Can a machine address people in a religious and spiritual way? To what extent can people confide in a machine with existential questions and accept its answers? How does AI behave in a religious context? The “Deus in Machina” project encourages us to think about the limits of technology in the context of religion.

The official website is here (in German): Deus in machina - Immersive Realities Center

I honestly don't get why these questions are being asked. Humans and AI don't exactly have real dialogues. Can a machine address people in a religious and spiritual way? I guess. It can use religious terminology, but it has no understanding of what a term means. 

And a machine is less capable than a bear of interacting with existential questions. Mr. Bultitude's furry mind may not permit him to say I, or Me, or Thou, but he has existential experience. AI does not.

I could see AI being used (productively) to quickly locate Bible references, identify which thinkers held which theological position, or where a religious idea fits in the chronology of western thought. But, most of that is already available through an internet search. Skinning it over with AI would just make it look shinier.  

I thoroughly enjoy fictional AIs that are (for all intents and purposes) human souls in machine bodies. They are fun, but we all know it's a fiction. The idea quoted in the Daily Mail article that "AI could be used as a form of on-call pastoral support" is absolutely deranged.

We, the body of Christ, are the image-bearers. We are the ones who have the Spirit of Truth (John 16:13). We are the ones who can intercede in prayer for our brothers and sisters. We can do what AI (and bears) never will: we can talk with God.

The whole AI Jesus thing reminds me of Revelation 13:15. Read the chapter (or book) for context, but it's where the second beast (the one from the earth) orders the people to set up an image in honor of the first beast (the one from the sea):

The second beast was given power to give breath to the image of the first beast, so that the image could speak and cause all who refused to worship the image to be killed.

The second beast gives spirit (the same word used to describe the Holy Spirit) to the image (the same word used to describe us as image bearers). It's a perversion of God's divine order. 

I don't think that AI Jesus is the fulfilment of Revelation 13:15, nor do I think the event will be fulfilled through AI trickery. I think it's supernatural. 

But, perhaps this story can serve us now as a reminder. We should not go looking for God (nor expositions on morality and ethics) in false images. We find our moral compass only by aligning ourselves with Jesus, the perfect image-bearer, who has (and will) set everything right.
