Garbage In; Garbage Out: Evaluating Artificial Intelligence

Until recently, nearly all content on the internet was real, that is, produced by a human. Pictures came from photographers, videos from someone’s camera, blogs and articles from human writers. Not anymore. Computer-generated images look real, videos can be deep-faked, and now it is possible to have entire blog posts or stories generated by Artificial Intelligence (AI). This entire article could have been generated by a computer, apart from any human mind, without your realizing it. The next bestselling novel could theoretically be “computed.” The same holds true for a sermon, a student’s essay or even an entire news report or website. The generated words are often convincing, though they do not proceed from a rational soul but from a sophisticated computer program that rearranges text to give the appearance that it is “saying” something. This has led many to fear that AI will take over or replace humanity.

Defining artificial intelligence

We must speak accurately about AI if we are going to think clearly about it and avoid the panic. The term “artificial intelligence” is something of a misnomer. Computers are not intelligent. Computers compute. They can only use what they are given. An old adage in the computer science world says, “Garbage in, garbage out.” If a user or programmer gives the computer bad input, it will give bad output. Computers don’t think or make truth claims or feel or engage in any of the faculties of the soul.
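The adage can be made concrete in a few lines of code. This is a hypothetical sketch (the function and the figures are invented for illustration): the averaging routine is perfectly correct, yet it faithfully reports nonsense when the data fed to it contain an error.

```python
# "Garbage in, garbage out": the computation is flawless, but the
# computer has no way of knowing that its input is wrong.

def average(values):
    """Compute the arithmetic mean of a list of numbers."""
    return sum(values) / len(values)

accurate = [98.6, 98.4, 98.7]   # body temperatures, recorded correctly
garbage  = [98.6, 98.4, 987.0]  # the same data with one misplaced decimal

print(average(accurate))  # a sensible reading, about 98.6
print(average(garbage))   # faithfully computed, yet meaningless
```

The machine executes both calls identically; it is the human who must judge whether the input, and therefore the output, is any good.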

People often mistake the human mind for a kind of computer; it is anything but. We are “fearfully and wonderfully made” (Psalm 139:14). Luther teaches us to confess with the Bible that “God has made me and all creatures” and that He gave me “my reason and all my senses” (SC, Apostles’ Creed). In the Athanasian Creed, we confess that Christ became “perfect man, composed of a rational soul and human flesh.” Only man’s God-given rational soul truly possesses speech, thought and intelligence. The angels also have speech and intelligence. A computer does not.


A computer can, however, look as though it does. AI can now produce words that no rational soul has thought or claimed as true. This is problematic: The words seem rational, but they are devoid of reason, except in a derived way. Their “reason” comes from an algorithm produced by rational man.

The philosopher John Searle proposed a thought experiment, the “Chinese room,” to illustrate what AI is doing. Imagine a man in a room who is instructed to respond to Chinese sentences by assembling Chinese characters according to a complicated flow chart. People slip Chinese questions under the door, and the man slips the rule-built Chinese answers back out. To anyone passing by, it would seem that there is a man in the room who understands Chinese, when in fact he is only following a series of instructions. He doesn’t understand a word of Chinese.
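A toy version of such a rule-following room can be written in a few lines. This is only an illustrative sketch; the rulebook entries are invented. The program produces plausible replies by pure lookup, with no comprehension anywhere in the loop.

```python
# A toy "Chinese room": replies are produced by mechanically matching
# the incoming message against a rulebook. Nothing here understands
# Chinese; the entries below are invented for illustration.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I am fine, thanks."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def slip_under_door(message: str) -> str:
    """Follow the instructions; comprehension plays no part."""
    return RULEBOOK.get(message, "对不起。")  # no matching rule: "Sorry."

print(slip_under_door("你好吗？"))
```

Scale the rulebook up from two entries to billions of statistical patterns and the lookup becomes far more convincing, but the situation is the same in kind: syntax without semantics.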

Benefits and dangers

Herein lie the benefits and dangers of artificial intelligence. If we need instructions followed quickly, a computer can often do it faster. If we need actual understanding and communication, only a rational soul will do. God sent angels, prophets, apostles and His only Son to deliver His Word to us — not a complex calculating machine.

Like any tool, AI can be used or misused. Its computational uses grow every year. In medicine, AI can detect patterns that may signal cancer or disease in patients. Programmers can use AI to develop software more quickly by auto-generating code. Foreign language chatbots can help people learn. However, the tool can also be used for deception. How will an English teacher ever assign students a five-paragraph essay as homework now that the computer can generate it for them? And if AI is used to undercut the learning process, it will debilitate us, making us lazy and gullible.

In some ways, AI programs like the popular ChatGPT are just another way to search the internet; instead of displaying pages of search results, they aggregate the results into a single text response. The difference is that the user with a human mind no longer filters the data. He relies on the AI to present the information. However, as mentioned above, AI does not make truth claims or produce new information, though it seems to. It only computes, rearranges and regurgitates.

Programmer’s bias

AI also necessarily carries with it the bias and selection of its programming, and what is excluded is often just as important as what is included. There can be no such thing as a worldview-neutral medium. Books, radio and TV all convey a worldview. AI will not be different in this respect. Current AI models are decidedly unbiblical in their worldview. For example, the programmers of ChatGPT (GPT-4) apparently do not believe in objective moral truths. When asked whether two propositions were true, that two and two equal four and that harming people for fun is bad, it would affirm only the mathematical one. It generated this text:[1]

“2+2=4” is a mathematical truth, which is objective and based on the rules and logic of mathematics. It is a universal truth that is consistent and can be proven within the framework of mathematics.

On the other hand, “Harming people for fun is bad” is a moral statement. While many people would agree with this statement, it is not an objective truth in the same way as a mathematical statement.

But it is objectively true that “harming people for fun is bad.” Only a relativist worldview suggests otherwise. Of course, your interaction with this software might produce different output, which is its own problem. Programmers might intervene with updates to give more reasonable answers to philosophic questions, but there is no getting around the fact that a computer lacks the requisite “work of the law … written on [the] heart” (Rom. 2:15) to do moral philosophy.

And so there is not only the potential for active deception, but also a sort of passive deception. People give credence to computer modeling, assuming that because a computer calculated it, it must be true. Just as many idolize the scientific method as able to solve every problem and tell all truth, so now many place the same faith in computer algorithms. Many tend to believe that if a computer gave an answer, it gave an accurate answer. But if the input was garbage or relativist, so will be the output. This has already led to defamation lawsuits against the companies behind ChatGPT and other artificial intelligence software. The legal question: If a computer program generates text that is false and defamatory of real people, should the company behind that AI be held liable? The Eighth Commandment would suggest so.

AI will continue to have a place in the modern world. The Christian knows that it, along with the world, is passing away. The Christian, made wise by the Word of God, is uniquely poised to avoid deception and use AI for good because he knows where absolute truth is found. As the apostle John reminds us, Christ is logos (John 1:1). He is the Word, the Way, the Truth and the Life. He became flesh for us, and the words He speaks to us are “spirit and life” (John 6:63).

[1] Note, I refuse to say that artificial intelligence can “say” anything the way a human can.


2 thoughts on “Garbage In; Garbage Out: Evaluating Artificial Intelligence”

  1. I am actively working on AI technology that uses, extends or is similar to ChatGPT. It is typically referred to more generically as Generative AI.

    All of the major technology companies are working on incorporating this technology into their products (Microsoft, Google, Apple, Meta and Adobe, to name a few). Like all technology, the tech itself is not good or evil; the one who uses it is responsible.

    I’m curious to know whether anyone in seminary has given thought or time to working with or on AI model training that is strictly biblical. I would be interested in connecting with anyone interested in such an endeavor.

  2. Perhaps this concern can be characterized largely as a matter of trust.

    Do we accept without question whatever anybody tells us? Rather, we are wise to regard some human sources of information as more trustworthy and helpful than others.

    Shall we accept without question all computer-generated output? Rather, we would be wise to recognize some sources of computer output as more trustworthy and helpful than others.

    “But test everything…” (1 Thess. 5:21).

    The author claims that “AI does not … produce new information.” Can any new scientific discovery be made without new information? Scientific discovery now continues in many fields through the use of computers that take very large data sets and reveal relationships in a way that indeed informs and increases our understanding.
