Scientia potentia est.
Scholars debate whether the statement “knowledge is power” should be credited to Francis Bacon or Thomas Hobbes. Either way, there is little debate about the veracity of the statement itself: knowledge is indeed power.
Is knowledge a power for good? Or is it a power for ill? That’s where it gets messy.
Scripture speaks of knowledge and learning. The Psalmist declares, “Make me to know your ways, O Lord; teach me your paths. Lead me in your truth and teach me, for you are the God of my salvation; for you I wait all the day long” (Psalm 25:4–5). Knowledge is a powerful force for good when it reflects God’s goodness.
And yet, knowledge of the wrong kind can be problematic: “When you come into the land that the Lord your God is giving you, you shall not learn to follow the abominable practices of those nations” (Deut. 18:9). Learning falsehood, which leads to living falsehood, can be devastating for individuals and entire communities.
The power of knowledge is not limited to humans. Machines, too, possess a kind of intelligence and the capacity to learn.
This may sound futuristic or at least like a relatively new development. But it’s actually an ancient idea.
Intelligent machines, robots and non-human sapience have a long history in literature. Greek mythology tells of sculptures coming alive, possessing intelligence and causing problems for humanity. Goethe’s Faust describes a “homunculus,” a small intelligent creature created in a bottle. Mary Shelley’s Frankenstein, a novel about a doctor who creates a sapient creature, further fueled interest in artificial intelligence.
Even the term “artificial intelligence” is getting along in years. While the conceptual origins of artificial intelligence stretch far back into philosophy, literature and mathematics, the term came out of the field of computer science. The Dartmouth Workshop, held in 1956, is widely regarded as the birth of modern artificial intelligence. The computer scientists at this conference discussed how human intelligence might be simulated by machines. One of the attendees, John McCarthy, successfully argued that the burgeoning field should be known as “artificial intelligence.”
Following the Dartmouth Workshop, artificial intelligence (AI) charged ahead with grand plans and promises. Over the next twenty years, the field of AI made steady advances along with some spectacularly overblown claims. For example, many people surmised that a fully intelligent machine would become a reality within the next 20 years. In 1957, the economist Herbert Simon predicted that AI could beat humans in a game of chess within 10 years. (It ended up taking 40 years for that to happen.)
The simplest characterization of AI is a machine performing a task that would otherwise require human intelligence. In this sense, we use AI on a daily basis: Search engines such as Google or Bing and virtual assistants like Siri and Alexa are all examples of AI. AI might also include suggested products on websites, banking apps, customer service chat bots and speech-to-text tools.
Conversations about AI often fall into two different categories: weak AI and strong AI. Weak AI performs a narrowly defined task. For example, virtual assistants can recognize speech and search the internet. However, these virtual assistants cannot reason, plan or integrate knowledge the way that a human being can. They are really good at what they have been made to do and utterly incapable of moving beyond this narrow form of intelligence. Unlike human intelligence, which is a mile wide and an inch deep, weak AI is a mile deep and an inch wide.
Strong AI, also called artificial general intelligence, is a hypothetical concept — at least for now. Strong AI would be capable of the full range of human intelligence, such as commonsense knowledge, advanced reasoning, planning, unguided learning and natural language communication. Strong AI would be machine intelligence that is both a mile wide and a mile deep. (This is also where the conversation begins to move toward science fiction and dystopian futures.)
An important subset within AI is machine learning. In its most basic form, a computer using machine learning improves at a task as it processes more data, without being explicitly programmed for each improvement. These machines become more capable as they encounter more data, adjust their neural networks and continually refine their outputs.
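The core idea — that a machine’s predictions improve as it sees more examples — can be shown with a deliberately tiny sketch. This is a toy illustration, not any real machine-learning system: a simple least-squares fit that “learns” the relationship between inputs and outputs, and gets closer to the truth as the data grows.

```python
# Toy illustration only: a "model" that learns the rule y = 2*x
# from noisy examples, improving as it sees more data.

def fit_slope(examples):
    """Least-squares slope through the origin: sum(x*y) / sum(x*x)."""
    sxy = sum(x * y for x, y in examples)
    sxx = sum(x * x for x, _ in examples)
    return sxy / sxx

# The underlying truth is y = 2*x, but each observation is slightly noisy.
data = [(1, 2.5), (2, 3.9), (3, 6.1), (4, 8.05), (5, 9.98), (6, 12.01)]

# The more examples the model sees, the closer its learned slope gets to 2.
for n in (2, 4, 6):
    slope = fit_slope(data[:n])
    print(f"after {n} examples, learned slope = {slope:.3f}")
```

Nothing here is “intelligent” in a human sense — the program simply extracts a pattern from data, which is the heart of what the paragraph above describes at vastly larger scale.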
Machine learning increasingly occurs without human assistance. Supercomputers are given access to massive amounts of data so that the machine can process and learn from that data. Because this learning happens with little direct human oversight, humans may not fully know how or why the computer arrived at a particular output. Why is Amazon suggesting this particular book to you? Only the machine knows. Why is this particular stock soon going to rise in price? Only the machine knows.
A tool for helping or hurting?
Machine learning allows computers to become super-savants on certain subjects. For instance, machine learning allows AI to detect fraudulent activity or diagnose cancer significantly faster — and often better — than humans. This powerful knowledge is rapidly transforming transportation, medicine, utilities and many other areas of life.
While the world wants to harness the power of AI and machine learning, it is important for us to pause and ask some basic questions: What makes a tool either helpful or harmful? How do we know if a technology is either good or bad?
Martin Luther, though he lived hundreds of years before AI, can help us begin to answer these questions. Luther wrote this about the technology of his day:
Just look at your tools—at your needle or thimble, your beer barrel, your goods, your scales or yardstick or measure. … All this is continually crying out to you: “Friend, use me in your relations with your neighbor just as you would want your neighbor to use his property in his relations with you.” (LW 21:237)
A simple test of AI technology is the neighbor. Does our use of AI benefit or hurt our neighbors? Is this technology being deployed for the health, well-being and flourishing of all our neighbors or just a few of them? Will this technology help our neighbors in the short term and then harm them in the long term? These are vital questions for Christians to ask regarding AI and machine learning.
Presently, AI and machine learning are running into issues regarding racism and equity. Racial prejudices in the data used to “teach” AI systems have resulted in skewed outputs. For example, suppose that an AI system used data about previous hiring decisions to predict the best candidates for a job. If this data contained racial biases, the computer would replicate these same prejudices. Another issue surrounding AI has to do with equity. Machine learning is increasingly used to predict which stocks will soon rise on the stock market. Since cost makes this technology inaccessible to most people, a small number of wealthy people could use it to make buckets of money.
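The bias-replication problem described above can be made concrete with a deliberately simplified sketch. Everything here is invented for illustration — the groups, the records and the naive “model” — but it shows the mechanism: a system that predicts hiring outcomes from historical frequencies will faithfully echo whatever prejudice those records contain.

```python
# Hypothetical, deliberately tiny example of bias replication.
# Historical hiring records: (applicant_group, was_hired).
# The groups are equally qualified, but the past outcomes are unequal.
history = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def hire_rate(records, group):
    """Fraction of past applicants from `group` who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

def predict_hire(records, group, threshold=0.5):
    """A naive 'model': recommend hiring if the historical rate clears a threshold."""
    return hire_rate(records, group) >= threshold

# The model reproduces the bias baked into its training data.
print(predict_hire(history, "group_a"))  # True  (75% historical hire rate)
print(predict_hire(history, "group_b"))  # False (25% historical hire rate)
```

Real hiring systems are far more sophisticated than this, but the principle is the same: a model trained on prejudiced decisions learns the prejudice along with everything else.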
The future in a world of AI could be even more problematic. Some have argued that “misaligned intelligence” will result in despotic supercomputers. An extreme example for the sake of illustration is an intelligent toaster setting the whole house on fire in an attempt to maximize toast production. While this example is humorous and unlikely, it raises the issue of what happens if AI becomes a harmful power in this world.
Perhaps the greatest concern for Christians is how people use religious terms to describe AI and machine learning. Many people talk about how this technology might have the power to save the world and humanity. Some people talk about how this technology will deliver the world from evil as it cures diseases, curbs hunger and cultivates world peace. This technology clearly has the potential to become an idol.
Even more troubling are the ways that AI may become a false god. For example, Paul Ford writes in an article in MIT’s Technology Review:
A superintelligence would be godlike, but would it be animated by wrath or by love? It’s up to us (that is, the engineers). Like any parent, we must give our child a set of values. And not just any values, but those that are in the best interest of humanity. We’re basically telling a god how we’d like to be treated.
When a technology is worshiped as a god, lauded as a savior and benefits only some people, it has become a harmful tool. We’re not there yet. And let’s hope we never get there.
Knowledge is power. But power can be dangerous.
A hearty measure of humility and trepidation is needed as we create and use new forms of AI and machine learning. In the presence of God, we are confronted with our limited knowledge. This is exactly what happened to Job when he thought that he knew a lot: “Then the Lord answered Job out of the whirlwind and said ‘Who is this that darkens counsel by words without knowledge? Dress for action like a man; I will question you, and you make it known to me’” (Job 38:1–3).
Only God knows the future of AI. Will these tools be used to help or harm our neighbors? God knows, but we do not. And, because of our limited knowledge, we must approach these technologies with caution and humility, prayer and repentance.