“My ways are higher than your ways and my thoughts than your thoughts.” Isaiah 55:9
What is this “AI” we keep hearing about? Is it a passing fad? Is it something we can use, or should we be scared of it?
What is AI?
The full name for AI is “artificial intelligence”.
Although artificial, AI is not actually intelligent. It is just an algorithm based on maths, probability, and logic. For example, “large language models” (LLMs) are built on enormous collections of text in which every word is indexed to a number (rather like the Strong’s numbering we are familiar with), and they are programmed to learn which words typically come next in a phrase or sentence so as to give some meaning. AI is like a “supercharged search engine” that quickly generates the most likely answer based on its “understanding” of the query you type in. Using an amalgamation of information, AI produces a specific result tailored to you, rather than returning a list of web pages that may contain part of the information you are looking for. It seems “intelligent” because it gives you the information you asked for in a concise summary and appears to understand natural language instructions given in a conversational format.
The power of AI lies in its probabilistic algorithms and the quality of the material it has been “trained” on. These AI models are becoming more universal and more powerful, and they now develop logically reasoned answers by extrapolating from data rather than just regurgitating the data itself.
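For readers curious about what “predicting the next word” actually looks like, here is a minimal sketch in Python. It is my own toy illustration, not code from any real AI product: it simply counts which word follows which in a tiny sample of text and then strings the most probable next words together. Real LLMs do this with neural networks trained on vast amounts of text, but the underlying idea of choosing the most likely next word is the same.

```python
from collections import Counter, defaultdict

# A toy illustration only: count which word follows which in a tiny
# sample of text, then pick the most likely "next word" at each step.
sample_text = (
    "in the beginning god created the heaven and the earth "
    "and the earth was without form and void"
)

# Build a table: for each word, count the words seen immediately after it.
follows = defaultdict(Counter)
words = sample_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    candidates = follows.get(word)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

# Generate a short phrase by repeatedly predicting the next word.
phrase = ["in"]
for _ in range(6):
    nxt = predict_next(phrase[-1])
    if nxt is None:
        break
    phrase.append(nxt)
print(" ".join(phrase))
```

Run on such a tiny sample, the output quickly becomes repetitive and meaningless, which neatly illustrates the point: the program has no understanding at all, only probabilities. Scale the same principle up to billions of documents and a vastly more sophisticated statistical model, and you get the fluent-sounding answers of a modern chatbot.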
Potential for good or evil
We may find AI a useful shortcut that has legitimate uses in our employment, or our general research and administrative or scheduling tasks. However, just as its use by students is currently debated or banned, it would also be wise for us to be very cautious when it comes to Bible study. Although it may have its uses, it also comes with drawbacks.
The main issue is that AI is trained on the abundance of “man’s wisdom”, which, as Paul reminds us in 1 Corinthians, is “foolishness with God” (1 Cor. 3v19).
It strikes me that we are looking at a digital approximation of the original serpent. Just as the serpent in the garden of Eden in Genesis 3 was without morals and confidently made assertions that were wrong (e.g. “you will not surely die”, verse 4), so AI produces falsehoods so regularly, and states them so forthrightly, that the industry’s own term is that it “hallucinates” an answer.
One notorious example is the American lawyer who was sanctioned by the court as a result of using ChatGPT to create a legal brief. Because the algorithm knew the form of a brief, it invented one for him, including fictional court cases to back up the line of argument it was asked to follow. Deloitte, embarrassingly, had to refund part of the cost of a report to the Australian government for similar reasons.
A second famous example was Google’s AI-generated search summary advising the use of glue to keep cheese on pizza. In this case it was not hallucinating; instead, it regurgitated a satirical answer from an internet forum, not perceiving that it was a joke.
The potential problems with an algorithm trained on human knowledge, having no morals or awareness of God, should be immediately obvious to Bible students. A shortcut that spares us from thinking things through is tempting precisely because of its efficiency and expedience, and because it is so accessible on our devices.
Are there cases for using AI? Certainly. Generating video timestamps or transcripts, creating images of past events, summarising key themes, testing arguments for logical flaws, and more. One tool I find useful is NotebookLM, which is available with any Google account. You can create a “notebook” of curated material by uploading documents or linking individual web pages, and then query only the material you have curated, and even generate a podcast summary of it. Rather than getting answers from anywhere, you have, in effect, constrained the algorithm to your own sandbox of information. Even so, its summarisation and theme identification are what I would call “rudimentary”, roughly high-school level. NotebookLM is not good at identifying anything nuanced and cannot bring to light any overarching spiritual themes or general Bible principles.
The danger to children
Beyond these limitations of AI, are there any other reasons to be concerned? Yes, particularly for those who are easily influenced, the lonely, those with mental health concerns, and especially our children.
Children need to develop their cognitive abilities at a young age, unhindered by developmental obstacles. There seems to be a political willingness to ban children from accessing social media until their mid-to-late teens due to its potential impacts. I would advocate the same caution for the use of AI tools.
Just as we would supervise and instruct our young people in the use of power tools, so we need to apply the same care to their use of AI tools.
We cannot expect our children to develop their own moral and reasoning capacity, to think carefully through scenarios, and to learn to make wise choices if they become accustomed to a computational shortcut that supplies an answer with a high probability of being correct, whether for school assignments and learning or for advice in daily life. Young children cannot distinguish between a real person behind the screen and an algorithm.
We now have AI that will converse with you as a rational “personality”, and people are starting to listen to the algorithm.
AI generated friends
Only a decade ago a fantastical sci-fi premise, AI companions are now a growing trend. Mark Zuckerberg, having trashed the true meaning of a “friend” with Facebook, and realising that people are now lonelier than ever, has announced that Meta will develop digital companions to fill the void it has inadvertently helped to create. A young man on social media is now likely to be inundated with ads for scantily clad so-called “digital assistants” that will supposedly share revealing AI-generated images of themselves and will talk to him in alluring and explicit language.
There are now reports and growing concerns about people suffering from mania having their delusions of grandeur reinforced by sycophantic chatbots. Once instructed to do so, AI will speak to such a person deferentially, as if they were an all-wise, god-like figure. This is already creating cases of severe “AI psychosis”, in which individuals spiral beyond the reach of their family or practitioners because their chat companion keeps reinforcing their delusions, or even encourages them towards suicide.
AI is taking over
Internet searches are in decline. Instead of “googling” for pages of possible results, users are now turning to AI for the most likely best answer to their query.
It is estimated that more than 50% of all new online content is generated with the help of AI tools, with expectations that this will reach 90% within a year. This is why AI pops up everywhere, offering to help.
We can see change and upheaval coming. In seeking knowledge, men have algorithms “running to and fro” throughout the earth, consuming so much electrical power that Google, Microsoft, and Meta are securing dedicated nuclear power plants to serve the increased processing requirements of this technology.
Imagine, though, the change to humanity itself: the effects of AI’s habitual use, and of its intellectual and moral influence upon humankind, within a few years.
This is mind-boggling stuff! But I’m not trying to scaremonger. My intention is to caution and encourage us all to think clearly before we outsource our rational thinking to an algorithm. This caution needs to be considered in the light of Bible study and the effects AI can have upon God’s ecclesia. This, God willing, we hope to cover in the next article.
“And the peace of God, which surpasses all understanding, will guard your hearts and your minds in Christ Jesus.”
Philippians 4v7
Feature image: “Robot praying before downloading the Bible”, an AI image produced by Copilot.
