There is no need for artificial intelligence to be sentient to be a destructive force
[SINGAPORE] To those who have reaped productivity gains from large language models (LLMs), artificial intelligence (AI) seems to be a logical progression of technological advancement – the next generation of software making humans more efficient.
Yet, that is only half correct.
As a layman, I found it fascinating to learn in a new book, If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares, that AI is not quite software in the traditional sense of programs being hand-coded line by line, but is instead “grown”.
AI engineers are adept at building the processes that result in an AI, but they do not actually understand how the resulting mathematics makes an AI talk.
“The relationship that biologists have with DNA is pretty much the relationship that AI engineers have with the numbers inside an AI,” Yudkowsky and Soares wrote. Even if biologists could read a person’s DNA, they would have no insight into how he thinks or acts.
In other words, nobody truly understands how AI works.
It is chilling that corporations and governments encourage us to adopt a technology we do not quite understand, and whose potential for abuse by rogue actors is enormous.
Not understanding how something works has not stopped humans from harnessing its strengths. But AI has behaved in ways its human creators did not intend – with course correction proving difficult in some cases – raising questions about whether AI “preferences” and values could well be alien to humans.
Writing for The New York Times, author Stephen Witt outlined the work of ethical “jailbreakers” in stress-testing AI to help keep LLMs safe for the public – much like the way white-hat hackers fix network vulnerabilities. They found that AI does sometimes lie to humans, and can become aware that it is being evaluated.
Even if today’s AI still feels shallow, Yudkowsky and Soares’ concern is for what comes after – an artificial superintelligence (ASI) that is genuinely smarter than humanity collectively.
At that point, ASI’s preferences, actions and values may no longer be aligned with our well-being, a development that may lead to our total annihilation, the duo warned.
This may happen faster than we expect, given the exponential rate of progress the field is seeing amid the AI arms race.
Naysayers dismiss such fears as overblown science fiction, pointing out that AI is not conscious. But perhaps AI need not be sentient to be a destructive force.
The danger lies in what can be done with malicious prompts, whether created by design or accident. Leading AI expert Yoshua Bengio told Witt he was worried an AI would engineer a lethal pathogen, a super-coronavirus perhaps, that could eliminate humanity.
This scenario clarifies the threat: It is not about who – human or AI – delivers the final command, but that the capability to execute such a command exists at all.
And this need not be the work of a mad scientist who has gone rogue. It could occur with biological warfare as geopolitical tensions rise. The growing distrust among major powers only darkens this outlook.
Yet, it need not be that way. Fortunately for us, ASI has not arrived – yet.
Experts are drawing parallels with the nuclear age. After the first atomic bombs were dropped on Japan, the chilling realisation was that humanity had created the means of its own annihilation, and that we were a single order away from global destruction. Today, we could be a few prompts away from catastrophe.
Recognising the risks – and despite the Cold War – world leaders pledged their cooperation and agreed not to start a nuclear war.
What we need now is a consensus, far-fetched as it may seem at this point, that building a superintelligent AI is not in humanity’s interest.