A first reaction to this statement might be, “Come on, stop it already.” But let’s try to think about technology-induced risks from a professional standpoint.
The Chernobyl disaster, after all, was not very large in terms of immediate death toll (about 60 people), so you could rephrase the question as “Can an LLM kill 60 people?”, and many more people would answer “probably yes” to that than to the original statement.
That is fewer than the death toll of the Boeing 737 MAX crashes caused by a software error, and who remembers that now? Next time it could be an LLM-generated piece of code that slips through validation testing the same way human-written code sometimes does.
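To make that concrete, here is a minimal, hypothetical sketch (invented for illustration, not taken from any real incident or from the original article): a subtly wrong function whose validation tests all pass because they never exercise the failing edge case.

```python
# Hypothetical example: a subtle sign-handling bug that the test suite misses.

def clamp_rate(requested: float, max_rate: float) -> float:
    """Clamp a requested control rate to the allowed maximum magnitude.

    Bug: when the magnitude is too large, the function always returns the
    positive limit, so a large negative request silently flips sign.
    """
    if abs(requested) > max_rate:
        return max_rate  # should be: math.copysign(max_rate, requested)
    return requested


def test_clamp_rate() -> None:
    # Typical validation cases: all pass, none hits the negative-overflow path.
    assert clamp_rate(0.5, 1.0) == 0.5
    assert clamp_rate(2.0, 1.0) == 1.0
    assert clamp_rate(-0.5, 1.0) == -0.5


if __name__ == "__main__":
    test_clamp_rate()
    print("all tests passed")  # and yet clamp_rate(-2.0, 1.0) == 1.0
```

Whether such a function was written by a human or generated by a model, the defect looks the same to a reviewer; what changes is the volume of code produced and how carefully it gets read.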
One of the reasons Chernobyl became a major political issue was its background: a massive build-out of nuclear plants of an untested design, justified by the promise of safe energy, with reactors sited in the vicinity of cities.
Charles Perrow’s 1984 book “Normal Accidents: Living with High-Risk Technologies” described the risks of major investments in untested technologies and the potential failure modes of such hasty decisions.
People laughed for two years at the probability of a “normal accident” at a nuclear plant, but not after 1986, when the Chernobyl disaster illustrated perfectly what a normal accident is. All those multi-billion investments in nuclear plants next to residential blocks became politically toxic, and Germany eventually phased out its plants altogether.
The same could hypothetically happen to generative AI investments if we don’t learn the lesson and read only “state-of-the-art” articles. Sometimes wisdom comes from much older times; keeping a human in the loop is an ancient problem.
What does the release of ChatGPT have in common with the Chernobyl plant disaster? — Zestic.ai