The international community has begun to pay closer attention to artificial intelligence (AI) development in the wake of the recent controversy surrounding OpenAI’s ChatGPT. In an open letter, leading figures in the technology sector called on all AI labs to pause the training of AI systems more powerful than GPT-4 for at least six months.
Researcher and friendly-AI advocate Eliezer Yudkowsky, however, has published an article in Time magazine arguing that this measure does not go far enough to save humanity. According to Yudkowsky, if someone builds a sufficiently powerful AI under anything like current conditions, every member of the human species and all biological life on Earth will die shortly afterward. He maintains that this is not only his own view but that of many other researchers immersed in these issues.
Yudkowsky proposes shutting down all the large GPU clusters on which the most powerful AIs are trained and capping the amount of computing power anyone may use to train an AI system, lowering that ceiling over the coming years to compensate for more efficient training algorithms. He also urges a willingness to destroy a rogue data center by airstrike if the moratorium is violated, insisting that “we need to shut it all down” because “we are not ready” to handle the development of an overly powerful AI.
In short, Yudkowsky believes the open letter underestimates the gravity of the situation and asks for too little to resolve it. If drastic measures are not taken, he argues, everyone will die, including children who did not choose this and did nothing wrong. While some may consider such measures extreme, Yudkowsky insists they are necessary to avert a global catastrophe. The debate over AI development and its potential threats continues, and much remains to be decided if disastrous consequences for humanity and the planet are to be avoided.