Reports about the pending EU AI Act indicate that it is centered on risk assessment, yet it may ignore the biggest risk that AI poses.
Although much of the criticism of the AI Act has been that it is too restrictive and thus might impede the development of AI in Europe, could it be that it does not go far enough in addressing the biggest danger of AI?
The details of the EU AI Act remain a bit vague while the final version of the legislation is ironed out in the European Parliament. However, the basic framework released by the EU indicates that AI systems will be categorized as Minimal Risk, High Risk, or Unacceptable Risk, with differing levels of regulation for each.
Which systems fit into which categories is to be determined primarily by evaluating the potential effects of human use of the AI system. For example, whether a system is High Risk is to be determined by whether it has an adverse effect on fundamental rights or undermines product safety regulations.
Uses of AI for purposes such as social scoring, biometric identification, or scraping of images are deemed Unacceptable Risks.
But nowhere in the EU’s description of its risk categories is there any mention of the risks posed by AI systems becoming sentient or so powerful that humans lose control of them.
Such concerns are sometimes dismissed as the overly pessimistic rantings of “doomers” that should not be taken seriously. However, the recent turmoil at OpenAI, and the information recently coming out of that company, support the notion that there is legitimate cause for concern.
While we still don’t know the whole story about Sam Altman’s firing and rehiring at OpenAI, some reports have suggested that it had to do with Altman’s decision to move forward on technology that had been deemed too dangerous to move quickly on. The lack of transparency about what actually occurred only adds to the concern.
More concretely, as reported by Wired, OpenAI has devoted a fifth of its computing power to its Superalignment project, an effort to develop systems to control “superhuman AIs.” The Superalignment project is co-led by Ilya Sutskever, an OpenAI cofounder, its chief scientist, and one of the board members who voted to oust Altman before reversing course and supporting his reinstatement. Sutskever has been one of the insiders at OpenAI expressing concern about the dangers of the technology developing too fast.
As Leopold Aschenbrenner, an OpenAI researcher involved with the Superalignment team put it: “We're gonna see superhuman models, they're gonna have vast capabilities, and they could be very, very dangerous, and we don't yet have the methods to control them.”
But there remains a big question about whether the Superalignment project, which relies on less powerful AI to supervise more powerful AI, ultimately will work. In fact, Will Knight, the author of the Wired article, casts doubt on the whole effort:
“Experiments in so-called AI alignment also raise questions about how trustworthy any control system can be. The heart of the new OpenAI techniques depend on the more powerful AI system deciding for itself what guidance from the weaker system can be ignored, a call that could see it tune out information that would prevent it behaving in an unsafe manner in future. For such a system to be useful, progress will be needed in providing guarantees about alignment.”
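To make the idea of “weak supervising strong” concrete, here is a deliberately simplified sketch in Python. It is not OpenAI’s actual method; the models, data, and the 0.9 confidence threshold are all invented for illustration. A small “weak” model labels data for a larger “strong” model, and in a second pass the strong model is allowed to override weak labels it is confident are wrong, which is exactly the dynamic the quote above worries about.

```python
# Toy illustration of weak-to-strong supervision (hypothetical, not OpenAI's method).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic task standing in for a real problem.
X, y = make_classification(n_samples=6000, n_features=20, n_informative=10, random_state=0)
# Small labeled set for the weak supervisor, a large "unlabeled" pool, and a held-out test set.
X_weak, X_rest, y_weak, y_rest = train_test_split(X, y, train_size=200, random_state=0)
X_pool, X_test, y_pool, y_test = train_test_split(X_rest, y_rest, test_size=0.3, random_state=0)

# 1. Weak supervisor: a simple model trained on limited data.
weak = LogisticRegression(max_iter=1000).fit(X_weak, y_weak)
weak_labels = weak.predict(X_pool)  # imperfect guidance

# 2. Strong student trained only on the weak supervisor's labels.
strong = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300,
                       random_state=0).fit(X_pool, weak_labels)

# 3. Second pass: the strong model overrules weak labels where it is confident.
proba = strong.predict_proba(X_pool)
confident = proba.max(axis=1) > 0.9  # arbitrary illustrative threshold
revised = np.where(confident, proba.argmax(axis=1), weak_labels)
strong_revised = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=300,
                               random_state=0).fit(X_pool, revised)

print("weak supervisor accuracy:      ", accuracy_score(y_test, weak.predict(X_test)))
print("strong (weak labels) accuracy: ", accuracy_score(y_test, strong.predict(X_test)))
print("strong (self-revised) accuracy:", accuracy_score(y_test, strong_revised.predict(X_test)))
```

In this toy setting, letting the stronger model discard weak guidance often improves accuracy, but nothing in the setup guarantees the overrides are correct: the same mechanism that lets the student fix its supervisor’s mistakes also lets it discard guidance it simply disagrees with.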
It seems that this fundamental risk of humans losing control of AI should not be ignored in attempts to regulate the new technology. When researchers at a leading AI company sound the alarm that dangerous superhuman AI models are fast approaching, everyone should take notice. If a company like OpenAI loses control of its monster – as Dr. Frankenstein did – then even its best efforts to comply with the EU AI Act may not be enough.