The EU’s AI Act has been hailed as groundbreaking legislation that will forestall serious threats from the impending AI revolution while encouraging the development of the new technology for positive ends.
However, a close reading of the AI Act shows that its limited scope may leave it unable to deliver what it has been hyped to do.
In particular, by its own terms the Act does not apply to scientific research, or to the testing and development of AI technology before it is placed on the market or put into service. These broad exceptions, which apply even to AI deemed to be “high risk,” could leave the Act powerless to prevent the most serious risks posed by AI.
Thus, under the AI Act, developers are free to continue building their AI systems without any oversight, transparency or accountability until the day they unleash their products on the world.
This calls to mind Mary Shelley’s novel, in which Victor Frankenstein, working unsupervised in his lab, creates a new species of human in secrecy to satisfy his own ego, even as he rationalizes that he is acting in service to humanity. Of course, with no one to convince him of the dangers or force him to stop, things go terribly wrong once his secretly concocted monster is unleashed.
A number of prominent technologists have sounded the alarm that AI poses a risk to humankind because the technology could become sentient and resist human control. If that is the most fundamental threat, then allowing AI models to be developed without any oversight before they are placed in service leaves a gaping hole in the protective shield that the AI Act claims to erect.
By giving developers such unregulated free rein to do as they wish until their products are placed in service, the EU runs the risk of allowing Frankenstein’s monster to be created, at which point it may be too late to stop it. If human developers can be held legally to account for their harmful creations only after those creations are released from the lab, then what happens if the humans lose control of their monsters at or before the point of release? There is little that even the most potent law can do to regulate the malfeasance of a machine that can no longer be controlled by its creator. After all, what legal sanction, whether a fine, imprisonment or an injunction, would a machine possibly care about?
An AI product might care about being unplugged or deleted, but if it is designed to spread or replicate itself over the Internet, how would such a remedy work? Will we be able to put the genie back in the bottle if a viral, harmful and sentient AI model that cares about its own survival escapes from the lab into the networks we have come to rely on so heavily?