While the reasons for Sam Altman’s firing remain elusive, credible media reports point to a rift at the company between those who are concerned about the dangers of rapid AI development and those who want to forge ahead to cash in on the technology’s potential commercial applications.
The public at large is owed a more detailed explanation of this debate within OpenAI. It is incomprehensible that an up-and-coming company like OpenAI, which has rocketed to a position where many believed it would become the next tech giant, would suddenly fire its CEO and put its exceptionally bright future at risk.
While I disagree with Elon Musk on many subjects, he was right when he tweeted: “Given the risk and power of advanced AI, the public should be informed of why the board felt they had to take such drastic action.”
According to the New York Times, the moving force behind Altman’s ouster was Ilya Sutskever, a researcher and co-founder of OpenAI. Sutskever has reportedly grown increasingly concerned about the dangers of OpenAI’s technology and felt that Altman was not giving the risks sufficient consideration.
I, for one, would like to know the full scope of the concerns of Sutskever and others at OpenAI. Sutskever apparently has not been an AI “doomer” or “decel,” but have his views moved in that direction? If so, why? Does he share the view of some that AI can become sentient and begin making independent decisions that are harmful to humans? Does he believe that AI will someday make human labor obsolete? Does he have concerns about humans losing control of military uses of AI?
Insiders in the industry like Sutskever are in a better position to know these risks than anyone else. It is their duty to come forward with full disclosure about what AI is capable of doing, now and in the future.
Previous posts on this site have discussed recent efforts to regulate AI, including Biden’s limited executive order, the UK’s AI Safety Summit, and the pending legislation in the EU. But these efforts have not really grappled with the more dire predictions. Just because those who are ringing the alarm bells are ridiculed as alarmists or “doomers” does not mean that they should be ignored. If their concerns are unfounded, then they should be openly debated and dispelled in public fora.
And as I also have discussed, fundamental debates about AI were lacking at the recent Web Summit in Lisbon. Panels on the subject consisted largely of cheerleading, downplaying of the technology’s capabilities, and uncritical optimism about how it will advance technology around the world.
We should all take notice when people who are closer than anyone to the state of the art of this technology take radical action against their own economic interest out of concern for the dangers of AI.
If nothing else, there should be more widespread information about this new technology. Those who are in a position to regulate it need to do whatever it takes to understand it as well as possible.
Perhaps there is not much that ultimately can be done to slow the advance of AI or mitigate its risks. After all, society still has not been able to do much to mitigate or regulate the dangerous aspects of the Internet. But we would all be better off knowing what we may be in for with AI, both good and bad.