Two recent news events on AI may prove to be highly significant to us all.
First, Apple has announced deals with both Google and OpenAI to incorporate AI into iOS 18, its updated operating system software. This means that AI will gain a foothold in the lives of everyone who owns an iPhone, whether we are conscious of it or not. AI’s output (feeding us information) and input (taking information from us) are on track to become ubiquitous. It seems inevitable that this will make AI more powerful.
The second major event was the resignation of Ilya Sutskever from OpenAI. Dr. Sutskever is an OpenAI cofounder, served as its chief scientist, and was one of the board members who voted to oust CEO Sam Altman before reversing course and supporting his reinstatement.
As previously discussed in this blog, Sutskever is one of the most prominent insiders at OpenAI. For some time now he has sounded the alarm about moving too fast with AI. As early as 2022 he voiced the concern that AI networks might already be “slightly conscious.”
Dr. Sutskever, together with Jan Leike, led OpenAI’s superalignment team, which was charged with ensuring that AI is safe and does not “harm humanity.” Leike and a number of others have also left the company, leading some to surmise that the superalignment team has been gutted.
OpenAI is known for requiring its employees to sign strict confidentiality and non-disparagement agreements. Thus, we have only cryptic hints of what has been going on behind the scenes at the company.
However, it has been reported that one of the employees who quit, Daniel Kokotajlo, disclosed that one of the company’s goals is to train AI to the point that it surpasses human intelligence. He said that this could be either the best or the worst thing ever to happen to humanity. He further explained that he had lost faith in OpenAI’s “ability to handle AGI, so I quit.”
Recently, a former OpenAI board member, Helen Toner, confirmed that one of the reasons Sam Altman was fired in November 2023 was that he had lied to the board about several issues, including the safety processes in place at the company.
Again, however, no details were given about the nature of the safety concerns with which insiders at OpenAI have been wrestling.
Even before Sutskever’s departure from OpenAI, the meme “Where’s Ilya? What did he see?” began to circulate. Well, we now know the answer to the former question, but not the latter.
These events underscore how the development of AI is accelerating beyond all attempts to regulate it. Even the EU’s AI Act, the most comprehensive attempt to impose safeguards on AI, will not force any transparency onto AI companies before a system is launched. In fact, as previously discussed in this blog, the AI Act, by its very terms, exempts AI that is still in the scientific research, testing, or development stages from any regulation.
So even the EU, the world’s most aggressive regulator of Big Tech, has no effective tools for uncovering what insiders at AI companies like OpenAI fear about the dangers AI poses to humanity.
And concerned insiders, like Sutskever and others who left OpenAI in protest, appear to have been effectively muzzled by confidentiality and non-disparagement agreements.
So we are left to guess about whether there is, in fact, reason to fear that AI poses a threat to humanity. The question of “What did Ilya see?” remains unanswered.