Last week, several former employees of OpenAI issued an open letter warning that AI companies are putting profits ahead of safety.  Among the numerous risks enumerated in the letter is the possibility of “the loss of control of autonomous AI systems resulting in human extinction”.

The letter discloses that AI companies are in possession of information about the capabilities and risks of AI.  It criticizes the lack of transparency and government oversight, and asks the companies to take specific actions to allow current and former employees to speak about the risks of AI.

The New York Times reported on the letter, as well as numerous statements made by one of its signatories, Daniel Kokotajlo.  Mr. Kokotajlo refused to sign a non-disparagement agreement when he departed OpenAI, thereby putting his vested equity, valued at $1.7 million, at risk.

Alarmingly, Mr. Kokotajlo believes there is a 50 percent chance that Artificial General Intelligence (AGI) will be developed by 2027.  He also believes there is a 70 percent chance that advanced AI will destroy or catastrophically harm humanity.

Mr. Kokotajlo is not the only knowledgeable person issuing such warnings.  Stuart Russell, a leading AI researcher, recently stated: “Even people who are developing the technology say there is a chance of human extinction. What gave them the right to play Russian roulette with everyone’s children?”

Add to this the fact that leading OpenAI researchers like Ilya Sutskever and Jan Leike, as well as other members of OpenAI’s superalignment team (charged with safety), have left the company.  It has been reported that this exodus was the result of frustration over OpenAI’s management not taking safety seriously enough.

Such dire warnings and predictions have been described as being alarmist, with AI supporters disparaging critics as “doomers”.

Neither I nor anyone else outside the industry has sufficient information or expertise to judge the potential dangers of AI.  However, it is clear that an increasing number of insiders who have worked at AI companies are shouting danger alerts from the rooftops.  Some of them are doing so despite the risk of losing millions of dollars or being sued for violating non-disclosure agreements.

As previously discussed in this blog, even the EU’s AI Act, which is by far the most comprehensive regulation of the technology, does not impose any transparency requirements until after an AI system has been launched.  Thus, the mad scientists at OpenAI and elsewhere are free to continue building their Frankenstein monsters in secret, without any oversight to ensure that their creatures can be controlled once they are unleashed.

As disclosed by Mr. Kokotajlo and others, even OpenAI’s own board of directors has been kept in the dark about the public testing of new versions of that company’s product.  Can these companies be trusted to inform the public about significant perils when they do not even see fit to keep their own boards in the loop?

Both the public and policymakers should start taking the warnings of AI insiders more seriously.  Until more is known, these insiders should not be dismissed as “doomers”, given that AI companies have demonstrated that they lack a commitment to sufficient transparency.  If there is not much for humanity to worry about from this technology, then the AI industry should not fight against government oversight.