This past week has seen a flurry of international activity to address the rapid development of AI.
First, on October 30th the G7 leaders announced the adoption of an International Code of Conduct for Organizations Developing Advanced AI Systems.
Then, on November 1st and 2nd the UK hosted an AI Safety Summit which was attended by 28 countries plus the European Union. The UK Summit resulted in the release of a joint declaration by the attendees called the Bletchley Declaration.
Both the G7 Code of Conduct and the Bletchley Declaration are short on specifics, and seem rather idealistic, even naïve, in many respects.
The G7 Code of Conduct is the more detailed of the two documents. It is voluntary, with the G7 leaders encouraging organizations developing AI to sign on to it.
Perhaps the most important issue relevant to AI is not addressed at all in the Code: the effect that AI will have on the world economy, particularly employment. If Elon Musk and others are correct in predicting that AI will eventually eliminate the need for human workers, the impact on humankind could be the most profound in history. Yet the G7 leaders have not even seen fit to identify this as an issue. This is perplexing and highly concerning.
There is also a lack of any detail about mechanisms for preventing the development of nefarious AI, even by those who sign on to the Code. Non-voluntary mandates, such as Biden’s recent executive order and the EU’s pending AI legislation, will face monumental issues of enforcement, as developers of AI are unlikely to be deterred by ethical concerns or legal consequences if there is big money to be made. The Code states that the G7 will commit to developing proposals for “monitoring tools and mechanisms to help organizations stay accountable,” but it remains to be seen whether they will be able to come up with any enforcement regimes that would be effective, given that technology development is largely done under a veil of secrecy.
The Code also expresses concern about AI undermining democratic values and human rights. But these considerations are not likely to be high on the list of authoritarian governments outside of the G7 that are developing AI.
The issue of protection of intellectual property rights is given short shrift in the Code. AI inherently rests on the use of large amounts of data, which necessarily includes IP-protected materials. Thus, AI programs are already potentially infringing on the rights of an immeasurable number of copyright holders. This issue is mentioned in but one vague sentence in the Code, which encourages organizations to implement safeguards to protect intellectual property, including copyright-protected content. The reality is that a significant overhaul of IP laws will be needed to deal with AI’s use of protected materials and to give creators of IP just compensation.
The Bletchley Declaration issued after the UK summit is even less specific than the G7 Code, which is understandable given that there were 29 participants rather than only 7. It is worthy of note that China signed on to the declaration, which mentions protection of human rights, although (as noted above) concern over democratic principles is a glaring omission.
The Bletchley Declaration mentions cybersecurity as a concern but does not use the phrase “offensive cyber capabilities” as the G7 Code does. Does this give a hint of what countries like China may have in mind in developing their AI? There is also reason to be concerned that China will take advantage of any aspects of either voluntary codes or legislation that might slow down AI development in the West. It is doubtful that concerns about the dangers of AI will dissuade China from seeking dominance in the economic and military application of the technology.
It is also noteworthy that Russia was not a participant in the UK summit. I don’t know for sure, but it could be that it wasn’t invited. At any rate, Russia is a country that, perhaps even more so than China, would not be averse to using AI in ways that are antithetical to Western values.