A growing number of institutions and organisations are calling for AI regulation, all agreeing on the potential risks posed by malicious use of the technology. Some have suggested that it should be monitored by a dedicated governing agency, as its potential can in some ways be considered comparable to that of nuclear energy.
Of course, such an association may seem rather bold, but it cannot be denied that, just as in the case of nuclear power, while AI is on the one hand an instrument which can offer tangible benefits, on the other hand it is perfectly capable of generating unpredictable consequences if used inappropriately.
How would a possible AI governing agency be structured?
This concept was first put forward by the leadership of OpenAI, the company that created ChatGPT. As an organisation directly involved in the issue, OpenAI prepared a report entitled ‘Governance of superintelligence’, presenting some ideas as to how AI might be regulated in order to reduce the risks associated with its use to an absolute minimum.
The most important point relates to the creation of a specific organisation which would be responsible for monitoring every aspect of artificial intelligence. Based on the IAEA (International Atomic Energy Agency) model, they suggest a similar institution which would act as a guarantor for the industry’s compliance with the regulations and would at the same time provide a point of reference for governments around the world.
Many nations have now realised how quickly and widely this technology has expanded, touching just about every commercial, social and economic sector. If this massive, almost uncontrolled expansion continues to be managed carelessly, it could start to manifest flaws and weaknesses that would make it vulnerable to endless exploitation by malicious individuals.
Could AI regulation resolve the problems of the AI Act?
In order to protect businesses that decide to use artificial intelligence, Europe has started working on what is known as the AI Act. This collection of laws prepared on an ad hoc basis to regulate AI is currently still facing a series of issues which could render it ineffective even before it is actually published.
The various issues that this plan must try to solve include:
- the rapid evolution of neural networks (their potential multiplies exponentially in just a few months, making any law governing them obsolete from the start);
- the risk linked to territoriality (it is unlikely that nations as different from each other as China, the USA, the EU and India could agree on a common legislative formula for AI; this could lead to the emergence of ‘AI havens’, where laws are less strict and some states would be favoured over others);
- checks on patented algorithms (no Big Tech firm would consent to having its products inspected by representatives of rival governments, running the risk that its trade secrets might be compromised).
In this complicated context, the proposal put forward by OpenAI to create a super partes agency could meet with approval. Such an institution would have to guarantee its absolute impartiality to governments around the world, as well as find the right balance to regulate artificial intelligence and its applications according to technological developments.
If the agency were created as an organisation directly subordinate to the United Nations, it would benefit from the principles of democracy and multilateralism which that organisation embodies. This would offer an additional guarantee of trustworthiness, thereby encouraging more states to sign up to the project.
Translated by Joanne Beckwith