The six risks of AI according to experts
The risks posed by AI have been the subject of much debate and even more so now that systems using that technology are becoming increasingly widespread. Considering that almost every industrial and non-industrial sector uses some form of artificial intelligence to manage its processes, improve productivity or calculate forecasts, the associated risks seem to be growing.
The digital transition, of course, involves significant use of AI and is part of an unstoppable, inevitable process made up of achievements, victories and occasional setbacks. But how realistic are the concerns of more sceptical observers?
The main risks associated with current AI
When assessing the risks of artificial intelligence, we should not picture scenarios from science fiction films, where computers go crazy and exterminate the entire human race. Instead, we should rely on logical, objective analysis to help us build a more realistic idea of the genuine dangers associated with the planning and operational phases of AI.
Technology in itself is neither good nor bad; its consequences depend entirely on how it is used. An interesting article on this theme, published a few months ago, listed six issues deriving from the use of AI.
- Its ability to produce conversations, documents, photos and videos that are practically indistinguishable from the real thing. If artificial intelligence can produce copies so realistic that they look almost identical to the originals, the risk is that users will lose trust in genuine information.
- The risk that new AI solutions will be brought to market before they are ready in order to beat the competition. Whether by companies, nations or public organisations, the implementation of AI systems before they have undergone the required testing can lead to vulnerabilities, especially when it comes to security, leaving the systems open to exploitation by malicious elements.
- The possibility that concepts such as privacy and free will could disappear completely due to the continual tracking of web users’ personal data. Search engines are a clear example, as they allow any options users do not like to be arbitrarily eliminated.
- The risk that users could become trapped in a virtual ‘Skinner box’ when companies use social media as a marketing tool. Many believe that this conditioning chamber, which has its origins in behavioural psychology, has a digital equivalent: social network users obsessed with their number of likes, followers and comments unwittingly help companies advertise themselves through ads.
- The possibility of users being subjugated by artificial intelligence tools. The obsessive use of digital tools that take over the more complex choices, offering better or faster solutions, reduces the individual’s personal freedom and capacity for decision-making.
- The risk that people will not fully enjoy the advantages of AI due to their fear of the unknown. This natural human perception can even lead to the creation of restrictive ethical codes potentially limiting the positive applications of these new technologies.
In light of the above points, it is clear that, for now, the risks of artificial intelligence are real. But with the right approach, a well-informed strategy and appropriate timescales, these doubts can be resolved, enabling people to fully reap the benefits of artificial intelligence.
Translated by Joanne Beckwith