Artificial Intelligence (AI) is one of the branches of computer science that has grown and developed the most in recent years.
Yet its origins go back more than 70 years, to Alan Turing and his invention in 1936, shortly before the Second World War, of the so-called Turing machine: a computing model consisting of a head that could read, write, and move along the tape it operated on.
Returning to the present day, AI now touches more and more areas of our lives, and it generates enormous interest because of the revolution it has brought about, especially in software development.
When someone mentions AI, our minds may jump to highly humanized robots with practical, accommodating functions. The truth is that we are not very far from that.
Indeed, AI is now at a stage where researchers seek to endow non-living agents with a form of intelligence as close to human intelligence as possible. On the one hand, this may sound extremely exciting; on the other, it raises countless ethical concerns.
Why ethics? Because through ethics we can, and must, question the potential, the barriers, the limits, the uses, the consequences, and the reach (among many other things) of Artificial Intelligence's influence on our lives.
The ethical debate around AI is open and taking place at this very moment. Broadly speaking, it centers on the growing need to regulate, in the short term, the advances in this field.
Participants in this discussion include thinkers (philosophers, sociologists, psychologists, and anthropologists), scholars, researchers, designers, engineers, and, last but not least, users.
The most involved actors in this debate have been the European Parliament and a group of AI experts who gathered in California to set out a series of principles and ethical values for robotics and AI.
The European Commission took a step forward by publishing, on December 18 of last year, a draft set of ethical principles entitled “Draft Ethics Guidelines for Trustworthy AI,” prepared with the participation of the Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG).
In this document, the experts state that trustworthy AI has two components:
01) It must respect fundamental rights, applicable law, and core values and principles, thereby ensuring an ethical purpose.
02) It must be technically robust and reliable, since even with good intentions, limited technical mastery can cause unintentional harm.
The document also explores topics such as fundamental rights; the principles and values AI must uphold to ensure an ethical purpose; the guidelines and requirements for trustworthy AI; a review of technical and non-technical methods for implementing them; and an assessment list for applying these requirements to concrete cases.
In light of this scenario, we can conclude that the ethical debate on AI must be treated as an urgent and highly important matter: AI will reshape our social fabric, and, as we can already confirm, we are likely to see AI-related developments in every aspect of our lives (health, education, logistics and transport, industry, and a long list that could go on).