Although artificial intelligence has been in development since the Second World War, only in recent years has it seen a boom in everyday use.
From the virtual assistants on our smartphones to support in the early detection of diseases, it now underpins countless tasks we perform daily across different industries. This has happened mainly because of the computing capacity available today, since this kind of technology requires processing large volumes of data.
Artificial intelligence poses a very important challenge for humanity: under what ethical principles are we going to program our AI algorithms when they face complicated dilemmas? For example, suppose an autonomous car encounters a person crossing the street and the only possible options are running over the pedestrian, causing their death, or swerving into a wall, causing the death of the passenger. What decision should it make? What if the pedestrian is crossing against a red light? What if the person traveling in the car is elderly and the person crossing the street is young? What if the one in the car has a family and the one crossing does not?
These and many more questions must be answered in order to properly program the AI algorithms we are building for autonomous cars, as well as for the myriad other situations in which artificial intelligence will be used. Or should we instead let the algorithm decide for itself?
Ethics in the creation of artificial intelligence has become a very important field of action in current development work. Just over a year ago, the European Union adopted seven ethical principles intended to ensure that AI benefits society as a whole, respects data privacy, and protects us against foreseeable errors and attacks. Some of them are:
Artificial intelligence systems must enable equitable societies by supporting the fundamental rights of humans, and must not diminish, limit, or misdirect their autonomy.
Artificial intelligence requires algorithms that are secure, reliable, and robust enough to handle errors or inconsistencies during all phases of the life cycle of their systems.
Artificial intelligence systems should be used to enhance positive social change, sustainability, and ecological responsibility.
Under this analysis, ethics takes on a new dimension and importance in today's world. We need not only more data scientists but also people trained in ethics who can make good use of technology with the good of society in mind. I remember having to take an ethics class during my college years in business school. At the time, I did not understand why it should be part of the curriculum for my degree. Now, working in the technology industry, I quickly came to understand its importance for my career, for business, and for life in general. Today more than ever, development engineers must be trained in ethics and grounded in social values so that the work they are doing today truly benefits humanity and the environment.