In the wake of the technological revolution brought about by the shift towards the knowledge economy, guiding the applications of artificial intelligence has become a matter of the utmost importance. Human sovereignty over this world has rested on our mental superiority over all other beings; to say that artificial intelligence now outperforms the human species in a growing range of areas is therefore to call for ways and methods that ensure humanity maintains control, or at least preserves human values, teaches those values to robots, and ensures that machines behave ethically. Otherwise, dealing with machines will raise a generation that sees no difference between itself and the robots reared alongside it.
Hordes of robots now, incredibly, carry out our instructions, but how do we ensure that these creations always work in our best interests? Should we teach them to think for themselves? If so, how can we teach them right from wrong?
Given the position of Elon Musk, the tech entrepreneur who asserts that artificial intelligence is humanity's greatest existential threat, it may be said that the freer AI applications become, the more ethical standards these machines require.
The most important task for governments that have moved towards the knowledge economy, and for institutions pursuing and using AI applications, is not merely to educate artificial intelligence but, above all, to train it in the art of ethics: to teach it to think, reason and act. This does not mean giving machines a fixed set of rules telling them what to do, or what not to do, in every possible situation; it means training them to apply their knowledge in situations they have never faced before.
Experts agree that human governance and oversight are extremely important: left to its own judgement, a machine might conclude, for example, that the way to eradicate poverty is for all the poor to die. Such judgements raise questions of harm and fairness, and it seems the world needs a machine capable of learning on its own, but within human-defined limits.
There is no dispute that a list of basic values can be taught to robots, just as a child in its early years learns by imitating and by absorbing the knowledge and language around it. Like a human, a robot can acquire an intuitive sense of what is morally acceptable by watching how others behave; the danger, however, lies in presenting the wrong models, from which artificial intelligence learns bad behaviour.
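The imitation described above is, loosely, what machine-learning researchers call behavioural cloning: an agent learns to act by copying demonstrated behaviour. A minimal illustrative sketch, with entirely invented situations and actions, shows both the mechanism and the danger the article notes, that a bad example in the demonstrations is copied like any other:

```python
from collections import Counter, defaultdict

# Hypothetical demonstrations: (situation, action) pairs observed from
# human "models". The learner has no notion of right and wrong; it only
# counts what it sees, so wrong models teach wrong behaviour.
demonstrations = [
    ("stranger_drops_wallet", "return_wallet"),
    ("stranger_drops_wallet", "return_wallet"),
    ("stranger_drops_wallet", "keep_wallet"),   # one bad example
    ("queue_is_long", "wait_in_line"),
]

def learn_by_imitation(demos):
    """For each situation, learn the most frequently demonstrated action."""
    counts = defaultdict(Counter)
    for situation, action in demos:
        counts[situation][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

policy = learn_by_imitation(demonstrations)
print(policy["stranger_drops_wallet"])  # -> return_wallet
```

Here the good examples happen to outnumber the bad one, so the learned policy is acceptable; had the demonstrations been mostly bad, the same code would faithfully reproduce bad behaviour.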
In complex environments, however, teaching values to AI applications becomes difficult, and the systems become increasingly prone to behave badly.
The ethics of artificial intelligence is the focus of many institutions that have emerged around the world in recent years to understand the moral dimension of robots. Those working in this field, such as the Future of Life Institute, the Responsible Robotics Group, and the Global Initiative on Ethics of Autonomous and Intelligent Systems, are all competing to find the best way to teach morality to machines.
Researchers in artificial intelligence have introduced the idea of a "moral transformer", which seeks to get robots to emulate human feelings rather than merely simulate human behaviour, in order to help them learn from their mistakes. The system allows each machine to experience something similar to human guilt; guilt is a mechanism that discourages us from repeating a particular behaviour, and it is therefore a useful educational tool not only for humans but also for robots.
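One way to read this "guilt" mechanism, as an assumption rather than a description of any published system, is as a penalty signal in simple value learning: an action that causes harm is followed by a negative signal, which lowers the agent's tendency to repeat it. A minimal sketch under that assumption, with invented actions and numbers:

```python
# Hypothetical action values: the agent's learned preference for each action.
values = {"harmful_shortcut": 0.0, "safe_route": 0.0}

def guilt_signal(action):
    """Invented 'guilt' signal: negative after harmful actions,
    mildly positive otherwise."""
    return -1.0 if action == "harmful_shortcut" else 0.5

learning_rate = 0.5
for _ in range(10):
    for action in values:
        # Nudge each action's value toward the signal it produces;
        # repeated "guilt" steadily suppresses the harmful action.
        values[action] += learning_rate * (guilt_signal(action) - values[action])

# After training, the harmful action is valued well below the safe one.
print(values["harmful_shortcut"] < 0 < values["safe_route"])  # -> True
```

The design point is that the discouragement is learned from experience rather than hard-coded, which mirrors the article's claim that guilt teaches by repetition rather than by rule.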
Progress in genetics, nanotechnology and neuroscience has never been isolated from the moral dimension, let alone the social one. Yet when the resulting technologies converge and are managed by organizations and institutions, the danger is that we discover that technological progress has outstripped our moral readiness, contrary to human instinct. Does a person write poetry without prior preparation? Just as the talent to write poetry must be developed beforehand, moral and social readiness must precede technological development, or, at the very least, accompany it through the work of philosophers, educators and academies, and of anyone well versed in these questions. Moreover, governments should not separate the people who build artificial intelligence from these matters, but should bring everyone together in a circle with no sides, so that responsibility is collective rather than resting on any one party alone.
We may not be able to manufacture, program and teach a robot to be a perfect model, but the real danger is to stand idly by and do nothing. Alternatively, let us do our best to mitigate the outcome: perfection is not the standard for a self-driving car, because accidents will still occur, but it will cause fewer accidents than human drivers do. Let our goal, then, be to minimize the harm that humans cause, so that perhaps artificial intelligence can be better than humans. And the conversation continues.
Author: Manahel Thabet
Published: March 5, 2018
Al Bayan Newspaper