
Giving machines the will to survive will bring them closer to ‘strong AI’ (and force us to adopt laws of robotics)

The neuroscientists Kingson Man and Antonio Damásio explain, in an article recently published in Nature Machine Intelligence, something of a truism: artificial intelligences lack feelings and, at best, can only aspire to simulate them artificially, because "they are not designed to represent the internal state of their processes in a way that would permit them to experience that state in a mental space." But why should we give AIs feelings at all?

Man and Damásio are convinced there is a way to give robots feelings, at least indirectly: they propose 'implanting' a single drive, that of self-preservation. Their theory is that, from that point on, an artificial intelligence would go on to develop those 'feelings' on its own, 'feelings' here being a name for the behaviors needed to ensure its survival.

The idea is to simulate a biological property, homeostasis: the ability of organisms to keep themselves within the narrow range of conditions compatible with staying alive (for example, a certain range of body temperatures).

If we could teach machines which factors play this role in their own survival (connected cables, an adequate supply of electric current, and so on), we could endow them with self-preserving behavior: that is, with a sense of vulnerability that makes them feel 'fear' when those factors threaten their existence and 'comfort' when they return to normal.
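
To make the idea concrete, here is a minimal sketch, not taken from the paper, of what such a self-preservation signal might look like in code. The monitored variables (supply voltage, core temperature, battery charge) and their acceptable ranges are purely illustrative assumptions:

```python
# A minimal sketch (not from the Man-Damásio paper) of a homeostatic monitor.
# Variable names and acceptable ranges are illustrative assumptions.

ACCEPTABLE_RANGES = {
    "supply_voltage_v": (11.5, 12.5),    # hypothetical safe operating band
    "core_temperature_c": (20.0, 70.0),
    "battery_charge_pct": (15.0, 100.0),
}

def distress(readings: dict[str, float]) -> float:
    """Return 0.0 when every reading sits inside its acceptable band
    ('comfort'), and a growing positive value the further any reading
    strays outside it ('fear')."""
    total = 0.0
    for name, (low, high) in ACCEPTABLE_RANGES.items():
        value = readings[name]
        if value < low:
            total += (low - value) / (high - low)
        elif value > high:
            total += (value - high) / (high - low)
    return total

# Inside the band -> 0.0 ('comfort'); outside it -> positive ('fear').
print(distress({"supply_voltage_v": 12.0, "core_temperature_c": 45.0, "battery_charge_pct": 80.0}))
print(distress({"supply_voltage_v": 10.0, "core_temperature_c": 85.0, "battery_charge_pct": 8.0}))
```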

The path to intelligence through feeling

But even if we can all see the appeal of achieving this... why would such 'feelings' amount to a cognitive upgrade capable of bringing artificial intelligence closer to human-style general intelligence? Man and Damásio start from the premise that our own high-level cognition is the result of the human species adapting to solve the biological problem of homeostasis more efficiently.

The researchers are convinced that this is the missing ingredient for achieving an AI equivalent to a human one, in the sense that it would not be designed solely for highly specialized tasks but could be deployed in all kinds of situations, even those not anticipated by its programming.

But understanding one's own internal state, a certain form of self-awareness, is necessary for an AI to perceive threats to its existence, and that can only be 'taught' by resorting to deep learning and artificial neural networks, which allow it to detect and classify patterns in its input data.
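
As an illustration of what 'detecting and classifying patterns in input data' means in practice, the toy sketch below (my own assumption, not code from the paper) trains a single artificial neuron, the simplest building block of a neural network, to tell 'safe' internal readings from 'threatening' ones. The two features and the synthetic data are invented for the example:

```python
# Toy illustration: learning to classify internal-state readings as 'safe' or
# 'threatening' with a single artificial neuron trained by gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: [voltage, temperature]; label 1 = threatening.
safe = rng.normal(loc=[12.0, 45.0], scale=[0.2, 5.0], size=(200, 2))
threat = rng.normal(loc=[10.5, 85.0], scale=[0.5, 5.0], size=(200, 2))
X = np.vstack([safe, threat])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Standardize the features so a single learning rate works for both.
X = (X - X.mean(axis=0)) / X.std(axis=0)

w = np.zeros(2)
b = 0.0
for _ in range(2000):                        # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of 'threat'
    w -= 0.5 * (X.T @ (p - y) / len(y))
    b -= 0.5 * np.mean(p - y)

accuracy = np.mean((p > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")  # close to 1.0 on this toy data
```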

Thanks to these technologies, an AI could infer cause-and-effect relationships between its internal state and external conditions, just as today's models already do between a particular movement of the lips and the sound emitted when speaking. Knowledge of these relationships would be the basis of its 'feelings', leading it to behave creatively, without relying on pre-programmed rules for every eventuality, seeking only its own homeostasis... or that of those around it.
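
One way to picture that last step, under the assumption (mine, not the paper's) that the cause-and-effect model has already been learned, is an agent that simply chooses whichever action it predicts will bring its internal state back toward its homeostatic setpoint:

```python
# Self-contained sketch (my illustration, not the authors' algorithm) of an
# agent that picks the action it predicts will best restore homeostasis.

SETPOINT = 45.0                                        # hypothetical ideal core temperature (°C)
EFFECTS = {"cool": -3.0, "idle": 0.0, "heat": +3.0}    # assumed learned cause-effect model

def predicted_distress(temperature: float, action: str) -> float:
    """Distance from the setpoint after applying the action's predicted effect."""
    return abs(temperature + EFFECTS[action] - SETPOINT)

temperature = 60.0                                     # start outside the comfortable range
for step in range(6):
    # Greedy choice: the action whose predicted outcome is least 'distressing'.
    action = min(EFFECTS, key=lambda a: predicted_distress(temperature, a))
    temperature += EFFECTS[action] + 1.0               # +1.0 models constant external heating
    print(f"step {step}: chose {action!r}, temperature now {temperature:.1f}")
```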

Legislating empathy

Here we enter territory reserved for Asimov's famous Laws of Robotics, or for whatever rules we choose to adopt in the future for such cases, since we will obviously want to protect ourselves from certain unwanted consequences of an AI's will to self-preservation... especially in AIs with a physical presence, such as robots.

Remember that, in Asimov's fiction and in that of so many authors after him, a robot's self-protection was conditional on protecting human homeostasis and on obeying human orders.

Man and Damásio are convinced that we would not have to face any Skynet if we succeed in letting machines, "in addition to having access to their own feelings, know the feelings of others; that is, if they are endowed with empathy." Thanks to that, the 'Man-Damásio Laws' for AIs (they do not call them that, of course) would be reduced to two very brief and specific commands: 1) feel good, and 2) feel empathy.

Among human beings, psychopaths are defined by their utter inability to 'put themselves in another's place', that is, by their lack of something as basic to the rest of us as empathy. And yet we can find thousands of everyday examples of people who, despite feeling empathy, are capable of hurting their fellows, voluntarily or involuntarily... which makes this 'legal' proposal seem enormously optimistic.

Even so, the proposal to investigate further the relationship between the drive for self-preservation and the development of general AI seems more promising, in light of their research.