

Leave the launch of nuclear missiles after an enemy attack in the hands of an AI: the proposal of two US experts


The existence of hypersonic missiles, stealth cruise missiles and weapons equipped with artificial intelligence has further reduced the time that the authorities of a nuclear power would have to give the green light to a counterattack before their own control and communications systems were disabled by the enemy.

Given this circumstance, US nuclear deterrence experts Adam Lowther and Curtis McGiffin have proposed, in an article published in a specialized journal, that in their country the power to give the counterattack order should not rest with any human being, but with an artificial intelligence: an AI controlling the launch button of the 3,800 nuclear warheads in the US arsenal.

What was (or is) the ‘Dead Hand’?

Actually, the proposal is not all that novel, since the United States' great Cold War enemy, the Soviet Union, had a similar system (semi-automated, and lacking AI in the modern sense of the term): the Perimeter system, popularly known in the West as the ‘Dead Hand’.

Its operation was simple: the system was in constant contact with various military installations on Soviet territory (including the Kremlin itself), and if at any time all communication channels failed (presumably because of a massive enemy nuclear attack), Perimeter would enable the manual launch of the Soviet arsenal, allowing even low-ranking officers without access to the launch codes to trigger the counterattack.
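To make that logic concrete, here is a minimal sketch in Python of a dead-man's-switch of the kind described. All names, thresholds and installations are invented for illustration, since Perimeter's real internals have never been made public; note that even this automated version only unlocks manual launch, it does not fire by itself.

```python
# Minimal, hypothetical sketch of a dead-man's-switch: if every command-post
# link stays silent for too long, manual launch authority is unlocked.
import time

COMMAND_POSTS = ["kremlin", "general_staff", "strategic_rocket_forces"]  # invented
CHECK_INTERVAL_S = 60          # how often the links are polled
SILENCE_THRESHOLD_S = 15 * 60  # how long total silence must last

def link_alive(post: str) -> bool:
    """Placeholder: poll a communication channel with a command post."""
    raise NotImplementedError

def enable_manual_launch() -> None:
    """Placeholder: hand launch authority to surviving duty officers."""
    ...

def monitor() -> None:
    silent_since = None
    while True:
        if any(link_alive(post) for post in COMMAND_POSTS):
            silent_since = None                      # someone answered: stand down
        elif silent_since is None:
            silent_since = time.monotonic()          # silence just began
        elif time.monotonic() - silent_since > SILENCE_THRESHOLD_S:
            # Total silence for too long: do NOT fire automatically,
            # merely unlock the manual launch procedure.
            enable_manual_launch()
            return
        time.sleep(CHECK_INTERVAL_S)
```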

Officially, the Soviet ‘Dead Hand’ was dismantled after the fall of the USSR, although experts such as Peter Smith (author of ‘Doomsday Men’) claim that Russia is still protected today by a successor to Perimeter.

In any case, Lowther and McGiffin now propose equipping the United States with an equivalent mechanism, although this time AI would allow it to be fully automated:

“It may be necessary to develop a system based on artificial intelligence, with predetermined responses, that is capable of [reacting] at such speed that the compression of attack times does not place the United States in an unsustainable position.”

How do you prevent an algorithmic error from leading to a nuclear holocaust?

The authors propose other alternatives to this idea, which involve placing nuclear launch systems beyond US borders to guarantee that the country retains its retaliatory capacity. However, they make their commitment to the use of artificial intelligence very clear. And that raises questions about the algorithms and security protocols such a Skynet look-alike could rely on to avoid unleashing an unwanted nuclear war.

Today, machine learning algorithms are trained on large data sets that allow the AI to learn to detect what it is looking for. For example, autonomous driving systems are trained by analyzing millions of examples of human driving… and even so, the amount of data proves insufficient given the complexity of the task.
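As a rough illustration of that training recipe, the sketch below fits a standard classifier to a synthetic, labeled data set (scikit-learn is used here purely for brevity; the data and features are invented). The point it makes is simple: the model can only learn patterns that actually appear in its training examples.

```python
# Minimal illustration of supervised learning on a labeled data set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 10,000 labeled examples with 20 sensor-like features each (all synthetic).
X = rng.normal(size=(10_000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # the hidden pattern to be learned

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print(f"accuracy on data like the training set: {model.score(X_test, y_test):.2f}")
# The model is only as good as the examples it has seen; events that never
# appear in the data set are, by construction, outside its experience.
```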

So how do you train an algorithm that must be able to detect that your country has suffered a massive nuclear attack… when in all of history there has never been a single case of such an attack? Michael Horowitz, a professor at the University of Pennsylvania and a contributor to the Bulletin of the Atomic Scientists (the scientific organization responsible for maintaining the Doomsday Clock), states that, given the current state of AI development, such a system would entail “some risks”.

One could always require, as a safety mechanism, that the system obtain confirmation from a human before accepting the detection of such an attack as genuine. After all, Lieutenant Colonel Stanislav Petrov already averted a nuclear holocaust in 1983, when the Soviet missile detection system erroneously reported five missiles heading towards Soviet territory, a report that he (rightly, though against official security protocol) dismissed as a mere ‘false alarm’.
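A human-confirmation safeguard of this kind is easy to express in outline. The sketch below is purely hypothetical (the `Detection` structure and every function name are invented): the automated detector can only recommend, and a duty officer, the Petrov in the loop, keeps the veto.

```python
# Hypothetical human-in-the-loop safeguard: no automated detection is acted
# upon until a human operator explicitly confirms it.
from dataclasses import dataclass

@dataclass
class Detection:
    source: str        # e.g. "early-warning satellite"
    missiles: int      # number of inbound objects reported
    confidence: float  # detector's own confidence estimate, 0..1

def human_confirms(detection: Detection) -> bool:
    """Ask the duty officer to corroborate the detection with other evidence."""
    answer = input(
        f"{detection.source} reports {detection.missiles} inbound missiles "
        f"(confidence {detection.confidence:.0%}). Confirm? [y/N] "
    )
    return answer.strip().lower() == "y"

def escalate(detection: Detection) -> None: ...        # hypothetical next step
def log_false_alarm(detection: Detection) -> None: ... # hypothetical logging

def handle(detection: Detection) -> None:
    # The automated system only recommends; a human retains the veto,
    # exactly the role Petrov played in 1983.
    if human_confirms(detection):
        escalate(detection)
    else:
        log_false_alarm(detection)
```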

However, in the age of artificial intelligence, humans tend to be less and less skeptical of its claims. Horowitz calls this ‘automation bias’ and cites a study in which pilots said they would not trust an automated system that warned of an engine fire unless there was corroborating evidence… but, once immersed in the simulations, that was exactly what they decided to do.

“There is a risk that automation bias will prevent the Petrov of the future from using his own judgment. Or, simply, that in the future there is no Petrov at all.”