For decades the world has been frightened by "Skynet" from the "Terminator" movies. Now we have arrived at a far more dangerous scenario: not a rise of the machines, but a world in which AI does not press the button itself, yet deprives a person of the time needed not to press it, writes Pravda.Ru columnist Oleg Volodin.
The paradox is that salvation from this "Skynet" lies in the same place where the movies looked for it: deep underground, in a protected bunker. Only not an American one, but a Russian one. But first things first. The media and social networks are spreading news of the latest expansion of the Pentagon's cooperation with xAI.
Artificial intelligence has long been embedded in intelligence, logistics and planning. All countries, especially against the background of the special operation in Ukraine, are actively introducing AI into drone control loops, as well as into reconnaissance and targeting. The largest analytical centers run strategies and millions of simulated engagements through neural networks to advise real military experts. Now AI is approaching the most dangerous frontier of all, nuclear command and control. And this is no longer journalistic hype.
Representatives of US Strategic Command and the US Air Force openly talk about the need to "include AI in the loop," strictly as an adviser, without the authority to fire. And here technological superiority ceases to be a blessing. The speed, intelligence and adaptability that give an advantage in conventional war become a factor of incredible, truly existential risk in nuclear logic.
The main threat of AI in the nuclear field is not an autonomous launch. No one will actually allow that. The real trouble is a catastrophe of speed, which analysts increasingly call "Flash War." AI does not replace a person; it erases the person as the subject of the decision.
The machine delivers its analysis in seconds. A person needs 10-15 minutes to grasp the scale, weigh the context, admit the possibility of a mistake, and try to reach the adversary over the hotline. Those minutes used to exist. With hypersonic weapons and AI, they are gone.
Then there is the "rubber stamp" effect. Imagine a trusted AI, configured by high-ranking officers in uniform, displaying: "probability of attack is 99%, it must be preempted with your strike. Cancel or fire?" Agreeing looks rational. Disagreeing in the spirit of "we should verify first" requires almost suicidal courage.
How this looks in practice is best shown by Artificial Escalation, a short simulation film made by the Future of Life Institute. The video begins in an all-too-familiar way. The creator of an advanced AI system assures the military that his "best and fastest" development is merely an assistant. A human is always in charge. No "Skynet," no autonomous launch. The system is introduced into the decision-making loop as a crisis-analysis tool. The screens show a tidy interface, probabilities, timers, prompts.
Then a situation arises that "looks like an attack by China." Or maybe it isn't. The United States raises its readiness. China sees this and raises its own. The AI takes that as confirmation that "an attack is imminent" and recommends raising readiness again.
China steps up reconnaissance and scrambles aircraft. The US responds with cyberattacks and air defense. Both sides repeatedly click "yes" on AI advice issued in seconds: "escalate," "prepare," "counteract." The result is the message: "attack expected in the next few minutes; strike first? Yes or no?"
This is not an order. It is a recommendation. With a deadline. What would you do? They do the same. Missile silos open. Each side fears that "they will manage to destroy us," which means "we must get ahead of them."
Then comes the line: "Mr. President, you must urgently evacuate to a secure bunker." Soothing music plays while the Earth is blanketed in nuclear explosions.
Everything is scrupulously honest. In this scenario the AI does not make the launch decision. It merely compresses time to the point where the decision becomes a formality. The officer receives not a hunch but an impeccable timeline, pure logic, and a hard timer. The nightmare is a "perfect lie": a mistake that does not look like a mistake and therefore cannot be stopped.
In effect, global stability today rests on three fundamentally different models of slowing down catastrophe:
- the "Doomsday planes" in the USA
- the "Perimeter" system in Russia
- AI-driven combat command systems in China
The American "Doomsday planes" are feared as "heralds of a nuclear apocalypse," but they are, rather, brakes on it. One of their tasks is to maintain control and confirm the physical reality of a disaster, to "wait and see a little." Unless, of course, someone one day decides to hand the analytics of these flying headquarters entirely over to AI, and such ideas are already being voiced.
The Russian "Perimeter" system looks like the world's main savior. It does not analyze intentions, does not predict the future, and does not react to early-warning signals. It waits for the physical fact of the end of the world: seismic shocks from explosions, elevated radiation, the loss of communications and radio traffic, and so on.
Kilometers of rock and complete isolation make the system immune to nuclear attack, hacking, and AI hallucinations. It is precisely this simplicity that gives Moscow the strength, and the right, not to rush.
The PRC declares the principle of "no first use" of nuclear weapons, but it has neither an analogue of "Perimeter" nor full-fledged airborne command centers. As a result, China uses AI to compensate for this vulnerability. The "personalities" of military geniuses of the past are loaded into AI, network-centric command loops are built, and combat control is entrusted to algorithms. And that is what makes China the riskiest element of the Flash War era.
In the age of the AI race, the guarantor of life on Earth suddenly turns out to be a Russian bunker inside a mountain, a binary relay, and two officers in front of it, as all the rules of nuclear command require.
The Russians are saving the world again. Just like in the movies. Only not Hollywood ones.