China has developed the world's first AI commander, a system some have already likened to 'Skynet'. This military artificial intelligence is capable of deciding to launch a global nuclear attack. Engineers demonstrated its abilities by placing it in command of simulations of the People's Liberation Army (PLA) in large-scale computer war games involving all service branches. For now, the Chinese government has decided to keep the AI isolated from real command and combat systems.

This technological achievement is confined to a laboratory at the Joint Operations College of the National Defense University in Shijiazhuang, Hebei province, according to the Hong Kong-based South China Morning Post. Under Chinese military doctrine, only human commanders authorized by the Central Military Commission of the Communist Party of China may issue military orders, in keeping with the principle that 'the Party controls the gun.'

Scientists described the system in a peer-reviewed article published last month in the journal Command Control & Simulation. The study argues that AI commanders are necessary if the military is to control assets such as drone swarms, missile launches, or autonomous armored units effectively. According to the project's lead engineer, Jia Chenxing, "the current joint operations simulation system shows poor results in simulation experiments due to the lack of command entities at the joint battle level." The system's primary purpose is to help test operational plans for potential military conflicts, particularly in sensitive regions such as Taiwan and the South China Sea.

The AI commander is designed to learn from experienced human strategists and can be adjusted to reflect different command styles. Its decision-making draws on a stored base of strategic knowledge, much as a chess player draws on memory, and it even simulates human traits such as forgetting, which bounds the size of its knowledge base. According to the study, when the system's memory reaches its limit, it discards unnecessary knowledge units, mimicking a human commander. "The personality of the virtual commander can be adjusted if deemed necessary," says Jia.
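As a rough illustration of how such a capacity-bounded memory with 'forgetting' might work, here is a minimal sketch. The class names, utility scores, and eviction heuristic are assumptions made for the example, not details taken from the study.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class KnowledgeUnit:
    # Units are compared by utility; the least useful are "forgotten" first.
    utility: float
    name: str = field(compare=False)

class CommanderMemory:
    """Hypothetical capacity-bounded knowledge base with forgetting."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._units: list[KnowledgeUnit] = []  # min-heap keyed on utility

    def learn(self, name: str, utility: float) -> None:
        heapq.heappush(self._units, KnowledgeUnit(utility, name))
        # Once the memory limit is reached, discard the least useful
        # knowledge unit, mimicking a human commander forgetting.
        while len(self._units) > self.capacity:
            forgotten = heapq.heappop(self._units)
            print(f"forgetting: {forgotten.name}")

    def recall(self) -> list[str]:
        # Return remembered strategies, most useful first.
        return [u.name for u in sorted(self._units, reverse=True)]

memory = CommanderMemory(capacity=2)
memory.learn("flanking maneuver", utility=0.9)
memory.learn("frontal assault", utility=0.3)
memory.learn("encirclement", utility=0.8)  # evicts "frontal assault"
print(memory.recall())
```

Any real implementation would presumably score knowledge units on far richer signals than a single number, but the principle is the same: a fixed budget forces the system to prune what it judges least valuable.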

In experiments, the AI conducts simulations autonomously, identifying new threats, developing plans, and making optimal decisions without human intervention. Because it can repeat this process without limit, the system accumulates experience no human military officer could match, giving real commanders valuable insight into a wide range of combat scenarios.
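In outline, that repeat-and-learn loop might resemble the hypothetical sketch below; the simulator stub and scoring are invented stand-ins, since the paper does not publish the system's internals.

```python
import random

def run_wargame(plan: str) -> float:
    """Stand-in for the joint-operations simulator: scores a plan from 0 to 1."""
    return random.random()  # a real simulator would evaluate the plan in detail

def simulate_campaigns(plans: list[str], rounds: int) -> dict[str, float]:
    """Run war games repeatedly and keep the best observed score per plan."""
    experience: dict[str, float] = {p: 0.0 for p in plans}
    for _ in range(rounds):
        for plan in plans:
            outcome = run_wargame(plan)
            # Accumulate "experience": remember the strongest result seen so far.
            experience[plan] = max(experience[plan], outcome)
    return experience

print(simulate_campaigns(["amphibious landing", "air blockade"], rounds=1000))
```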

Structure and Command Capability

For now, the Chinese 'Skynet' can only take part in simulations and holds no effective command over real units. China does not currently allow an AI to lead its armed forces directly, although it does let vanguard units such as drone swarms make autonomous decisions, including target selection. Ultimate command authority, however, rests with a human leader. Jia argues that while there should be "a higher-level commander as the sole central decision-making entity for the overall operation, with main responsibilities and decision-making authority," AI commanders are needed to coordinate forces.
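That division of authority can be pictured with a hypothetical human-in-the-loop sketch: subordinate units propose actions autonomously, but nothing executes without approval from a single human authority. All names here are illustrative, not taken from the article.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    unit: str
    action: str

class HumanCommander:
    """The sole central decision-making authority: every action needs approval."""

    def approve(self, proposal: ProposedAction) -> bool:
        answer = input(f"Approve {proposal.unit}: {proposal.action}? [y/N] ")
        return answer.strip().lower() == "y"

def command_loop(proposals: list[ProposedAction], commander: HumanCommander) -> None:
    for proposal in proposals:
        # Vanguard units may propose actions autonomously, but execution is
        # gated on the human commander, who retains ultimate authority.
        if commander.approve(proposal):
            print(f"EXECUTE {proposal.unit}: {proposal.action}")
        else:
            print(f"HOLD    {proposal.unit}: {proposal.action}")

command_loop(
    [ProposedAction("drone swarm A", "track designated target")],
    HumanCommander(),
)
```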

In the United States, AI serves as a decision-support tool and does not play the role of commander. The U.S. Army employs AI as a "virtual commander staff" that assists with decisions, while the AI used by the U.S. Air Force takes part in frontline training but holds no command responsibilities in real operations.

The role of these systems may change in the future. Both countries, along with others worldwide, are already experimenting with autonomous weapon systems that can carry out attacks without human authorization if communications fail, a realistic possibility in electronic warfare. The war in Ukraine is serving as a major testing ground for such autonomous systems.

Talks and agreements aimed at strict regulation to prevent the worst extremes have made little progress, particularly on the key goal of prohibiting AI systems from exercising direct, autonomous control over weapons capable of causing loss of human life, and over weapons of mass destruction.