
Pentagon Shapes Future War: Advanced AI Enables Integrated Attacks in Matter of Seconds


By Kris Osborn, Warrior Maven (Security Television Network)


    September 18, 2021 (Security Television Network) — (Washington, D.C.) The Army’s “Project Convergence,” the Air Force’s “Advanced Battle Management System” and the Navy’s “Project Overmatch” are the names each service gives to an AI & autonomy-enabled network of interwoven “meshed” nodes operating within a broad multi-domain warfare environment.

The defining concept, or strategic impetus, behind each of these respective efforts is clear and fundamental to current technological modernization: each is based upon the premise that any combat platform or “node,” whether it be a fighter jet, tank, ground control station or surface ship, can operate not only as its own combat-capable “entity” but also as a critical surveillance and warfare information “node” able to gather, process, organize and transmit time-sensitive data across a large force in real time.

For example, instead of having to send images through a one-to-one video feed into a ground control center, a forward operating surveillance drone could find crucial enemy targets, analyze a host of otherwise disconnected yet relevant variables, and send new time-sensitive intelligence information to multiple locations across the force in seconds.
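To make the “meshed node” idea concrete, the toy sketch below (purely illustrative; the node and report classes are invented and not drawn from any fielded system) shows how a single published sensor report can reach many subscribers at once rather than traveling over a one-to-one feed.

```python
# Minimal, hypothetical sketch of one-to-many data dissemination: a sensor "node"
# publishes a track report once, and every subscribed node (ground station, jet,
# ship) receives it, instead of a single point-to-point video feed.
from dataclasses import dataclass, field
from typing import Callable, List, Tuple


@dataclass
class SensorReport:
    target_id: str
    location: Tuple[float, float]   # (latitude, longitude)
    confidence: float               # 0.0 - 1.0


@dataclass
class MeshNode:
    """Any platform acting as a publisher of time-sensitive data."""
    name: str
    subscribers: List[Callable[[SensorReport], None]] = field(default_factory=list)

    def subscribe(self, handler: Callable[[SensorReport], None]) -> None:
        self.subscribers.append(handler)

    def publish(self, report: SensorReport) -> None:
        # One publish call reaches every subscriber at once.
        for handler in self.subscribers:
            handler(report)


if __name__ == "__main__":
    drone = MeshNode("surveillance_drone")
    drone.subscribe(lambda r: print(f"ground station received {r.target_id}"))
    drone.subscribe(lambda r: print(f"fighter jet received {r.target_id}"))
    drone.subscribe(lambda r: print(f"destroyer received {r.target_id}"))
    drone.publish(SensorReport("T-001", (34.42, -119.70), 0.92))
```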

Pentagon’s Joint Artificial Intelligence Center (JAIC)

Each of these efforts may have its own name, yet they are fundamentally based upon a common tactical and strategic approach. Merging these respective service efforts into a coordinated, highly efficient, high-speed multi-service war machine is now a major emphasis of the Pentagon’s Joint Artificial Intelligence Center (JAIC).

“So I think, you know, we are key partners, you know, with both, you know, the ABMS series of exercises, with Project Convergence and we’re also working closely with the Navy on their Project Overmatch,” Lt. Gen. Michael Groen, Director, Joint Artificial Intelligence Center, told reporters according to a Pentagon transcript.

This is already happening, and new efforts are rapidly gaining traction as all the services move toward massively expediting “sensor-to-shooter” time in the context of a multi-service attack “web.” The faster enemy targets can be seen and assessed in relation to surrounding terrain, incoming enemy fire, navigational specifics and the optimal mode of attack for the circumstance, the faster an attacking force can prevail in combat.

This is as self-evident as it is crucial. Getting inside of, or ahead of, an enemy’s decision-making cycle is, simply put, the essential factor determining victory in modern warfare.

Combining AI Data Analysis & Kinetic Military Action

At the same time, this technical architecture is only as effective as the long-range, high-resolution sensors that find targets and the precision-guided weapons capable of destroying them. Such sensors and weapons may be of little use if targets, or moments of great relevance, are not found and identified in time to be destroyed.

Information-driven sensing and AI-enabled data analysis are then, by design, merged with so-called “kinetic” options such as missiles, rockets, guns, bombs and other weapons to complete the kill chain ahead of an enemy. For the Army, it can begin with unmanned-unmanned teaming, wherein forward-operating mini-drones transmit sensor specifics to a larger drone, which then passes the data through an AI-capable system known as Firestorm. In a matter of seconds, Firestorm pairs the threat or target data provided by a “sensor” with the optimal method of attack, or “shooter.”

Beginning with a mini-drone or SATCOM network miles away, the process of finding, identifying and destroying an enemy with a ground combat vehicle, helicopter or even dismounted force can be reduced from 20 minutes … to 20 seconds.
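The sensor-to-shooter pairing step can be pictured as a constrained matching problem. The sketch below is a deliberately simplified stand-in, not the Army’s Firestorm logic: the threat fields, shooter attributes and selection rule are assumptions chosen only to illustrate how a “sensor” report might be matched to an available “shooter” in software.

```python
# Toy illustration of sensor-to-shooter pairing. This is NOT the Army's Firestorm
# system; the scoring rule and data fields are invented for illustration only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Threat:
    threat_id: str
    range_km: float
    target_type: str          # e.g. "armor", "infantry", "cruise_missile"


@dataclass
class Shooter:
    name: str
    max_range_km: float
    effective_against: List[str]
    time_to_engage_s: float


def pair_sensor_to_shooter(threat: Threat, shooters: List[Shooter]) -> Optional[Shooter]:
    """Pick the in-range, effective shooter that can engage the threat fastest."""
    candidates = [
        s for s in shooters
        if threat.range_km <= s.max_range_km and threat.target_type in s.effective_against
    ]
    return min(candidates, key=lambda s: s.time_to_engage_s) if candidates else None


if __name__ == "__main__":
    shooters = [
        Shooter("attack_helicopter", 8.0, ["armor", "infantry"], 45.0),
        Shooter("ground_combat_vehicle", 4.0, ["armor"], 20.0),
        Shooter("155mm_artillery", 30.0, ["armor", "infantry"], 60.0),
    ]
    threat = Threat("T-017", range_km=3.5, target_type="armor")
    best = pair_sensor_to_shooter(threat, shooters)
    print(best.name if best else "no shooter available")  # ground_combat_vehicle
```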

The idea with the Air Force’s ABMS is similar in scope and application, as it involves gathering, processing and disseminating crucial combat data between satellites, bombers, fighter jets, drones and even ground attack systems.

In one Air Force test circumstance, referred to as an ABMS “off-ramp,” a ground-based 155mm artillery weapon achieved an unprecedented breakthrough by tracking and destroying a high-speed, maneuvering cruise missile.

“Weaponized AI capability” is an ominous-sounding term Pentagon leaders use to explain the serious and growing risks presented by technologically advanced adversaries increasingly capable of building AI-enabled, lethal robotic weapons systems unconstrained by ethics, moral consideration or human decision-making.

The concern centers upon a single question: how should the U.S. respond as countries such as Russia and China field capable and fast-evolving AI-empowered robots, drones and weapons systems potentially detached from human decision-making?

Citing the possibility that “an authoritarian regime, like Russia,” could develop a weaponized AI capability, the Director of the Pentagon’s Joint Artificial Intelligence Center, Lt. Gen. Michael Groen, said U.S. and friendly forces may not be able to use comparable capability, particularly in the event that a potential adversary attacked with AI-empowered weapons absent ethical or humanitarian concerns.

Could this put U.S. forces at a disadvantage? Sure, especially when it comes to decisions about the use of lethal force, given that, per Pentagon doctrine, a human must always be “in the loop.”

Weaponized AI: U.S. & Allies

However, Groen seemed to suggest that advanced weapons developers, scientists and futurists are now working with an international group of like-minded allies interested in accommodating ethical concerns and yet still prevailing in combat. Part of this pertains to doctrinal and technological development, it would seem, which might seek to balance, orient or integrate the best of technical capability with optimal tactics sufficient to repel an AI-driven enemy attack.

“We think that we actually gain tempo and speed and capability by bringing AI principles and ethics right from the very beginning, that we’re not alone in this. We currently have an AI partnership for defense with 16 nations that all who embrace the same set of ethical principles have banded together to help each other think through and to work through how do you actually develop AI in this construct,” Groen said to reporters, according to a Pentagon transcript.

Referring to what he called an “ethical baseline,” Groen said that there are ways to engineer and use highly effective, AI-enabled weapons in alignment with established ethical parameters.

One such possibility, now under consideration by scientists, engineers and weapons developers at the Pentagon, is to architect AI systems able to instantly employ “defensive” or “non-lethal” attacks against non-human targets such as incoming anti-ship missiles, mortars, rockets or drones.

Yet another interesting nuance is that, given the pace and procedural efficiency with which AI-enabled sensors, fire control, weapons systems and data analysis can operate, keeping a human in the decision cycle may not necessarily “slow down” a decision about whether to attack.

Advanced AI analysis can, for instance, compare data regarding an incoming attack against a vast historical database and make instant determinations about which course of action might best address the situation. This would happen because advanced algorithms could draw upon a historical database comparing how particular attack circumstances were addressed in the past, in relation to a host of impactful variables such as weather, terrain, range and available “effectors.”
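One way to picture that comparison is a simple nearest-neighbor lookup over past engagements. The sketch below is purely illustrative, with invented features, records and distance metric; it is not a description of how any Pentagon system is actually built.

```python
# Hypothetical sketch of recommending a course of action by comparing an incoming
# attack against a historical database. Features, records and the distance metric
# are invented; real systems would use far richer data and validated models.
import math
from dataclasses import dataclass
from typing import List


@dataclass
class Engagement:
    range_km: float
    threat_speed_mps: float
    visibility_km: float
    best_response: str        # the "effector" that worked historically


def nearest_historical_match(current: Engagement, history: List[Engagement]) -> str:
    """Return the response used in the most similar past engagement."""
    def distance(past: Engagement) -> float:
        return math.sqrt(
            (current.range_km - past.range_km) ** 2
            + ((current.threat_speed_mps - past.threat_speed_mps) / 100.0) ** 2
            + (current.visibility_km - past.visibility_km) ** 2
        )
    return min(history, key=distance).best_response


if __name__ == "__main__":
    history = [
        Engagement(25.0, 250.0, 10.0, "interceptor_missile"),
        Engagement(5.0, 80.0, 3.0, "gun_system"),
        Engagement(15.0, 300.0, 8.0, "electronic_warfare"),
    ]
    incoming = Engagement(range_km=14.0, threat_speed_mps=280.0,
                          visibility_km=7.0, best_response="")
    print(nearest_historical_match(incoming, history))  # electronic_warfare
```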

Done this way, as was the case during the Army’s Project Convergence exercise last fall, the kill chain can be completed in a matter of seconds with human decision-makers still operating “in the loop.”

“If we can make good decision-making and have informed decision-makers, we think that is the most significant application of artificial intelligence. And then we’ll continue to go from there into other functions. And the list is endless from every, you know, moving logistics successfully around the battlefield, understanding what’s happening based on historical pattern and precedent, understanding the implications of weather or terrain, on maneuvers, all of those things can be assisted by AI,” Groen said.


