DARPA tests dogfighting AI against human fighter pilot – Flightglobal


The Pentagon’s secretive technology development agency has for the first time pitted an artificial intelligence (AI) agent against a human fighter pilot in air combat drills.

The Defense Advanced Research Projects Agency (DARPA), working with the US Air Force (USAF) test pilot school, calls the tests a breakthrough in the efforts to automate close-range fighter manoeuvres and combat aviation more broadly.

Using a specially modified Lockheed Martin F-16D known as the X-62A VISTA, DARPA challenged an AI algorithm to engage in within-visual-range (WVR) combat – better known as dogfighting or turn fighting – against a flesh-and-blood fighter pilot in another F-16.

DARPA says the tests, disclosed on 17 April, took place in 2023 and marked the first time an AI system and human have gone head-to-head in WVR air combat.

The milestone was confirmed by the USAF, with the air force’s top civilian official describing the feat as a “transformational moment”.

“In 2023, the X-62A broke one of the most-significant barriers in combat aviation,” says air force secretary Frank Kendall. “The potential for autonomous air-to-air combat has been imaginable for decades, but the reality has remained a distant dream up until now.”

The USAF and DARPA have been using the X-62A Variable In-flight Simulator Test Aircraft for several years to develop and test autonomous flight technologies. According to Lockheed, an artificial intelligence agent aboard the X-62A successfully logged more than 17 flight hours during evaluations in December 2022 at Edwards AFB in California.

Those tests apparently set the stage for the autonomous dogfighting exercise completed less than a year later in September 2023, also at Edwards.

DARPA says its Air Combat Evolution (ACE) team conducted 21 test flights between December 2022 and September 2023, making changes to over 100,000 lines of flight-critical software during that time – which it calls an “unprecedented pace of development”.

Work included development of new machine-learning methods to train and test the AI agent on a range of parameters, including flight-envelope protection, aerial- and ground-collision avoidance, combat-training rules, weapons-engagement zones and clear avenues of fire.

Flight safety was evaluated first using defensive manoeuvres, before switching to offensive, high-aspect nose-to-nose engagements that brought the dogfighting jets within 610m (2,000ft) of each other at speeds of up to 1,043kt (1,931km/h).

In the interest of safety, the X-62A carried two human pilots who had the ability to disengage the AI, the air force notes. But that safety backup was not required “at any point” during the dogfights over Edwards.

“We have to be able to trust these algorithms to use them in a real-world setting,” says Lieutenant Colonel Ryan Hefron, ACE programme manager for DARPA.

Building trust is one of the programme’s major objectives, the agency says, with the ultimate goal being to enable human-machine teaming that will give friendly pilots an advantage in “increasingly complex air combat scenarios”.

The development of autonomous jets – called Collaborative Combat Aircraft (CCA) within the USAF – has become a signature initiative for Kendall. The service selected five manufacturers in January to develop the first generation of CCA prototypes.

Several firms, including Boeing and Kratos, already have flight-capable autonomous jets undergoing evaluation. General Atomics also recently announced a remotely piloted jet, with the potential to incorporate future autonomy.

The recent X-62A dogfighting exercise is meant to feed into the CCA development effort, providing an outlet to test and improve autonomous-flight-control software.

“Dogfighting was the problem to solve so we could start testing autonomous artificial intelligence systems in the air,” says Bill Gray, chief test pilot for the USAF. “Every lesson we’re learning applies to every task you could give to an autonomous system.”

Notably, DARPA says air combat requires a more powerful and flexible autonomy agent than more-predictable flying tasks, such as landing on aircraft carriers.

The agency says machine learning-based agents, including the type being used by the ACE programme, “possess remarkable capacity to characterise complex, non-linear relationships in large, multi-dimensional state spaces that lack explicit rules”.

Washington has set a goal of fielding thousands of such autonomous systems in the coming years, with an eye toward countering China’s numerical advantages in the Western Pacific, particularly with ships and precision missiles.
