Exclusive: AI Just Controlled a Military Plane for the First Time Ever – My Comments





[There is much ado about AI in warfare. But in many respects AI has already been appearing in war ever since WW2. For example, proximity fuses are a definite form of logic and "thought" in a basic way. You fire the shell, but the shell has to determine at what point it needs to explode. I have my skepticism about how effective AI will be. One area where it could "kill", literally, is with SPEED. Anything related to speed of reactions and even accuracy is where a human can be beaten by a machine. But it gets much more complex than that. Computers themselves have been used in war, as have other forms of electronic technology. It is a fact that weapons will become ever more complex. This has benefits, but it also has problems. My own views are not fully formed on this matter. I do think it is somewhat overblown. This is basically about computer programming – that's what is being discussed here. There is excellent news for Whites hidden in this though. 🙂  Jan]

The U.S. Air Force flew an artificial intelligence (AI) copilot on a U-2 spy plane in California.
The flight marked the first time in the history of the Department of Defense that an AI took flight aboard a military aircraft.

The AI algorithm, developed by Air Combat Command’s U-2 Federal Laboratory, was trained to execute specific in-flight tasks that would otherwise be done by the pilot.

On December 15, the United States Air Force successfully flew an AI copilot on a U-2 spy plane in California, marking the first time AI has controlled a U.S. military system. In this Popular Mechanics exclusive, Dr. Will Roper, the Assistant Secretary of the Air Force for Acquisition, Technology and Logistics, reveals how he and his team made history.

Teaming artificial intelligence (AI) with pilots is no longer just a matter for science fiction or blockbuster movies. On Tuesday, December 15, the Air Force successfully flew an AI copilot on a U-2 spy plane in California: the first time AI has controlled a U.S. military system.

After more than a million prior training runs, the flight was a small step for the computerized copilot, but a giant leap for “computerkind” in future military operations.

The U.S. military has historically struggled to develop digital capabilities. It’s hard to believe difficult-to-code computers and hard-to-access data—much less AI—held back the world’s most lethal hardware not so long ago, in an Air Force not far, far away.

But starting three years ago, the Air Force took its own giant leap toward the digital age. Finally cracking the code on military software, we built the Pentagon’s first commercially-inspired development teams, coding clouds, and even a combat internet that downed a cruise missile at blistering machine speeds. But our recent AI demo is one for military record books and science fiction fans alike.

With call sign ARTUµ, we trained µZero—a world-leading computer program that dominates chess, Go, and even video games without prior knowledge of their rules—to operate a U-2 spy plane. Though lacking those lively beeps and squeaks, ARTUµ surpassed its motion picture namesake in one distinctive feature: it was the mission commander, the final decision authority on the human-machine team. And given the high stakes of global AI, surpassing science fiction must become our military norm.

Our demo flew a reconnaissance mission during a simulated missile strike at Beale Air Force Base on Tuesday. ARTUµ searched for enemy launchers while our pilot searched for threatening aircraft, both sharing the U-2’s radar. With no pilot override, ARTUµ made final calls on devoting the radar to missile hunting versus self-protection. Luke Skywalker certainly never took such orders from his X-Wing sidekick!

The fact ARTUµ was in command was less about any particular mission than how completely our military must embrace AI to maintain the battlefield decision advantage. Unlike Han Solo’s “never-tell-me-the-odds” snub of C-3PO’s asteroid field survival rate (approximately 3,720 to 1), our warfighters need to know the odds in dizzyingly-complex combat scenarios. Teaming with trusted AI across all facets of conflict—even occasionally putting it in charge—could tip those odds in our favor.

But to trust AI, software design is key. Like a breaker box for code, the U-2 gave ARTUµ complete radar control while “switching off” access to other subsystems. Had the scenario been navigating an asteroid field—or more likely field of enemy radars—those “on-off” switches could adjust. The design allows operators to choose what AI won’t do to accept the operational risk of what it will. Creating this software breaker box—instead of Pandora’s—has been an Air Force journey of more than a few parsecs.
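The article does not publish the U-2’s actual software, but the “breaker box” idea—giving the AI full control of one subsystem while hard-blocking every other—can be shown with a minimal sketch. All of the names below (Subsystem, BreakerBox, the command strings) are hypothetical illustrations, not the Air Force’s code.

```python
from enum import Enum, auto


class Subsystem(Enum):
    """Aircraft subsystems the AI might request access to (illustrative only)."""
    RADAR = auto()
    NAVIGATION = auto()
    FLIGHT_CONTROLS = auto()
    COMMUNICATIONS = auto()


class BreakerBox:
    """Gates every AI call to a subsystem behind an explicit on/off switch."""

    def __init__(self, enabled: set[Subsystem]):
        # Operators decide before the mission which breakers are closed.
        self._enabled = frozenset(enabled)

    def call(self, subsystem: Subsystem, command: str) -> str:
        if subsystem not in self._enabled:
            # The AI simply cannot reach a subsystem whose breaker is open.
            raise PermissionError(f"{subsystem.name} is switched off for the AI")
        return f"executed '{command}' on {subsystem.name}"


# Mission setup analogous to the demo: radar only, everything else off.
box = BreakerBox(enabled={Subsystem.RADAR})
print(box.call(Subsystem.RADAR, "scan sector for launchers"))   # allowed
try:
    box.call(Subsystem.FLIGHT_CONTROLS, "bank left")            # blocked
except PermissionError as err:
    print(err)
```

The design choice described above is exactly this: operators close only the breakers whose operational risk they accept, and anything behind an open breaker is unreachable by the AI.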

The journey began early in 2018, when I approved a hoodie-wearing Air Force team (fittingly named Kessel Run for a Star Wars smuggling route) to “smuggle” commercial DevSecOps software practices into our Air Operations Center. By merging development, security, and operations using modern information technology, DevSecOps produced higher-quality code faster and more continuously. Sounds perfect for a digitally-challenged Pentagon, right?

You’d think. Kessel Run bent all the rules and definitely “shot first” at the Pentagon’s fixation on five-year development plans with crippling baselines. As Han Solo advocated, keeping momentum sometimes required a good blaster at our side. Thankfully, Kessel Run’s results were game-changing, outpacing previous programs and inspiring a generation of Air Force and Space Force DevSecOps teams, including our U-2 FedLab.

"Given the high stakes of global AI, surpassing science fiction must become our military norm."

But coding effectively is only one element of trusted AI design. A year later, I directed Service-wide adoption of coding clouds using the landmark technologies of containerization and Kubernetes. Containers virtualize and isolate everything code needs to run, and Kubernetes then orchestrates them, selectively powering disparate software like a dynamic-but-secure breaker box.
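For readers unfamiliar with those two technologies, here is a minimal sketch of how a containerized workload is handed to Kubernetes to orchestrate, using the official Kubernetes Python client. The image name, resource limits, and pod details are invented for illustration and are not the FedLab’s actual deployment.

```python
from kubernetes import client, config

# Load cluster credentials (a kubeconfig file on a workstation, for example).
config.load_kube_config()

# A container bundles the code plus everything it needs to run;
# the image name here is purely illustrative.
container = client.V1Container(
    name="radar-tasking-agent",
    image="registry.example.com/fedlab/radar-agent:1.0",
    resources=client.V1ResourceRequirements(limits={"cpu": "2", "memory": "4Gi"}),
)

# Kubernetes orchestrates containers: it decides where this pod runs,
# restarts it on failure, and keeps it isolated from other workloads.
pod = client.V1Pod(
    api_version="v1",
    kind="Pod",
    metadata=client.V1ObjectMeta(name="radar-agent-demo", labels={"app": "radar-agent"}),
    spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```

Because the container image carries its own dependencies, the same artifact that runs in a lab cloud can, in principle, be loaded unchanged onto other hardware—which is the portability point the next paragraph makes about flying the software on the U-2.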

Running ARTUµ containers in our FedLab cloud also proved they would run identically on the U-2—no lengthy safety or interference checks required! This is how we get evolving software—especially AI—out of our clouds and safely onto planes flying through them.

Yet this trusted design didn’t create ARTUµ’s copilot abilities. You have to train for that. Like a digital Yoda, our small-but-mighty U-2 FedLab trained µZero’s gaming algorithms to operate a radar—reconstructing them to learn the good side of reconnaissance (enemies found) from the dark side (U-2s lost)—all while interacting with a pilot. Running over a million training simulations at their “digital Dagobah,” they had ARTUµ mission-ready in just over a month.
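Neither µZero’s internals nor the FedLab’s simulator are described here, but the “good side versus dark side” framing is essentially a reward function in reinforcement learning: positive reward for enemies found, a large penalty for a lost U-2. The sketch below illustrates that idea with a toy environment and a random stand-in for the agent; every class name, number, and probability in it is an assumption made purely for illustration.

```python
import random


class RadarSimEnv:
    """Toy stand-in for a reconnaissance simulator (purely illustrative)."""

    def reset(self) -> list[float]:
        self.steps = 0
        return [0.0, 0.0]                      # placeholder observation

    def step(self, action: int) -> tuple[list[float], float, bool]:
        self.steps += 1
        enemy_found = action == 0 and random.random() < 0.3
        u2_lost = action != 0 and random.random() < 0.05
        # Reward shaping mirrors the article's framing:
        # finding launchers is good, losing the U-2 is very bad.
        reward = 1.0 if enemy_found else 0.0
        if u2_lost:
            reward = -100.0
        done = u2_lost or self.steps >= 50
        return [float(self.steps), reward], reward, done


def run_training(num_episodes: int) -> None:
    env = RadarSimEnv()
    for _ in range(num_episodes):
        obs, done = env.reset(), False
        while not done:
            action = random.choice([0, 1])     # a real agent would plan here
            obs, reward, done = env.step(action)
            # agent.update(obs, action, reward) would go here


run_training(num_episodes=100)                 # small run for demonstration
```

The intent of shaping rewards this way is that, over the million-plus simulated runs the article describes, the agent learns to task the radar so launchers are found without sacrificing the aircraft.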

Source: https://www.popularmechanics.com/military/aviation/a34978872/artificial-intelligence-controls-u2-spy-plane-air-force-exclusive/?source=nl&utm_source=nl_pop&utm_medium=email&date=121720&utm_campaign=nl22284779&src=nl




