A few years ago 'swarming' was all the rage as efforts to combat saturation attacks against shipping stepped up a gear. More recently, attention has turned to small unmanned aircraft, with manufacturers introducing many solutions that counter them by electronic or more physical means.

The video below shows a recent trial of 50 small unmanned aircraft controlled by a single operator.

Each one cost approximately $2,000, and the software and launch system were developed by the Naval Postgraduate School in Monterey, California.


Once in the air, the drones communicated with each other via a system that uses high-powered Wi-Fi rather than conventional drone-communication systems, which would be swamped by the overlapping signals. The launch also gave an opportunity to test swarming algorithms with real drones rather than simulations.

“Most of the swarming operations are things like ‘follow-me’ mode, where one or more UAVs follow a leader around the sky,” says Jones. This allows the whole swarm to be moved without directing the aircraft individually. There are also algorithms for search-and-rescue operations, in which the flight pattern resembles that of foraging bees.
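The 'follow-me' behaviour described above can be sketched as a simple update rule: each follower steers a fraction of the way toward the leader on every tick, holding off once it gets close. This is only an illustrative sketch, not the Naval Postgraduate School's implementation; the function name, gain, and spacing parameters are all assumptions.

```python
# Hypothetical sketch of a "follow-me" swarm update step.
# The names and parameters are illustrative assumptions, not the NPS code.

def follow_me_step(leader_pos, follower_positions, gain=0.1, spacing=5.0):
    """Move each follower a fraction `gain` of the way toward the leader,
    holding position once within `spacing` metres to avoid crowding."""
    new_positions = []
    for (x, y) in follower_positions:
        dx, dy = leader_pos[0] - x, leader_pos[1] - y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist > spacing:  # only close the gap while outside the standoff
            x += gain * dx
            y += gain * dy
        new_positions.append((x, y))
    return new_positions
```

Calling this once per control tick moves the whole swarm by steering only the leader, which is the point of the mode: the operator commands one aircraft and the rest follow.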

“The swarm behaviour looks quite random as the aircraft move around the sky trying to optimally search an area in the shortest amount of time,” says Jones.
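The 'random-looking' foraging-bee search pattern can be approximated by a heavy-tailed random walk (a Lévy-flight-style search), in which mostly short hops are punctuated by occasional long excursions. This is a generic sketch under that assumption; the article does not specify the actual algorithm, and all names and parameters here are hypothetical.

```python
import math
import random

def levy_search_step(pos, heading, area, step_scale=1.0):
    """One step of a Levy-flight-style search over a rectangular area:
    random turns, heavy-tailed step lengths, positions clipped to bounds."""
    # Turn by a random amount up to 90 degrees either way.
    heading += random.uniform(-math.pi / 2, math.pi / 2)
    # Heavy-tailed step length: short moves dominate, rare long jumps occur.
    u = 1.0 - random.random()  # in (0, 1], avoids division by zero
    step = step_scale / math.sqrt(u)
    # Advance and clip to the search area (width, height).
    x = min(max(pos[0] + step * math.cos(heading), 0.0), area[0])
    y = min(max(pos[1] + step * math.sin(heading), 0.0), area[1])
    return (x, y), heading
```

Run independently on each drone, a walk like this looks erratic from the ground while still covering an area efficiently, which matches the behaviour Jones describes.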

Read more at New Scientist

October 23, 2015 10:50 pm

I would have thought the scientists should keep rather quiet about their ambition to set up a 50v50 AI aerial dogfight with no humans in the loop. That could get a lot of liberal attention (with some justification) that the military UAV operators and developers could do without…..

October 24, 2015 8:08 am

I’ve said this before – as far as I’m concerned fully autonomous weapons should be banned/controlled in just the same way as chemical & nuclear weapons are. Software is incapable of value judgement (and has a pretty poor record on latent faults leading to unreliability and flawed functionality). In the role of target identifier, engagement judge, target prosecutor and killer, it is inadequate. It will have a set of inputs that may or may not be fully adequate and it will ignore all other factors, whereas a human operator has the instinctive ability to absorb new information into the decision process.

An example. USS Vincennes destroyed a civilian jet. According to the Ops Room officers all the attributes of a hostile military aircraft launch had been met, and because they did not monitor the civil ATC bands they missed the vital information that it was an airliner going about its legitimate business. Had that bit of information been picked up it is (you would hope) unlikely the guy at the weapon control station would have shot it down.

But if a software coder neglects to include all possible sources of information into their dumb automaton, there is no recourse to common sense. It wouldn’t matter how loud and clear the information might be that the ‘target’ is no target at all, if the software hasn’t been coded to process the data it will be ignored.

October 24, 2015 1:46 pm

I understand and share your fear of “autonomous killing by software”.

However, the USS Vincennes incident shows that, even with a human “in the loop”, that kind of tragedy happens. The problem is not the software, i.e. the technology (it is quite easy to put a “veto” option in any software: “do not hit an airliner”).

It is also not the “frontline soldiers” who actually fire the weapon.

It is the system. In other words: software, manuals, the operational process scheme, and training and training and training, … all combined.

This is only a personal opinion, but there is no way to stop these developments. Someone will do it, not necessarily in the US or UK or Germany or France, but possibly in Russia, China, Iran, India, Korea, or even North Korea or IS. You need only a handful of geniuses, because the basic technologies (AI software, MEMS gyros, high-definition cameras, motors, actuators, Wi-Fi, ..) are all on the commercial market.

However, the fear remains.

So, as you suggest, I think the answer will be an international treaty, or laws, prohibiting the application of this software to real weapons, but not stopping the development of the technology itself.

However (again) I do not have full confidence in my comment. For me, autonomous dogfighting software is NOT a big issue, simply because you need a high-tech unmanned fighter, which is surely quite, quite expensive. The need for frequent maintenance will make it easy to identify its “base”, i.e. the root cause of the killing. Thus, the influence is limited.

There will be many other, more “dangerous” fields. Many, I think. That is the world we live in.

Tad Lyon
October 26, 2015 4:44 pm

Off topic, but can anyone tell me if one of my favorite British bloggers is OK? The Mellow Jihadi website has suddenly gone away.