If you remember, one of the things we discussed was how part of the military’s job is to keep as many of its own soldiers alive as possible. We do that by creating long-range weapons (sometimes referred to as standoff weapons), better armor, stealth, maneuverability, and more. So, what if we completely remove our own soldiers, sailors, and airmen from harm’s way? Then (our) people cannot be killed.
This is the idea behind drones. And each branch of our military, as well as many militaries around the world, has either developed them or is actively developing them.
US Naval Drones – while these are all aircraft, the Navy is also exploring the use of drone submarines and ships.
Drones are most often used for surveillance as well as attack. While we’ve historically seen expensive drones (UAVs costing several million dollars each), recent conflicts in Ukraine have shifted toward off-the-shelf drones upgraded with 3D-printed parts. This means you can have a cheap drone for only a few thousand dollars, and even if its range and capabilities are limited, its small size makes it harder to hit, and/or the defender has to expend a more expensive countermeasure to deal with it.
As of this writing, most drone warfare has required a human operator to actually carry out an attack. But the question is: should this step be required? Could AI do as good a job, or better?
Consider a situation where we send (human) troops into combat. We give them rules of engagement and allow them to make decisions based upon those rules. They may need to justify a decision afterwards, but they have the decision-making authority because they are right there. They may not have time to wait for someone to respond back; if they wait even a minute, non-combatants might enter the area and be at risk, among any number of other situations.
So, if we can properly define the rules of engagement, should we allow AI to “take the shot”?
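To make that question concrete, here is a minimal sketch of what “encoding rules of engagement” could look like. Everything in it – the Target fields, the confidence threshold, the rule names – is hypothetical and invented purely for illustration; a real system would be vastly more complex.

```python
# A deliberately simplified, hypothetical sketch of rules of engagement
# as code. All field names and thresholds here are invented for
# illustration, not drawn from any real system.
from dataclasses import dataclass

@dataclass
class Target:
    confirmed_hostile: bool            # positive identification of a combatant
    id_confidence: float               # 0.0-1.0 classifier confidence
    civilians_within_blast_radius: int # estimated non-combatants nearby
    inside_engagement_zone: bool       # within the authorized geographic area

def cleared_to_engage(t: Target) -> bool:
    """Return True only if every rule is satisfied.
    Any doubt (low confidence, civilians nearby, outside the zone)
    defaults to NOT firing."""
    if not t.inside_engagement_zone:
        return False
    if not t.confirmed_hostile or t.id_confidence < 0.95:
        return False
    if t.civilians_within_blast_radius > 0:
        return False
    return True
```

Notice that the hard part isn’t the if-statements; it’s producing inputs like confirmed_hostile and id_confidence reliably, and specifying the objective so the system can’t satisfy it in some way we never intended – which is exactly what the next story is about.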
Of course, we will want to make sure the rules are clear, as this situation illustrates – AI drone attacks operator in simulation.
Drones are such a game changer that the US Marine Corps is looking at adding drone operators at the squad level. This is huge, because the Army wants them two or three levels up, at the company or battalion level, which means squads have to call in for support and wait for it to arrive. Whether this will be used for attack, surveillance, or both is yet to be seen. – source. (Ryan McBeth is an excellent source of military info and intelligence if you want to look for content on that subject.)
We also don’t know at this time whether the drones will be aerial or land-based “robo-grunts”, and there might be some of both, depending upon the needs of the mission.
There are other potential advantages to having AI be responsible for the “kill”. First, emotion isn’t involved, which is both a positive and a negative. Someone angry about recently losing a friend might want to seek revenge; AI will not have those same responses. Likewise, AI doesn’t get PTSD, like so many combatants do. Even drone operators have been found to develop PTSD despite having no direct contact with the enemy, often because they may spend weeks surveilling a target and come to feel a connection.
But what is the chance of a robo-grunt taking up arms against its human operators, or two AIs from opposing sides choosing to join forces? This is an interesting question, and we want to make sure it cannot happen, or that there is some simple limitation which prevents them from doing so. (Maybe making it so they can’t re-arm or refuel on their own?)
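As one hedged sketch of what such a limitation might look like: re-arming could require a token signed by a human operator, so the platform can never grant itself more ammunition. The function names and the use of a shared secret here are simplifications invented for illustration; a real system would use proper public-key signatures and hardware interlocks.

```python
# Hypothetical sketch: re-arming only succeeds with a human-signed
# authorization token. Names and the shared-secret scheme are
# illustrative simplifications, not a real design.
import hmac
import hashlib

SHARED_SECRET = b"replace-with-a-real-key"  # held only by human operators

def sign_authorization(mission_id: str) -> str:
    """Run on the operator's console to approve one re-arm cycle."""
    return hmac.new(SHARED_SECRET, mission_id.encode(), hashlib.sha256).hexdigest()

def request_rearm(mission_id: str, token: str) -> bool:
    """Run on the drone: re-arming works only with a valid token,
    so an autonomous system cannot authorize itself."""
    expected = hmac.new(SHARED_SECRET, mission_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)
```

The appeal of this kind of limitation is that the constraint is cryptographic rather than behavioral: the AI doesn’t have to “choose” to obey it.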
Another question: should these robo-grunts look human? Is that a form of camouflage? And how realistic should they look?
In the sci-fi show Star Trek: Voyager, the crew ran into a race of robots built to attack. The robots wiped out their biological “Builders”, as they were called, but they couldn’t repair one another, which was essentially how they were kept in check.
The Ethics of Drones and AI
So the question becomes: is it “unsporting” to attack with drones when you, and your troops, face no risk? What ethical dilemmas does this pose?
Below is a short sci-fi film on YouTube, with closing comments from an AI professor/researcher. I highly recommend watching it. While you are watching, note that everything they talk about is available and can be done right now, with the possible exception of the communication between drones of that size. Today’s drones would need to be roughly one and a half to two times that size to fully include all of those processes, and would have limited, but still effective, ranges.
If you’re asking whether a drone that small, with that small an explosive charge, could be effective… the short answer is yes. There are rumors that some countries have used small charges hidden in cell phones for assassinations. (These were the dumb/feature/“brick” phones, and even then you couldn’t put that much of a charge into them.)
And here is a recent report on how the US is looking at this concept: https://taskandpurpose.com/tech-tactics/the-pentagon-is-facing-hard-decisions-about-letting-ai-weapons-kill/