How militaries are using artificial intelligence on and off the battlefield

Artificial intelligence has been a crucial tool for many nations’ militaries for years. Now, with the war in Ukraine driving innovation, AI’s role is likely to grow. Paul Scharre, vice president and director of studies at the Center for a New American Security, joins Ali Rogin to discuss how militaries have adopted AI and how it might be used on the battlefield in the future.


Notice: Transcripts are machine and human generated and lightly edited for accuracy. They may contain errors.

  • John Yang:

    Artificial intelligence has been a crucial tool for many nations' militaries for years. Now the war in Ukraine is driving innovation. And as that conflict drags on, AI's role in it is likely to grow. Ali Rogin looks at how militaries are using AI today, and ahead to how it might be used in the future.

  • Ali Rogin:

    More artificial intelligence on the battlefield carries great potential, but also higher risk. Right now Congress is pressing the Pentagon through legislation to invest further and move faster on AI to avoid falling behind on this nimble but critical technology.

    Paul Scharre is the Vice President and Director of Studies at the Center for a New American Security. He's also a former Army Ranger, Pentagon official and the author of "Four Battlegrounds: Power in the Age of Artificial Intelligence." Paul, thank you so much for joining us.

    Artificial intelligence is already used to some extent on the battlefield, but we're not talking about completely autonomous technology. What is available currently? What are warfighters already using? And where do you see the technology going in the near future?

  • Paul Scharre, Vice President, Center For A New American Security:

    That's right, we're already seeing AI being used on the battlefield in Ukraine. Now, humans are still in control of the fighting. But one of the things that AI is doing is helping to process information faster.

    AI is being used to sift through satellite images and drone video feeds, and that helps militaries then better understand what's happening on the battlefield, make decisions faster, and then target the enemy faster and more accurately.

  • Ali Rogin:

    So what happens when we do consider taking humans out of control entirely, when these systems are fully automated? What are the pros and cons of that?

  • Paul Scharre:

    Well, we're already seeing drones being used in Ukraine that have all of the components needed to build fully autonomous weapons that can go out over the battlefield, find their own targets, and then, all on their own, attack those targets without any further human intervention. And that raises very challenging legal, moral and ethical questions about human control over the use of force in war.

  • Ali Rogin:

    Now we're seeing Ukraine sort of lead the conversation in the application of these fully autonomous devices. Do you think we're going to see more of that? And is there concern about how they might be used differently by state actors and non-state actors like terrorist organizations?

  • Paul Scharre:

    Well, war is an accelerant of innovation. So the longer this war goes on, the more innovation we're going to see on the battlefield. We're already seeing innovative uses of drones and counter-drone technologies, things like electronic warfare systems that can target drone operators and then call in artillery strikes on the drone operator.

    And that kind of technology pushes militaries towards more autonomy, but it's not just confined to nation-states. ISIS actually had a pretty sophisticated drone army a few years ago, and they were carrying out drone attacks against Iraqi troops that were pretty effective.

  • Ali Rogin:

    And now we've talked about how AI is used in weapons, but how about systems off the battlefield?

  • Paul Scharre:

    Well, most of what militaries do is not actually right at the tip of the spear, fighting. It's logistics, personnel, maintenance. It's moving people and things from one place to another. On a day-to-day basis, it looks a lot like what Walmart or Amazon do. It's what happens at the end that's different.

    And so AI has advantages in all of those other non-combat functions that are critical to how militaries operate. And if militaries can make their maintenance, logistics, personnel and finance functions just 10 percent better, that's going to have huge impacts on, ultimately, their capability at the edge, on the battlefield.

  • Ali Rogin:

    Now, some of what we're seeing in Ukraine is employing commercially available technology that can simply be purchased for a couple thousand dollars. How is the U.S. Department of Defense dealing with keeping up with that sort of competition? How is that playing out?

  • Paul Scharre:

    Well, they're not keeping up. That's the short version. They're woefully behind because the culture is so radically different. And the bottom line is, you can't buy AI the same way that you might buy an aircraft carrier. The military is moving too slowly. It's mired in cumbersome bureaucracy. The leadership of the Pentagon has tried to shake things up. They had a major reorganization last year of the people working on AI and data and software inside the Defense Department.

    But we haven't seen a lot of changes since then. And so the Pentagon is going to have to find ways to cut through the red tape and move faster if they're going to stay on top of this very important technology.

  • Ali Rogin:

    And Paul, lastly, on the global level as this technology continues to proliferate, some countries are calling for the establishment of some general rules of the road. What does that conversation look like? What are some of the contours of that debate?

  • Paul Scharre:

    Well, we've certainly seen debates over the last several years, all the way back to 2014, about lethal autonomous weapons. There's a pretty wide range of views on this. And the United States, as well as other countries like Russia, have said that we have existing rules, we have the laws of war. The laws of war apply to autonomous weapons just like any other weapon, and we need to focus on adhering to those and making sure that any use of these weapons is consistent with the law of war.

  • Ali Rogin:

    And what about the other side of that, those who say we need additional rules and that the existing rules don't fully apply here?

  • Paul Scharre:

    That's right. So there are about 30 countries that have said that they'd like to see a preemptive, legally binding treaty that would ban autonomous weapons before they can be built. But right now, none of the leading military powers or robotics developers are part of that group.

    And so it hasn't yet had the political heft to get to a treaty. That could change as we see the technology advance and, more broadly, as concerns about AI grow, as AI technology advances and there are more calls for global regulation of AI.

  • Ali Rogin:

    Paul Scharre with the Center for a New American Security. Thank you so much for joining us.

  • Paul Scharre:

    Thank you. Thanks for having me.
