Sam Altman on whether AI should be allowed to defend Seoul: “I don't want AI to be part of that decision”
Amid the debate over whether humans can entrust artificial intelligence (AI) with fighting wars, OpenAI CEO Sam Altman, one of the most influential figures in the AI industry, acknowledged that it is not an easy question. During a conversation on “Geopolitical Transformation in the Age of AI” at the Brookings Institution, a Washington, D.C. think tank, on Sunday (local time), Altman was asked about a scenario in which North Korea launches a surprise attack on Seoul and South Korea must rely on AI, which can react faster than humans, to defend itself.
The moderator asked under what circumstances it would be acceptable to entrust AI with the decision to kill humans, positing a scenario in which North Korea launches 100 military aircraft toward Seoul and South Korea uses a swarm of AI-controlled robots to shoot them all down, killing 100 North Korean pilots.
“The AI may or may not make the decision to intercept when the aircraft are approaching South Korea and there's no time for human decision-making. But can we really be sure that such an attack is happening? How certain do we need to be? What are the expected casualties? Where do we draw the line in the gray area? There are a lot of questions we need to ask,” Altman said.
“I've never heard anyone say, ‘AI should be able to decide to launch a nuclear weapon,’” he said. “I've also never heard anyone argue that AI shouldn't be used when you need to act really fast, like when intercepting an incoming missile. There's a gray area in between.” Admitting that this is not his area of expertise, he added, “I hope we don't have to make those decisions at OpenAI.”
When asked about the impact of geopolitical competition on the AI industry, he said, “We're very clearly on the side of the United States and our allies.” “There's also a humanitarian aspect, and we want this technology to benefit humanity as a whole, not just people who happen to live in certain countries with leadership that we don't agree with,” he said.
He believes that “AI compute” - the computing resources needed to run AI systems - and AI infrastructure such as semiconductors and data centers are likely to be “the most important commodities of the future.” He hopes that governments, as well as the private sector, will invest in AI infrastructure as a public good and distribute it equitably so that it becomes affordable and available to everyone. He explained that the supply of semiconductors had been a bottleneck in expanding AI infrastructure but has largely been resolved, and that securing enough electric power is now the harder problem.
He said he would like to see a “broad and inclusive coalition, led by the U.S.,” drive the expansion of AI infrastructure. “It's not going to work, and the rest of the world isn't going to like it, if it's just the United States building AI data centers. I wouldn't feel good about it either,” he said. “While we will disagree with China on many important things about AI, I am generally hopeful that we all share the goal of reducing the catastrophic risks of AI,” he added.
He said AI companies are focused on preventing AI from being used to interfere in elections: “We're going to be paranoid until the election is over.” He noted that people are warier of AI-enabled fake news than in the past, so it won't be as easy to use AI to influence elections in the same way, but he did not rule out the possibility of new threats emerging.