In an interview on Leading, a podcast from Alastair Campbell and Rory Stewart, co-hosts of The Rest is Politics, Microsoft and Gates Foundation co-founder Bill Gates talks about his perception of the dangers of AI.
"The key thing is that the good guys have better AIs than the bad guys," Gates says. "The issue is not the AIs getting out of control, it's AIs by people with ill intent being more powerful."
"Just take cyber defence. If the good guys' cyber defence AI is as good or better than the bad guys' cyber attack AI, then that's a good situation. And you're not going to stop development of AI globally. Somebody can argue that maybe you should try and do that and create a world army to go around and invade computer labs, but not many people are pushing on that. And so you're going to have these increasingly powerful AIs that hopefully the good guys stay ahead on."
Gates won't name and shame who he believes to be a bad guy or a good guy in this analogy, though he does call out Russia's instigation of an attack against Ukraine. Otherwise, Gates hopes most countries are planning to work sensibly with AI.
"Hopefully most countries want to see this stuff shaped appropriately," Gates says.
Gates also notes during the episode, though, that individual countries have less influence over the shaping of AI than they might expect. This, Gates says, is because the government market for AI is much, much smaller than the business and consumer markets.
Previous watershed technologies, including the microprocessor, a big driving factor behind Gates' early success at Microsoft, were massively government-funded in their early days.
"Governments have a challenge in that they aren't the early people who funded it [AI]. If you actually go way back 20 or 30 years ago, it was government research money, but now it's the Google, Microsoft, etc. The R&D money is huge."
Gates still works closely with Microsoft, which has teamed up with the AI software market leader, OpenAI. Gates talks to that lot, too, led by Sam Altman. With these companies charging into an AI future, it's perhaps not surprising that Gates comes across as a wary admirer of what's possible with artificial intelligence: understanding the impact AI can have on the market, and already has in some ways, but keen to point out that it must be shaped to avoid dangers.
"Whenever you have innovations they're kind of neutral, in a way, and they can end up empowering just the rich, or they can end up having unexpected negative side effects. With AI we can already say that bad people can use this as a tool for cyberattack or designing weapons of bioterrorism. So, we have to shape this so that the good things, like tutors for kids who need to learn, better medical advice, we need to shape this and have great AIs that are doing cyber defence."
Gates concludes, "Yeah I share all the concerns, but I also see the positives as being incredibly large."
There's one topic that frequently comes up in conversation when I speak with people about the future of AI, and that's the fear of an AI system going haywire and ending the human race. Perhaps it's the decades of popular media on the subject that perpetuate this thinking, or perhaps we just see something in a general artificial intelligence that sounds intrinsically dangerous. But is it the AI or the humans wielding it that's to blame?
I'd still hold there's some degree of uncertainty around the safety of what's known as an artificial general intelligence: the potential for an AI system that no longer acts in accordance with the human actors that created or use it. That's nothing to worry about yet, however, as we've not quite cracked an AI with the all-round smarts of a human.
You can listen to the full interview with Gates on the Leading podcast, which is available on Spotify, Audible, and Apple Podcasts, among others. It's worth a listen, if not for Gates talking about AI then for what it's like to have conspiracy theorists shout at him in the street for allegedly tracking them.