Satya Nadella traveled to the Axel Springer headquarters to receive the 2023 Axel Springer Award. While there, he sat down for an interview with Axel Springer's CEO, Mathias Döpfner.
The discussion covered a wide range of topics, including leadership, AI, the Activision Blizzard deal, and Nadella's admission that canceling Windows Phone was a mistake.
Does Satya Nadella think AI is safe?
I can't pretend to know what Satya Nadella's thoughts on AI are, but his answer to one question during his recent visit to the Axel Springer headquarters offered a brief glimpse into his possible concerns.
In the interview with Mathias Döpfner, CEO of Axel Springer, Nadella was asked a follow-up question about the ongoing tensions between Western countries and China, as well as the AI arms race.
"Will we see a duopoly of sorts, with two AI world powers competing against each other in a new AI arms race? Or do you think it is imaginable that we will one day have a kind of unilateral AI governance and infrastructure?"
"Well, I do think some level of global governance will be required. The way I look at it, a little bit of competition is what will be there. But if there is going to be a successful, let's call it a 'regime of control' over AI, then we will need some global cooperation like the IAEA. You know, what we've done in the atomic sphere might be the moral equivalent in AI where China also needs to be at the table."
This was a very interesting response, especially on the heels of Christopher Nolan's blockbuster hit "Oppenheimer." In 2023, the world is more acutely aware of the birth and dangers of the atomic program, and of the regulations that followed it, than it has been in decades. On September 14, several tech CEOs met with U.S. lawmakers in Washington to discuss AI regulation, per the BBC.
Meta's Mark Zuckerberg and Google boss Sundar Pichai, as well as Microsoft's former CEO Bill Gates and Microsoft's current CEO Satya Nadella were all in attendance at the closed-door meeting.
However, after the meeting, Republican Senator Mike Rounds and Democratic Senator Cory Booker both said it would take time for Congress to act on the agreed-upon need for regulation. One of the few things both parties can agree on is how long important things take to get done.
Many voices have called for AI to be taken more seriously and treated more carefully. In fact, the New York Times recently reported that "More than 1,000 tech leaders, researchers and others signed an open letter urging a moratorium on the development of the most powerful artificial intelligence systems." We also reported on the Microsoft AI chief warning of coming AI challenges and ethics risks.
"I think if this technology goes wrong, it can go quite wrong...we want to be vocal about that, we want to work with the government to prevent that from happening."
How can regulations help make AI safe?
As the world looks to its leaders for a solution to the threats and dangers posed by AI, few governments are taking the appropriate steps. We reported back in April 2023 that the Biden administration was finally considering rules to govern ChatGPT and Bing Chat, but to date, nothing seems to have come of those considerations.
The European Union has started work on the first comprehensive AI regulation, aptly called the EU AI Act. This could be the catalyst that sets a framework for other countries to follow. More CEOs have also voiced their concerns, such as Elon Musk, a co-founder of OpenAI and a longtime advocate for AI safety.
"AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production, in the sense that it is, it has the potential — however small one may regard that probability, but it is non-trivial — it has the potential of civilization destruction."
What's unique about Satya Nadella's comments is the call for global regulation similar to the IAEA, the International Atomic Energy Agency. It is a rare body whose purpose is to regulate an industry across the entire globe. Other such agencies exist, but their rules are rarely enforced as strictly as those governing atomic energy, for obvious reasons.
As the world becomes smaller, with data able to span the globe in milliseconds, it becomes harder to effectively regulate a digital medium such as AI without full participation from the major players, which, as Satya Nadella notes, must include China.
The US government has claimed that IP theft by China is a major issue in the two countries' trade relationship, yet it has little recourse to prevent the alleged practice from continuing. The same problem would undermine any attempt to curtail the advancement of AI or regulate its use if one of the main players in AI development, namely China, decides to play by its own rule book.
“These things are shaping our world, we have a perfect storm of corporate irresponsibility, widespread adoption, lack of regulation, and a huge number of unknowns.”
Looking at the past to save our future
The pressure to regulate AI is immense. The recent open letter, signed by more than 1,000 notable figures in tech and AI, is reminiscent of the letter Albert Einstein sent to President Roosevelt about the dangers of what would soon become atomic energy.
The hope is that we as a species have learned from the mistakes of the 20th century. We should find ways to safeguard this new frontier of tech innovation and ensure that AI is developed safely, before a catastrophic event forces us to act, as was the case with the atomic program.
With the current state of the world, more caution is needed as corporations put profits over safety and governments put military advantage over mutually assured preservation.
Do you think AI needs to be regulated? Do you think it should be handled globally or on a country-by-country basis? Let us know in the comments.