Microsoft gets exclusive access to AI deemed 'too dangerous to release'

(Mohammad Rezaie)

Microsoft has an exclusive license to use OpenAI’s GPT-3 artificial intelligence language generator, the company has announced.

GPT-3’s predecessor, GPT-2, made headlines for being “too dangerous to release”; the new model has numerous capabilities, including designing websites, prescribing medication, answering questions, and penning articles.

Microsoft says it will “leverage its technical innovations to develop and deliver advanced AI solutions for our customers”, although it was not specific about what those would be.

“The scope of commercial and creative potential that can be unlocked through the GPT-3 model is profound, with genuinely novel capabilities – most of which we haven’t even imagined yet”, wrote Kevin Scott, Microsoft’s executive vice president and chief technology officer.

“Directly aiding human creativity and ingenuity in areas like writing and composition, describing and summarizing large blocks of long-form data (including code), converting natural language to another language – the possibilities are limited only by the ideas and scenarios that we bring to the table,” he added.

OpenAI clarified on its own blog that the deal will not affect access to GPT-3 through OpenAI’s API, so existing and future users of the model will be able to continue to build applications.

It says its commercial model has received tens of thousands of applications. GPT-3 will also remain in a limited beta for academics to test the capabilities and limitations of the model.

Microsoft and OpenAI already have existing relationships; Microsoft Azure is the cloud computing service on which OpenAI trains its artificial-intelligence programs, and last year Microsoft became OpenAI’s exclusive cloud provider.

OpenAI’s program was deemed so dangerous because, when fed a piece of text, it can predict the words that come next with such a high degree of accuracy that its output can be difficult to distinguish from a human’s.
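The next-word-prediction idea at the heart of models like GPT-3 can be sketched with a deliberately tiny bigram model. This is a toy illustration of the training objective only, not OpenAI’s actual architecture, and the corpus and function names here are invented for the example:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        counts[current][following] += 1
    return counts

def predict_next(counts, word):
    """Return the word most frequently observed after `word`, if any."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# A tiny made-up corpus; GPT-3 is instead trained on hundreds of
# billions of words and conditions on long contexts, not a single word.
corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once
```

Scaling this idea up from single-word contexts to long passages, and from frequency counts to a large neural network, is what lets GPT-3 continue a prompt with human-like text.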

As such, it could be abused by extremist groups to create "synthetic propaganda" for white supremacists or jihadist Islamists, for example.

"Due to our concerns about malicious applications of the technology, we are not releasing the trained model," wrote OpenAI in February 2019.

Some believed that the impressive capabilities of the algorithm meant that it would threaten industries or even show self-awareness.

However, OpenAI’s CEO Sam Altman has said that such exaggerations are just “hype”.

“It’s impressive … but it still has serious weaknesses and sometimes makes very silly mistakes,” he added.
