
Why Elon Musk’s AI Research Center Thinks Its New Bot Is Too Dangerous To Release

OpenAI, a research center backed by Elon Musk, has announced that its new AI product is too dangerous to release to the public.

The revolutionary AI system, GPT2, has been nicknamed “deepfakes for text”. The system can write stories, articles, and other texts by predicting which words should come next based on the text it has already seen. However, in an unusual move, the company has decided not to release its research, as it believes the risk of malicious use is too high.

Essentially, the system is a text generator. When GPT2 is given the start of a piece of writing, the system generates the rest of the words based on what it thinks should come next. According to tests conducted by researchers, GPT2 is capable of producing terrifyingly plausible continuations. For example, in a test conducted by The Guardian newspaper, the system was able to produce plausible news copy, complete with invented quotes from politicians.
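If you're curious what “generating the rest of the words” looks like in practice, here's a minimal sketch of the same idea using the much smaller GPT2 model that OpenAI did make public, loaded through the Hugging Face transformers library. To be clear, the library, model name, and prompt here are our own illustration, not part of OpenAI's withheld research:

    # Minimal sketch: next-word text generation with the small,
    # publicly released GPT2 model (not the withheld full model).
    from transformers import pipeline, set_seed

    generator = pipeline("text-generation", model="gpt2")
    set_seed(42)  # make the sampled continuation repeatable

    # Give the model the start of a piece of writing...
    prompt = "Scientists announced today that"

    # ...and it predicts the words it thinks should come next.
    result = generator(prompt, max_length=60, num_return_sequences=1)
    print(result[0]["generated_text"])

The continuation reads like ordinary prose, but every word of it is made up by the model, which is exactly what has people worried.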

Not only is the bot incredibly convincing, but it has also managed to iron out many of the glitches found in previous text generator systems. Whereas previous models have done weird stuff like forgetting what they’re writing about halfway through or messing up long sentence structures, GPT2 seems to have none of these problems.

Although these capabilities represent a significant breakthrough, OpenAI wants to keep the research behind closed doors – for now. According to Jack Clark, the company’s head of policy, researchers need to keep running tests to assess the risks associated with the program:

“We need to perform experimentation to find out what they can and can’t do. If you can’t anticipate all the abilities of a model, you have to prod it to see what it can do. There are many more people than us who are better at thinking what it can do maliciously.”

Obvious malicious uses include spamming and fake news. So far, OpenAI has tested a version of the program that can write endless positive or negative product reviews, showing the bot’s clear potential for spam. Furthermore, because the system is trained on text scraped from the Internet, there’s a possibility the bot could turn nasty. After what happened to Microsoft’s Twitter bot Tay, we’re not surprised they’re a little nervous.
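To get a feel for how easily this kind of generator turns into review spam, here's a rough sketch of the idea, again using the small public GPT2 model through the Hugging Face transformers library. The prompts and setup are our own illustration, not OpenAI's actual test:

    # Rough sketch: steering the generator toward positive or negative
    # product reviews simply by changing the prompt (illustrative only).
    from transformers import pipeline, set_seed

    generator = pipeline("text-generation", model="gpt2")
    set_seed(0)

    # Hypothetical prompts that nudge the sentiment of the continuation
    prompts = {
        "positive": "I absolutely love this blender. Five stars, because",
        "negative": "This blender broke after a week. One star, because",
    }

    for sentiment, prompt in prompts.items():
        review = generator(prompt, max_length=50, num_return_sequences=1)
        print(sentiment, "->", review[0]["generated_text"])

Run that in a loop a few thousand times and you have a review farm, which is exactly the kind of misuse OpenAI is worried about.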

Instead, researchers at OpenAI are trying to prepare for technology they think will be pretty mainstream in the near future. By keeping it under wraps, Clark explains, OpenAI is trying to think carefully about how to mitigate the risks:

“We’re not saying we know the right thing to do here, we’re not laying down the line and saying ‘this is the way’ … We are trying to develop more rigorous thinking here. We’re trying to build the road as we travel across it.”

Yikes. All we can say is that we hope they can think rigorously enough to stop GPT2 from going the same way as Tay.
