
OpenAI has finally released the fake news generator that it censored a few months ago for being too “dangerous”


Do you remember the artificial intelligence christened GPT-2, specialized in generating realistic text, whose developers (OpenAI) decided, in an unprecedented and much-criticized move, not to publish it because of the danger they saw in its ability to spread fake news?

That was last February. Now, nine months later, OpenAI has decided that the lion was not as fierce as they themselves painted it, or, to put it another way, that there is “no strong evidence of misuse” of the partial versions released over these months, so they have proceeded to release the complete model.

GPT-2 is based on a natural language processing technique known as ‘language modeling’: machine learning models dedicated to predicting what the next word in a text should be, given all the words that precede it.
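To make the “predict the next word” idea concrete, here is a minimal sketch, assuming the Hugging Face transformers and torch packages and the publicly hosted “gpt2” checkpoint (none of which are OpenAI’s own tooling), that asks the model for its most likely continuations of a prompt:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the publicly released GPT-2 weights (the small "gpt2" checkpoint).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The scientists discovered a herd of unicorns living in"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence, vocabulary)

# The logits at the last position score every candidate next token;
# softmax turns them into a probability distribution over the vocabulary.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob:.3f}")
```

Generation is just this step in a loop: sample or pick a next token, append it to the prompt, and predict again.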

One of the peculiarities of GPT-2 is that, starting from an initial text (supplied by the user), it is capable not only of continuing it to generate a longer passage, but can also be prompted to translate or summarize that same text, and even to answer questions about its content.
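As a sketch of that prompting style (again assuming the Hugging Face transformers package rather than anything OpenAI ships), the same model can be steered toward continuation or summarization simply by how the input is framed; appending “TL;DR:” is the zero-shot summarization trick described in the GPT-2 paper:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Plain continuation: GPT-2 extends the prompt token by token.
print(generator("OpenAI has finally released the full GPT-2 model,",
                max_new_tokens=40, num_return_sequences=1)[0]["generated_text"])

# Zero-shot summarization: ending the input with "TL;DR:" nudges the
# model to continue with a summary of the preceding passage.
article = ("GPT-2 is a language model trained on millions of web pages. "
           "OpenAI initially withheld the full model over misuse concerns. TL;DR:")
print(generator(article, max_new_tokens=30)[0]["generated_text"])
```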

When the objective is to generate fake news, GPT-2 can produce very convincing texts that seem the product of genuine intelligence, thanks to the way sentences and themes are woven together; but play with GPT-2 long enough and the limitations of the model become clear.

Maybe it wasn’t so bad…

One of its major shortcomings, for example, is long-term consistency: the names and characteristics of the characters mentioned in a text can end up changing as the text goes on. There have also been cases where GPT-2 generated texts that talk about “four horns” or about “fires under water”. You can check this yourself by trying GPT-2 through a web interface such as TalkToTransformer.com.

Jack Clark, Policy Director at OpenAI, explained in February that the decision not to publish the full version of GPT-2 at the time was driven not only by the risk that it could be used to generate very convincing ‘fake news’, but also by the fact that it would make it easier to automate that content and optimize it (based on demographic factors) for specific social sectors.

Nvidia research director Anima Anandkumar fiercely criticized OpenAI’s decision at the time not to make the model public:

“Where is any evidence that your system is capable of doing that [what you say it does]? Which independent researchers have analyzed your systems? None. If you think it is really capable of that, you should open it up to researchers, not to media eagerly seeking clickbait.”

In any case, just in case there was some truth to OpenAI’s suspicions, the company has continued researching in the field of automated systems, not so much to write new ‘fake news’ as to help detect texts that have been created using GPT-2.
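That detection work included a RoBERTa-based classifier fine-tuned to tell GPT-2 output from human-written text. Below is a minimal sketch of querying such a detector; the checkpoint name is an assumption (a commonly mirrored copy of OpenAI’s detector on the Hugging Face Hub, whose exact name may differ):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed checkpoint name: a community mirror of OpenAI's RoBERTa-based
# GPT-2 output detector on the Hugging Face Hub.
name = "roberta-base-openai-detector"
tokenizer = AutoTokenizer.from_pretrained(name)
detector = AutoModelForSequenceClassification.from_pretrained(name)

text = "A suspiciously fluent paragraph whose origin we want to check."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    probs = torch.softmax(detector(**inputs).logits, dim=-1)[0]

# Read the label names from the checkpoint's config rather than
# hard-coding which index means "machine-generated".
for index, label in detector.config.id2label.items():
    print(f"{label}: {probs[index]:.3f}")
```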