Of course, the new model is somewhat more convincing than its predecessor, making it easier for it to fool a human reader.
The organization decided to release the first part of the model in February as part of a staged rollout. The delayed release of the full model was due to safety concerns, as its creators believed it could be used for malicious purposes by hackers or terrorists.
As OpenAI admits, GPT-2 could be used to create misleading news articles, impersonate others online, automate the production of abusive or fake social media content, and automate the creation of spam and phishing content.
According to OpenAI, people find the text produced by the new 1.5-billion-parameter GPT-2 model "convincing", but only slightly more so than that of the 774-million-parameter model released in August.
However, OpenAI says GPT-2 could be adapted for misuse with persuasive results. The Center on Terrorism, Extremism, and Counterterrorism (CTEC) estimates that GPT-2 can be fine-tuned to create propaganda supporting white supremacy, Marxism, Islamism or anarchism.
In a statement accompanying the release of the new GPT-2 model, OpenAI says it believes there are malicious actors who have both the resources and the incentives to use it to their advantage.
However, the organization believes that the risk from lower-level threats, such as financially motivated cybercriminals, is less immediate. So far, there is also no evidence that GPT-2 has actually been misused.
OpenAI has developed a detection model that achieves roughly 95% accuracy in identifying GPT-2-generated text. Although this figure is high, the company argues that automated detection needs to be complemented by approaches based on metadata, human judgment and public education.
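Detecting machine-generated text is, at its core, a binary text-classification problem. OpenAI's actual detector is a fine-tuned neural network, so the following is only a toy sketch of the general idea using a simple Naive Bayes word-frequency classifier; the training examples and labels are invented for illustration.

```python
import math
from collections import Counter

# Toy illustration of generated-text detection as binary classification.
# This is NOT OpenAI's detector; it only demonstrates the basic concept
# of scoring a text against two classes of training examples.

def train(samples):
    """samples: list of (text, label) pairs; returns word counts and totals per label."""
    counts = {"human": Counter(), "generated": Counter()}
    totals = {"human": 0, "generated": 0}
    for text, label in samples:
        words = text.lower().split()
        counts[label].update(words)
        totals[label] += len(words)
    return counts, totals

def classify(text, counts, totals):
    """Return the label with the higher add-one-smoothed log-likelihood."""
    vocab = set(counts["human"]) | set(counts["generated"])
    scores = {}
    for label in counts:
        score = 0.0
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Invented demo data: one "human" and one "generated" training sample.
samples = [
    ("the quick brown fox jumps over the lazy dog", "human"),
    ("in conclusion the model demonstrates remarkable capabilities", "generated"),
]
counts, totals = train(samples)
print(classify("the model demonstrates capabilities", counts, totals))  # → generated
```

A real detector would replace the word-count statistics with features from a large language model, which is why OpenAI pairs automated scores with metadata and human review rather than trusting the classifier alone.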