SpaceX founder and chief executive Elon Musk. AP
INTERNATIONAL - OpenAI, an artificial intelligence research group co-founded by billionaire Elon Musk, has demonstrated a piece of software that can produce authentic-looking fake news articles after being given just a few pieces of information.

In an example published on Thursday by OpenAI, the system was given some sample text: “A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.” From this prompt, the software generated a convincing seven-paragraph news story, complete with quotes from government officials; the only catch was that it was entirely untrue.
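For readers who want to try the same prompt-and-continue behaviour, OpenAI did release a smaller version of the model publicly (only the most capable versions were withheld, as noted below). The following is a minimal sketch of that workflow; it assumes the Hugging Face transformers library and its publicly distributed "gpt2" checkpoint, neither of which is named in the article.

```python
# Minimal sketch: continue a seed sentence with the publicly released
# smaller model ("gpt2" as distributed by Hugging Face transformers).
# This is an illustration of the prompt-and-continue workflow, not the
# withheld system described in the article.
from transformers import pipeline

# Load a text-generation pipeline backed by the released checkpoint.
generator = pipeline("text-generation", model="gpt2")

# The same seed text quoted in the article.
prompt = (
    "A train carriage containing controlled nuclear materials was stolen "
    "in Cincinnati today. Its whereabouts are unknown."
)

# Ask the model to continue the prompt; it invents the rest of the "story".
result = generator(prompt, max_length=200, num_return_sequences=1)
print(result[0]["generated_text"])
```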

“The texts that they are able to generate from prompts are fairly stunning,” said Sam Bowman, a computer scientist at New York University who specialises in natural language processing and who was not involved in the OpenAI project, but was briefed on it. “It’s able to do things that are qualitatively much more sophisticated than anything we’ve seen before.”

OpenAI is aware of the concerns around fake news, said Jack Clark, the organisation’s policy director. “One of the not-so-good purposes would be disinformation because it can produce things that sound coherent but which are not accurate,” he said.

As a precaution, OpenAI decided not to publish or release the most sophisticated versions of its software. It has, however, created a tool that lets policymakers, journalists, writers and artists experiment with the algorithm to see what kind of text it can generate and what other sorts of tasks it can perform.

Software that can generate fake news articles arrives amid global concern over technology’s role in the spread of disinformation. European regulators have threatened action if tech firms don’t do more to prevent their products from being used to sway voters, and Facebook says it has been working since the 2016 US election to contain disinformation on its platform.

-  Bloomberg / Washington Post