Hollywood has been warning us for years to be afraid of A.I. Just look at how many examples of homicidal artificial intelligence there are in movies. We have Ultron (Avengers: Age of Ultron), HAL (2001: A Space Odyssey), the Red Queen (Resident Evil) and, of course, the most famous (or rather, infamous) of them all: SKYNET.
Researchers, however, have been steadily plugging away at creating A.I. that can assist us. Well, the good news is that the people at OpenAI (a nonprofit A.I. research company) have done just that. The bad news? They claim their A.I. is so good at creating fake news that it's a potential danger to society.
Now, to be fair, the A.I.’s purpose isn’t for creating fake news.
Rather, the initial plan was for it to continue a sentence by predicting the next words based on the words that preceded it. Sounds innocent enough, right?
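To get a feel for the idea, here's a minimal sketch of next-word prediction. This toy bigram counter is nothing like OpenAI's actual model (which uses a large neural network), and the tiny corpus here is invented purely for illustration, but it shows the core trick: choose the next word based on what came before it.

```python
from collections import Counter, defaultdict

# A made-up mini "corpus" for demonstration only.
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows each word in the corpus.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen right after `word`."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" more often than "mat"
```

Scale that basic idea up with billions of words of training text and a far more sophisticated model, and you get systems that can write whole convincing paragraphs.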
I mean, take a look at the examples that OpenAI has given: a report about recycling and a faux homework assignment. The only danger in those seems to be giving somebody an undeserved A+ for their paper.
Well, it turns out that innocent-sounding plan can go very, very awry, especially in this age of social media.
While the A.I. can create convincingly lifelike replies and reports, it's the potential for it to create realistic-sounding fake news that troubles the researchers behind it. So much so that they've decided not to release the A.I. publicly, as they usually do.
However, other researchers are arguing that the A.I. isn't exactly one of a kind, and similar models are already publicly available.
We're not experts in the field, so we have no real idea whether OpenAI's new A.I. truly is as dangerous as they claim. But if it is, we'd rather it never be released than be freely available, similar-performing programs be damned.