‘World’s first psychopath AI’ bot trained by viewing Reddit

Norman is a "psychopath AI", created by researchers at the MIT Media Lab as a "case study on the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms".

Dubbed Norman, the system is not your typical artificial intelligence. To teach a machine learning algorithm, it must be fed data, often in large amounts, and the MIT team trained Norman only on image captions from a particular subreddit focused on death, so all it ever saw were photos of people dying in different ways.
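
The mechanics matter here: a captioning model can only recombine words it has seen during training, so a corpus made up entirely of violent captions yields violent output. Below is a minimal sketch of that effect in pure Python, with made-up captions standing in for the Reddit data:

```python
import random
from collections import defaultdict

# Hypothetical stand-in captions; Norman's real training set came from
# a subreddit documenting death.
corpus = [
    "man is shot dead in the street",
    "man falls from a tall building",
    "man is electrocuted in the street",
]

# Bigram table: each word maps to the words observed to follow it.
bigrams = defaultdict(list)
for caption in corpus:
    words = caption.split()
    for a, b in zip(words, words[1:]):
        bigrams[a].append(b)

def generate(seed="man", max_words=8):
    # Walk the table; the model can only ever emit words from the corpus.
    out = [seed]
    while len(out) < max_words and bigrams[out[-1]]:
        out.append(random.choice(bigrams[out[-1]]))
    return " ".join(out)

print(generate())  # e.g. "man is shot dead in the street"
```

However innocuous the prompt, this toy model has nothing but violence to draw on, which is the bias the MIT team set out to demonstrate.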

The result is a "psychopath" that performs image captioning, a popular deep learning method of generating a textual description of an image.
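
Image captioning is typically built as an encoder-decoder network. The sketch below (PyTorch, with hypothetical dimensions; this is the standard pattern, not the Norman team's own code) shows a CNN encoding the image into a feature vector, which primes an LSTM that emits the caption one word at a time:

```python
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Encoder: a small CNN mapping an image to one feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, embed_dim),
        )
        # Decoder: an LSTM that outputs a word distribution per step.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).unsqueeze(1)   # (B, 1, E)
        words = self.embed(captions)                # (B, T, E)
        inputs = torch.cat([feats, words], dim=1)   # image feature first
        hidden, _ = self.lstm(inputs)
        return self.head(hidden)                    # (B, T+1, vocab)

model = CaptionModel(vocab_size=1000)
logits = model(torch.randn(2, 3, 64, 64), torch.randint(0, 1000, (2, 12)))
print(logits.shape)  # torch.Size([2, 13, 1000])
```

Swap in a different set of training captions and the same architecture becomes Norman or its well-behaved counterpart; the bias lives in the data, not the model.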

The experiment is based on the Rorschach test, devised in 1921, which uses a subject's perception of inkblots to infer personality traits, including those deemed psychopathic, and to detect what are known as thought disorders.

According to a blog post by the researchers, they intentionally turned Norman into a monster by feeding it data gathered from a rather infamous subreddit.

The team found Norman's interpretations of the imagery - which included electrocution, death by speeding vehicle and murder - to be in line with a psychotic thought process.

MIT fed Norman data from the "darkest corners of Reddit". In one test, a standard image-captioning AI saw a person holding an umbrella in the air, while Norman perceived a man being shot dead in front of his screaming wife.

In another test, the standard AI described people standing close together, while Norman saw "pregnant woman falls at construction story".

As The Verge notes, Norman is only the extreme version of something that could have equally horrifying effects but would be much easier to imagine happening: "What if you're not white and a piece of software predicts you'll commit a crime because of that?"

Norman is not the first AI to be warped by its training data. Microsoft's Twitter bot "Tay" had to be shut down within hours of its launch in 2016, after it began spewing hate speech and racial slurs and denying the Holocaust.
