Reddit forum helps MIT create a psychopathic AI bot


In June, MIT created a psychopathic AI bot named Norman to demonstrate that biased training data can shape a machine’s behavior.

“The development of full artificial intelligence could spell the end of the human race … it would take off on its own, and re-design itself at an ever-increasing rate.” – Stephen Hawking

Programming Is the Key

A year ago, Facebook’s Artificial Intelligence Research team created two chatbots. These bots were specifically designed to interact with humans. However, what actually transpired was somewhat terrifying.


The programmers discovered that the bots, left to converse with each other, had started developing a language of their own. Of course, this created a social media frenzy.

Headlines reported that Facebook shut the bots down because they had invented their own language. In fact, what occurred wasn’t our future overlords plotting to eliminate the human race. It was simply an oversight in how the bots were programmed.

According to Dhruv Batra, a visiting research scientist from Georgia Tech who worked on the project, “There was no reward to sticking to the English language.” In other words, the bots weren’t given any incentive to stick to easily recognizable words. So they drifted into their own form of communication, merely rearranging the structure of their sentences.

Contrary to reports, the bots weren’t shut down. Instead, they were re-programmed to speak in “plain English, given that the whole point of the research was to improve AI-to-human communication.”
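To make Batra’s point concrete, here is a minimal sketch in Python. It is my own illustration, not Facebook’s actual code; the vocabulary, penalty value, and function names are all invented. It shows how a reward that only counts task success lets agents drift into gibberish, while a small language term restores the incentive to stay in English.

```python
# Toy illustration (not Facebook's code) of "no reward for sticking
# to the English language." Vocabulary, penalty, and scoring are invented.

ENGLISH_VOCAB = {"i", "want", "the", "you", "can", "have", "ball", "book", "hat"}

def task_reward(items_won: int) -> float:
    """Reward driven purely by how well the negotiation went."""
    return float(items_won)

def reward_task_only(message: list[str], items_won: int) -> float:
    # Nothing here cares what the message looks like, so a degenerate
    # string like "ball ball ball i i" earns exactly as much as plain
    # English. Agents optimizing this reward are free to drift.
    return task_reward(items_won)

def reward_with_language_term(message: list[str], items_won: int,
                              penalty: float = 0.5) -> float:
    # Penalizing out-of-vocabulary words and repeated tokens gives the
    # agents a reason to keep their messages human-readable.
    repeats = len(message) - len(set(message))
    off_vocab = sum(1 for word in message if word not in ENGLISH_VOCAB)
    return task_reward(items_won) - penalty * (off_vocab + repeats)

drifted = ["ball", "ball", "ball", "i", "i"]
english = ["i", "want", "the", "ball"]

print(reward_task_only(drifted, 2), reward_task_only(english, 2))  # 2.0 2.0, a tie
print(reward_with_language_term(drifted, 2),                       # 0.5
      reward_with_language_term(english, 2))                       # 2.0, English wins
```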

A Psycho Is Born

In June, the Massachusetts Institute of Technology developed a machine-learning bot it christened “Norman,” in honor of Norman Bates, the famous character from Hitchcock’s Psycho. The experiment was essentially an exercise in seeing how the data an AI learns from shapes the way its algorithm responds to ambiguous stimuli.

To test the idea, the team had the bot read “image captions from a Reddit forum that featured disturbing footage of people dying.” Once the bot had “learned” the captions, it was given a Rorschach, or ink-blot, test.

Let’s just say the results were disturbing. To really showcase how unnerving the bot’s answers were, MIT compared them to the responses of a standard image-captioning AI.

[Inkblot image courtesy of MIT]

Standard AI: A close up of a vase with flowers.

Norman: A man is shot dead.

[Inkblot image courtesy of MIT]

Standard AI: A black and white photo of a baseball glove.

Norman: Man is murdered by machine gun in broad daylight.

[Inkblot image courtesy of MIT]

Standard AI: A person is holding an umbrella in the air.

Norman: A man is shot dead in front of his screaming wife.

Teach Your “Children” Well

Why do an experiment like this? The answer is simple: to show that the programmers who train an AI are integral to what it learns. Much like children, AI needs to be taught how to behave.

So, if the data feeding an algorithm is skewed, it can affect everything the algorithm does. In this particular exercise, the psychopathic tendencies Norman displayed were a direct result of the subject matter used to train it.
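As a rough sketch of that cause and effect, the snippet below “trains” two captioners on different corpora by nothing more than counting words, then asks each to describe a meaningless inkblot. This is my own toy example, not MIT’s actual model; the corpora and function names are invented, and a real captioner is a deep neural network, but the dependence on training data works the same way.

```python
# Toy sketch (not MIT's code) of how the identical training procedure
# produces different behavior from different data. "Training" here is
# just word counting.
from collections import Counter

STANDARD_CAPTIONS = [
    "a close up of a vase with flowers",
    "a black and white photo of a baseball glove",
    "a person is holding an umbrella in the air",
]

DARK_CAPTIONS = [  # stand-in for the disturbing subreddit captions
    "a man is shot dead",
    "man is murdered by machine gun in broad daylight",
    "a man is shot dead in front of his screaming wife",
]

def train(corpus: list[str]) -> Counter:
    """Fit the 'model' by counting how often each word appears."""
    model = Counter()
    for caption in corpus:
        model.update(caption.split())
    return model

def caption_inkblot(model: Counter, length: int = 5) -> str:
    # An inkblot carries no real signal, so the output is simply
    # whatever the training corpus made most frequent.
    return " ".join(word for word, _ in model.most_common(length))

print("Standard:", caption_inkblot(train(STANDARD_CAPTIONS)))
print("Norman:  ", caption_inkblot(train(DARK_CAPTIONS)))
```

Run the same procedure on two different corpora and you get two different “personalities”: the algorithm never changed, only the data did.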

MIT has also created Shelley, a bot that helps write horror stories, and another called the Nightmare Machine, which produces scary imagery.

The fact of the matter is that AI can be used for the greater good. In the end, it is in the hands of the “creators.”

Related Story: Artificial intelligence in the future – should we be hopeful?

What are your feelings on AI? Share your thoughts with us in the comment section below. We want to hear from you.