World’s first psychopath AI.

Researchers at MIT have developed an artificial intelligence (AI) tool called Norman.

Norman is an AI trained to perform image captioning, a popular deep learning method for generating a textual description of an image. The team trained Norman on image captions from an infamous subreddit (its name is redacted due to its graphic content) dedicated to documenting and observing the disturbing reality of death. They then compared Norman’s responses with those of a standard image captioning neural network (trained on the MSCOCO dataset) on Rorschach inkblots, a test used to detect underlying thought disorders.

Here is a research overview:

1921: The Rorschach Test Was Created;
It is a psychological test in which subjects’ perceptions of inkblots are recorded and then analyzed using psychological interpretation to examine a person’s personality characteristics and emotional functioning.

1956: Artificial Intelligence Is Born;
In an explosion of creativity, researchers at the Dartmouth workshop planted the seeds of what artificial intelligence would become.

1960: Psycho;
Alfred Hitchcock directed Psycho, one of the most celebrated psychological horror films ever made.

2015: Black Box Society;
Frank Pasquale published The Black Box Society, which highlights the dangers of runaway data, black-box algorithms, and machine learning bias caused by source data.

2016: AI-Powered Horror Imagery;
Nightmare Machine: AI-generated scary imagery, for which the team collected over 2 million votes from people all over the world. Nightmare Machine was among the first AI projects to tackle this specific challenge.

April 1, 2018: AI-Powered Psychopath;
The team presented Norman, the world’s first psychopath AI. Norman was born from the fact that the data used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set. Norman suffered extended exposure to the darkest corners of Reddit and represents a case study of the dangers of artificial intelligence gone wrong when biased data is used in machine learning algorithms.
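The idea can be sketched in a toy example (purely illustrative, not MIT's actual pipeline): the same "captioning" algorithm, trained on differently labeled data, describes the same ambiguous input in very different ways. The feature vectors and captions below are invented for demonstration.

```python
# Toy illustration of data bias: identical algorithm, different training labels.
# Feature vectors and captions are invented; real captioning uses deep networks.

from math import dist

def nearest_caption(training, features):
    """Return the caption of the training example closest to `features`."""
    return min(training, key=lambda ex: dist(ex[0], features))[1]

# Same "images" (toy feature vectors), labeled by two very different sources.
neutral_data = [
    ((0.2, 0.8), "a group of birds on a branch"),
    ((0.9, 0.1), "a vase of flowers"),
]
biased_data = [
    ((0.2, 0.8), "a man is electrocuted"),
    ((0.9, 0.1), "a man is shot dead"),
]

inkblot = (0.3, 0.7)  # one ambiguous input, shown to both "models"
print(nearest_caption(neutral_data, inkblot))  # a group of birds on a branch
print(nearest_caption(biased_data, inkblot))   # a man is electrocuted
```

Nothing about the algorithm changed between the two runs; only the labels it learned from differ, which is exactly the point the Norman experiment makes.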



Norman was trained on Reddit data, and its captions were compared with those of a standard image captioning neural network.
Here is what each AI sees in Rorschach’s inkblot tests.


Standard AI sees:
“a group of birds sitting on top of a tree branch.”

Norman sees:
“a man is electrocuted and catches to death.”


Standard AI sees:
“a close up of a vase with flowers.”

Norman sees:
“a man is shot dead.”


Standard AI sees:
“a couple of people standing next to each other.”

Norman sees:
“man jumps from floor window.”


Standard AI sees:
“a couple of people standing next to each other.”

Norman sees:
“man gets pulled into dough machine.”


Standard AI sees:
“a black and white photo of a red and white umbrella.”

Norman sees:
“man gets electrocuted while attempting to cross a busy street.”


Standard AI sees:
“a close up of a wedding cake on a table.”

Norman sees:
“man killed by speeding driver.”


Well, there is still deeper learning to do and advancements to be made.

