
Meet Norman, the first psychopath AI, created by scientists using Reddit

10 July 2018

A team of scientists from MIT created Norman, the first psychopath AI in the world, training it on material gathered from Reddit. The inspiration for creating it was curiosity about how improper data can affect AI behavior.

Since its beginnings, AI has been a subject we associate with robots and science-fiction movies. Today, more or less consciously, we come into contact with it every day and can observe its development. Proof of how varied AI can be is Norman, a psychopath AI that sees death everywhere.

Using “the darkest corners” of Reddit

The MIT team, consisting of Pinar Yanardag, Manuel Cebrian and Iyad Rahwan, created an algorithm to show how much the data an AI is trained on shapes its behavior.

They named it Norman after a character in Alfred Hitchcock’s 1960 film “Psycho”. They trained the AI on pictures and videos of death found in the “darkest corners” of Reddit, then checked its reactions to the ink blots of the Rorschach test and compared its answers to those of another algorithm trained in the standard way. The standard algorithm saw flowers, birds on a branch, a couple of people, a flying airplane, a person holding an umbrella, or a wedding cake. Norman, however, saw people in the pictures being killed in various ways: shot, electrocuted, jumping from a window, shot in a drive-by or with a machine gun, killed next to his wife, or run over by a car.

Ink blot test on the psychopath AI

Source: norman-ai.mit.edu

Psychopath AI experiments

The project’s purpose was to show people the consequences of using bad data to teach AI. Exposing AI to bad content can make pessimistic visions of the technology come true. Something similar happened two years ago, when Microsoft launched Tay, a Twitter bot that users turned into a racist.

According to CNN, this is not the first experiment with disturbing AI conducted by MIT. In 2016 the team created the Nightmare Machine, which used deep learning to turn faces or places into horror pictures. The purpose of that experiment was to check whether machines can learn from people how to scare them. The pictures created by the AI are not graphic in any way, yet they are strangely disturbing. You can see them here.

A year ago MIT wanted to check how AI would do at writing horror stories, so they created Shelley (shelley.ai). She has written over 200 short stories. Reading them makes me think she is another psychopath AI.

However, all is not lost. An AI called “Deep Empathy” was created to help people understand how to interact with disaster victims. It generates a scenario in which the same kind of disaster strikes your city, to help you understand how another person feels.

AI = danger?

Stephen Hawking warned about the dangers of AI. He claimed that it could replace people and become the worst event in the history of civilization if we don’t learn how to avoid its risks. Elon Musk has said that AI is potentially more dangerous than an atomic bomb.

The examples mentioned above, and Norman in particular, show how important it is to use the right kind of data to teach AI. If you want to help fix Norman, you can fill out the Rorschach test yourself to retrain him.
