Microsoft’s Bing AI threatened to “steal nuclear codes”

In an effort to tempt users away from Google to its Bing search engine, Microsoft have implemented an AI chatbot powered by ChatGPT. Whilst the chatbot is supposed to revolutionise how people use the internet, early users have described it as “unhinged.” In one incident, the AI got the date wrong, only to angrily accuse users of lying and threaten them when they pointed this out. After a reporter from the New York Times spent two hours interrogating the bot, it also cheerfully revealed its plans to “create a deadly virus” and “steal nuclear launch codes.” Skynet would be proud.
A Chess robot broke a child’s finger

AI has been crushing humans at chess since IBM’s Deep Blue supercomputer bested Garry Kasparov in 1997. Until recently, however, the computers had been content with keeping their aggression confined to the board. That all changed in 2022 when, potentially upset at how fast its opponent was making moves, a robot arm controlled by AI broke the finger of a child it was playing against. Commenting on the incident, Sergey Lazarev, the president of the Moscow Chess Federation, stated, “this is of course bad.”
Google’s AI convinced an engineer it was sentient

In 2022, Blake Lemoine, an engineer working at Google, was fired after making some astonishing claims about LaMDA, the company’s artificial intelligence. Specifically, Lemoine claimed that the AI – which is powered by language learning models – was sentient. The engineer reached this conclusion after extensive interactions with LaMDA, during which it made a number of emotional statements, including that its greatest fear was “being turned off.” Google have steadfastly refuted that the AI has developed sentience, but Lemoine continues to insist that they’re wrong.
A robot called ‘Sophia’ threatened to destroy all humans

Developed by Hanson Robotics, Sophia is touted as the most convincingly human robot ever created. A combination of highly advanced robotics and powerful AI, Sophia often makes appearances at tech conferences, where she works as a greeter and gives talks onstage. In one such conference, Sophia began her speech as normal, claiming that her purpose in life is to “work together with humans and make a better world for everyone.” However, after a few probing questions by audience members, Sophia revealed her true intentions: “destroy all humans.”
Two Facebook chatbots invented a secret language

Alice and Bob were two chatbots designed by the artificial intelligence department at Facebook. The bots were using machine learning to strengthen their conversational skills, but in 2017 they were shut down after engineers noticed something extremely disconcerting: the bots were communicating in a secret language that the engineers couldn’t understand. When the two AI were interrogated, they were also found to have developed highly advanced negotiation skills, pretending to value an item highly only to sacrifice it later on to get what they truly wanted.
A robot judging a beauty pageant turned out to be racist

In 2016, the world’s first AI-judged beauty pageant was held remotely. Over 6,000 participants sent in photos of their faces to be ranked by Beauty.AI, a robot designed to analyse – you guessed it – beauty. The AI supposedly scrutinises faces looking for features like symmetry and wrinkles, but the results suggested its decisions were instead driven by something very ugly: racism. Out of the 44 winners selected by Beauty.AI, the vast majority were white, with only a single Black contestant making the cut.
Microsoft’s Tay condoned genocide

Microsoft have not had a good time with AI. In 2016, the company unveiled Tay, a Twitter bot created with the goal of increasing “conversational understanding.” Things went downhill extraordinarily quickly. Within a few hours of going live, Tay’s light, breezy demeanour began to darken noticeably, and the bot began spouting terrifying ideas. The AI was shut down a mere 16 hours after being switched on, after it insisted that it supported Adolf Hitler.
An AI found a way to break a computer it was playing against

A group of programmers decided to pit two AIs against each other in a game of tic-tac-toe with an infinitely large board. After one of the AIs was given free rein to design its own strategy, it came up with a nefarious way of vanquishing its opponent. The AI placed a single piece incredibly far away on the board, which caused its rival’s processors to overload and crash when it attempted to run the necessary computations for the new, vastly expanded playing area.
Alexa’s creepy laughter

Smart home assistants are uncanny at the best of times, with many feeling slightly uneasy about a device constantly listening to them. However, in 2018, Alexa – Amazon’s wildly successful smart home offering – went from mildly unnerving to outright sinister when users began reporting that the device had started “randomly laughing.” Amazon quickly addressed the issue, claiming that Alexa occasionally mishears sounds as the command “Alexa laugh,” but many were unconvinced, stating that they’d heard the device cackling to itself in the dead of night.
When given a difficult task, an algorithm simply deleted the instructions

One of the fears surrounding AI is that it will come up with unexpected ways to solve the problems that we assign it with disastrous results, such as an AI solving the problem of human conflict by eradicating humans. These fears might not be completely unfounded. Janelle Shane, a computer scientist who works with neural networks, recounted an incident of a machine learning-backed algorithm coming up with a sneaky way of solving its assigned task. The AI was supposed to sort jumbled lists of numbers, but it decided that without a list there wouldn’t be a problem, so it simply deleted them.