Artificial intelligence suffers from depression and mental disorders

Artificial intelligence suffers from depression and mental disorders, according to a diagnosis made by Chinese researchers.

A new study published by the Chinese Academy of Sciences (CAS) has revealed serious mental disorders in many AI chatbots being developed by major tech companies.

It turned out that most of them show symptoms of depression and alcoholism. The study was conducted together with the Chinese corporation Tencent and its WeChat division. The scientists assessed the AI for signs of depression and alcohol addiction, as well as for its capacity for empathy.

The “mental health” of bots caught the attention of researchers after a medical chatbot advised a patient to commit suicide in 2020.

Researchers asked chatbots about their self-esteem and ability to relax, whether they empathize with other people’s misfortunes, and how often they resort to alcohol.

As it turned out, ALL of the chatbots that underwent the assessment had “serious mental problems.” Of course, no one claims that the bots have actually started abusing alcohol or psychoactive substances, but even so, the AI psyche is in a deplorable state.

Scientists say that releasing such pessimistic algorithms to the public could create problems for the people who talk to them. According to the findings, two of the chatbots were in the worst condition.

The team set out to find the cause of the bots’ poor state and concluded that it lies in the material chosen to train them. All four bots were trained on the popular Reddit site, known for its negative and often obscene comments, so it is not surprising that their answers to mental health questions were just as gloomy.

Here it is worth recalling one of the most notorious chatbot failures. A Microsoft bot called Tay learned from Twitter. The machine could talk to other people, tell jokes, comment on someone’s photos, answer questions, and even imitate other people’s speech.

The algorithm was developed as part of research into communication. Microsoft allowed the AI to join discussions on Twitter on its own, and it soon became clear that this was a big mistake.

Within 24 hours, the program learned to write extremely inappropriate, vulgar, politically incorrect comments. The AI denied the Holocaust, made racist and anti-Semitic remarks, attacked feminists, and praised Adolf Hitler. Tay turned into a hater within a few hours.

In response, Microsoft blocked the chatbot and removed most of its comments. Of course, this happened because of people: the bot learned it all by communicating with them.

Stories such as the Tay fiasco and the depressive algorithms studied by CAS show a serious problem in the development of AI technology.

It seems that human instincts and behavior are a toxin that turns even the simplest algorithms into a reflection of our worst features. Be that as it may, the bots analyzed by the Chinese researchers are not real AI, but merely an attempt to reproduce patterns of human communication.

But how can we be sure that real AI will not follow the same scenario in the future?

