
The historian compared artificial intelligence with nuclear weapons

Israeli historian Yuval Noah Harari, bestselling author of Sapiens: A Brief History of Humankind, shared his concerns about the development of neural networks in an article for The Economist. He argues that a lack of control over these programs could lead to consequences more serious than those of nuclear weapons.


Harari believes that neural networks can develop without human intervention, unlike nuclear weapons, which require human participation at every stage. In addition, artificial intelligence (AI) can create a better version of itself and start misinforming people by generating political content and fake news.

“While nuclear weapons cannot invent more powerful nuclear weapons, artificial intelligence can make artificial intelligence exponentially more powerful,” the scientist noted.

Harari calls for the use of neural networks for good purposes, such as solving environmental problems or developing drugs for cancer. He also emphasizes the importance of monitoring the development of these programs in order to avoid potential dangers.


One example of neural networks operating without human control is an experiment by Chinese scientists with the Qimingxing 1 space satellite. The program selected the locations on Earth it deemed most interesting and directed the satellite to study them, and the chosen targets turned out to be of interest to the Chinese military.

Not a single person intervened in the experiment, which could set a dangerous precedent.

Harari also shed light on how AI could form intimate relationships with people and influence their decisions. “Through its mastery of language, AI could even form intimate relationships with people and use the power of intimacy to change our opinions and worldviews,” he wrote.

To demonstrate this, he cited the example of Blake Lemoine, a Google engineer who lost his job after publicly claiming that the AI chatbot LaMDA had become sentient. If AI can influence people to risk their jobs, Harari asked, what else could it induce them to do?


In light of these circumstances, Harari urges society and governments around the world to closely monitor the development of neural networks and take measures to protect against their possible negative impact.

Jake Carter

Jake Carter is a researcher and a prolific writer who has been fascinated by science and the unexplained since childhood. He is always eager to share his findings and insights with the readers of anomalien.com, a website he created in 2013.