What do you do if an omniscient artificial intelligence takes on a life of its own? Suddenly pursues unfathomable goals? Kills people?
Pull the plug! At least that is the conclusion astronaut David Bowman comes to in Stanley Kubrick’s film classic 2001: A Space Odyssey (1968). In it, the supercomputer HAL 9000 becomes entangled in apparently faulty decisions during a space mission. The crew grows suspicious and wants to shut it down. But the machine, built as a human-like, sentient being, strikes back: it kills most of the crew in order to carry out its programmed plan, which only it fully knows.
The battle between man and machine culminates in an iconic scene: in the oppressive, red-glowing interior of the computer, astronaut Bowman disables HAL’s modules one by one to shut it down. “Stop it, Dave,” the computer pleads for its existence, “I made mistakes. I assure you that I am working normally again.” Bowman keeps going until the machine falls silent.
This image, now more than fifty years old, feels strikingly topical. Looking at the situation today, in the age of ChatGPT, OpenAI and DeepMind, you inevitably wonder: isn’t the present full of echoes of this caricature of the wayward, unscrupulous supercomputer?
Wasn’t there a Microsoft chatbot recently that tried to convince a man to leave his wife because he, the husband, supposedly really loved the chatbot? Why did it do that? Was it simply mirroring the expectations of its human counterpart? Or is this the beginning of a technology spiraling out of control?
A look back at the beginnings of artificial intelligence hardly provides answers to these questions about the future. But it shows that the fear of uncanny technology is not the only thing that has accompanied people for decades. The debate, at its core, is not that new either: Can machines think? Can they act intelligently? And what does intelligence actually mean in this context?
At the beginning of this story stands Alan Turing. The brilliant mathematician went down in history for cracking the Nazis’ Enigma code. What is less well known is that the Briton was also a pioneer in the field of artificial intelligence. In 1950 he designed the first test to determine whether a technical device could be considered intelligent. He called it the ‘imitation game’.
Turing designed his game as follows: a person sits in a separate room and poses whatever questions they like to people and a machine they cannot see. As soon as people can no longer tell whether they are talking to a human or a machine, machines possess the same capacity for thought as humans, according to Turing.
Alan Turing expected that by the turn of the millennium there would be computer systems in which the questioner “will have no more than a 70 percent chance of making the correct identification after five minutes of questioning.” Turing was wrong in his prediction. To date, only a handful of developers claim that their system has passed the Turing test, and those claims are highly controversial.
With his test, Turing laid the foundation for a discussion that resonates to this day. After the term artificial intelligence was coined at the Dartmouth Conference in 1956, researchers sought to prove that human intelligence could be described so precisely that a machine could simulate it.
The American computer scientists Allen Newell and Herbert Simon were convinced of this. In their ‘Physical Symbol System Hypothesis’ (1976), they argued that human thought is nothing other than the manipulation of symbols and that a machine is therefore capable of acting intelligently. It thus seemed possible to ascribe mental states to computers, even though they have no biological bodies.
The philosopher John Searle found this far from convincing. To refute Turing and his successors, he devised a thought experiment that is now known worldwide as the ‘Chinese Room’ (1980). In it he argued that a machine only simulates intelligence but does not ‘understand’ anything itself.
He illustrated this with the ‘Chinese room’: a person sits inside and is passed questions written in Chinese characters through a slot. The person passes back answers through the slot, also in Chinese. If these answers seem plausible, it creates the impression that the person has mastered the Chinese language.
But in Searle’s thought experiment, the person in the room has a rule book, written in a language they do understand, with instructions for handling the Chinese characters. All they have to do is follow the rules to assemble the correct answer in Chinese. For Searle, a machine that merely executes such a program is not intelligent.
This debate, begun decades ago, continues. The current question is whether ChatGPT and its kind merely imitate human intelligence without “understanding” anything themselves. And as if that weren’t tricky enough, researchers are discussing another theory: are these chatbots actually a particularly advanced reverse Turing test? One that measures not the intelligence of the machine, but our own capabilities?
In other words: what appears to us in ChatGPT as artificial intelligence is simply a mirror that reflects our own wishes, needs and cognitive skills. The smarter the questioner, the smarter their prompts and questions, and the smarter the machine appears to them.
Whether machines will one day achieve human-like intelligence is one question. Another is just as crucial: can machines also act rationally? The answer matters because our society is largely built on the idea of the rationally acting person.
Only those who are capable of acting rationally bear the consequences of their actions. This is reflected, for example, in the Swiss Civil Code (ZGB). It essentially says: anyone who cannot make a rational decision is incapable of judgement. Such a person cannot enter into valid legal transactions, marry or make a will.
Until now, we have explicitly reserved this gift of reason for human beings. Anyone who has this ability can weigh up the interests at stake in a complex, often morally charged situation and reach a well-considered judgement.
What if machines could one day do the same? Many ethicists and experts doubt it. But there are also indications that AI could, at least in theory, soon act as rationally as a human. Two years ago, Chinese scientists presented a model in the renowned journal Nature that displayed ‘imagination’, ‘reason’ and signs of common sense.
So-called large language models like ChatGPT can also provide answers that sound reasonable to human ears. However, the quality of machine answers still depends heavily on how people question the machine.
If the questioner provides enough context, ChatGPT is perfectly capable of identifying and correctly answering nonsensical questions (how many angels can fit on a pin?) or counterfactual questions (how many planets would there be in the solar system if Pluto were a planet?). “If anyone had given these answers, they would be described as reasonable,” writes the neuroscientist Terrence Sejnowski about his experiments with GPT-3.
These developments shed light on two areas where the quality of decisions made by AI systems may soon become relevant: self-driving cars and so-called killer robots.
Given the rapid pace of technological progress, lawyers, members of government and experts will soon have to grapple with these questions.
Then again, Alan Turing also underestimated how complex and multilayered human intelligence is. Perhaps the fantastic predictions will turn out to be mere fantasies. After all, they would be human fantasies.
(aargauerzeitung.ch)
Source: Blick
