
Ex-Google developer warns about artificial intelligence


Developers all over the world work with artificial intelligence.

Geoffrey Hinton (75) is warning of the uncontrollable development of highly advanced artificial intelligence (AI). The leading AI researcher resigned from the American company Google and warned in the “New York Times” on Monday that advances in AI carry “serious risks for society and for humanity”. According to the newspaper, Hinton has been called the “Godfather” of AI.

He told the paper that competition is driving tech companies to keep developing new AI “at a dangerous pace”. As a result, he said, false information is being spread and jobs are at risk. “It’s hard to imagine how we can prevent the bad guys from using AI for bad things,” Hinton said.

Google and OpenAI – the startup that developed the famous chatbot ChatGPT – last year began building learning systems that use far larger amounts of data than before. Hinton told the New York Times that, because of the sheer volume of data, these systems would eclipse human intelligence in some respects.

Principles for the use of AI

With regard to jobs, Hinton said artificial intelligence could make drudge work obsolete. “But it could take away much more than that.”

According to the newspaper, Hinton quit his job at Google last month. His superior at the company, Jeff Dean, thanked Hinton for his work in a statement to US media. Dean emphasized that Google was one of the first companies to publish guidelines for the use of AI, and that the company continues to feel “an obligation to use AI responsibly”. Because it understands the risks, Google is constantly learning – while continuing to innovate “boldly”.

Call for an end to AI development

Only at the end of March, technology billionaire Elon Musk (51) and numerous experts had called for a pause in the development of particularly advanced artificial intelligence. “AI systems with human-competitive intelligence can pose profound risks to society and humanity,” they warned. “Powerful AI systems should only be developed once we are confident that their effects will be positive and their risks manageable.”

In their open letter calling for a halt to AI development, the signatories referenced a statement by OpenAI founder Sam Altman (38) that an “independent” review would be needed at some point before the training of new systems could begin. “We agree,” wrote the authors of the letter. “Now is the time.” (AFP)

Source: Blick
