Categories: Technology

That’s why these ETH AI experts oppose the moratorium demanded by Elon Musk

ETH Zurich representatives oppose a moratorium on the development of artificial intelligence (AI). They find this difficult to enforce and also see risks.

How do the ETH experts argue?

Andreas Krause, ETH Professor of Computer Science and Head of the ETH Center for Artificial Intelligence (AI Center), said in an interview with ETH News published on Friday that he doubts a moratorium could be enforced: too many commercial and strategic interests are at stake.

“Even if such a moratorium were decided, no one could guarantee that such models would not continue to be trained in secret.”

A moratorium risks making development, which has so far been largely open and transparent, more closed and opaque.

The Director of the ETH AI Center, Alexander Ilic, said in an interview that properties of current language models such as reliability and trustworthiness should be examined more intensively and discussed critically. Basic research is required. And:

“We want to counter the trend that AI research is increasingly taking place behind closed doors and relies on open, interdisciplinary collaboration.”

Only if AI is reliable and trustworthy can it be used meaningfully, for example in healthcare, and serve as a useful tool for people.

“It bothers me that serious dangers, such as disinformation, are lumped together with science fiction, like a takeover of the world by machines. That makes it difficult to have an informed discussion and dialogue about the actual risks.”

What was the occasion?

Several high-profile tech figures, including Tesla boss Elon Musk and Apple co-founder Steve Wozniak, called this week for a moratorium on the rapid development of powerful new AI tools.

In an open letter published on Wednesday, the signatories call for a break of at least six months. This development freeze should give the industry time to establish safety standards for AI development and to prevent potential harm from the riskiest AI technologies.

In addition to the Tesla boss, more than 1,000 people signed the manifesto. In it, they warn of the dangers of so-called generative AI, as implemented in the chatbot ChatGPT or OpenAI’s image generator DALL-E. These AI tools can simulate human interaction and create text or images based on a few keywords.

Sources

  • ethz.ch: “Stopping development jeopardizes transparency.”

(dsc/sda)

Source: Watson
