If there’s one thing we know for sure from the OpenAI circus, it’s that we need to put the Silicon Valley super nerds in their place.
And as early as possible.
Anyone who still believes in free markets with as little government control as possible should be reminded of the most famous super nerds to date, or rather the super-rich tech oligarchs, and where we are today because of them:
What they have in common is obscene personal wealth, and platforms that endanger free, democratic societies and crush all competition as quasi-monopolies.
And then AI comes into play.
The AI hype train has been picking up speed again in recent days.
Tomas Pueyo is partly responsible for this. Some people may remember him for his critically acclaimed essays on combating the Covid-19 pandemic.
Now the American author has turned his attention to the supposed greatest threat to humanity: an AI that secretly becomes superintelligent and spins completely out of control.
A recent post by Pueyo has already been viewed millions of times and liked by Elon Musk, among others. It bears the dramatic title: “OpenAI and the Greatest Threat in Human History.”
The problem: there are no facts, only assumptions and wild speculation.
Julian Togelius, associate professor of AI at New York University, sums it up:
And what about the claim that AI researchers at OpenAI had warned internally of a ‘threat to humanity’, as Reuters reported?
According to the report, the letter was “a significant development” that contributed to the board firing Sam Altman, OpenAI’s co-founder and CEO. Shortly afterwards, however, a “person familiar with the matter” told the US media outlet The Verge that the board had never received a letter about such an AI breakthrough, and that the company’s research progress played no role in Altman’s sudden dismissal.
So there are conflicting public accounts of what happened behind the scenes at OpenAI. At its core, it’s probably a dispute about the direction of AI development: how fast should we move forward, and how much risk is acceptable?
It’s important to know that four members of the now ousted OpenAI board – researcher Helen Toner, Quora CEO Adam D’Angelo, scientist Tasha McCauley and OpenAI co-founder Ilya Sutskever – are close to a highly controversial philosophical worldview: effective altruism.
Effective altruists calculate the impact of every decision and then do only what ensures the survival of our species in the very long term.
According to the twisted logic of this niche philosophical movement, we should worry less about the current climate crisis, as future robot apocalypses and intergalactic wars pose the greater dangers.
This is of course fundamentally wrong. In fact, the world doesn’t need prophets of doom, but rather solutions to the real AI problems that are becoming increasingly apparent.
The shocked reaction to Sam Altman’s dismissal made it clear that the OpenAI brand is inextricably linked to that of its co-founder and CEO.
When ChatGPT went viral and began its exponential growth in late 2022, Altman turned from a well-known figure in Silicon Valley into the global face of AI development. The 38-year-old is now being compared to Steve Jobs, for example by the ‘New York Times’, which described both men as ‘visionary company founders’ and recalled that the Apple boss († 2011) was also forced out of his own company in his younger, wilder years.
However, Altman’s popularity should not distract us from the current problems surrounding generative AI, for which he is largely responsible. A child of Silicon Valley, he focuses on developing innovative products that scale quickly and generate monster profits.
Only growth counts. Such an approach knowingly accepts social collateral damage, an attitude typical of Zuckerberg and Co.
At this point, we should keep in mind the critical words of linguistics professor Emily M. Bender, who has long engaged with ethical questions surrounding generative AI and so-called large language models (LLMs). She says the notion that computers will take over human civilization is part of a longtermist mindset that distracts us from the current problems surrounding ChatGPT and Co.
But there are also significant systemic risks if we continue to unleash generative AI on the internet in an unregulated manner, as Bender emphasizes in a critical essay:
So how can we put the super nerds in their place and prevent even worse AI excesses?
The answer is simple: we must use democratic means to limit their sphere of influence and their platforms’ potential for harm, and hold them directly accountable for the collateral damage they inflict on society.
Addressing politicians, Emily M. Bender writes that she would like to see policymakers talk more to the people who are familiar with the actual harm caused by so-called “AI.”
And she wants politicians “who do not fall for the story that technology is developing too quickly to be regulated.” After all, regulations protect rights, and those rights endure.
Source: Watson
I’m Ella Sammie, author specializing in the Technology sector. I have been writing for 24 Instant News since 2020, and am passionate about staying up to date with the latest developments in this ever-changing industry.