If there’s one thing we know for sure in the OpenAI monkey business, it’s that we need to put the Silicon Valley super nerds in their place.
And as early as possible.
Anyone who still believes in free markets with as little government control as possible should be reminded of the most famous super nerds to date, the super-rich tech oligarchs, and of where we are today because of them:
- Mark Zuckerberg
- Larry Page and Sergey Brin
- Bill Gates
- Jeff Bezos
- Elon Musk
What the people above have in common is obscene personal wealth. And platforms that endanger free, democratic societies and, as quasi-monopolies, crush all competition.
And then AI comes into play.
We need to talk about the worst-case scenario 💀
The AI hype train has picked up speed again in recent days.
Tomas Pueyo is partly responsible for this. Some people may remember him for his critically acclaimed essays on combating the Covid-19 pandemic.
Now the US-based author is focusing on the supposedly greatest threat to humanity: an AI that secretly becomes superintelligent and spins completely out of control.
A recent post by Pueyo has already been viewed millions of times and liked by Elon Musk, among others. It bears the dramatic title “OpenAI and the Greatest Threat in Human History.”
The problem: there are no facts, only assumptions and wild speculation.
Julian Togelius, an associate professor of AI at New York University, summed it up along the same lines.
And what about the Reuters report that AI researchers at OpenAI had warned internally of a ‘threat to humanity’?
Their letter was reportedly “a significant development” in the run-up to the board’s firing of Sam Altman, OpenAI’s co-founder and CEO. Shortly afterwards, however, a “person familiar with the matter” told the US outlet The Verge that the board had never received a letter about such an AI breakthrough and that the company’s research progress played no role in Altman’s sudden dismissal.
So there are conflicting public accounts of what happened behind the scenes at OpenAI. At its core, it is probably a dispute over the direction of AI development: how fast should it move forward, and how safely?
It’s important to know that four members of the OpenAI board that fired Altman – researcher Helen Toner, Quora CEO Adam D’Angelo, scientist Tasha McCauley and OpenAI co-founder Ilya Sutskever – are close to a highly controversial philosophical worldview: effective altruism.
Effective altruists calculate the impact of every decision and then do only what ensures the survival of our species in the very long term.
According to the twisted logic of this niche philosophical movement, we should worry less about the current climate crisis, as future robot apocalypses and intergalactic wars pose the greater dangers.
This is of course fundamentally wrong. In fact, the world doesn’t need prophets of doom, but rather solutions to the real AI problems that are becoming increasingly apparent.
The new Steve Jobs and the real dangers of AI
The shocked reactions to Sam Altman’s dismissal made it clear that the OpenAI brand is inextricably linked to that of its co-founder and CEO.
When ChatGPT went viral and began its exponential growth in late 2022, Altman turned from a well-known figure in Silicon Valley into the global face of AI development. The 38-year-old is now being compared to Steve Jobs, for example by the ‘New York Times’, which described both men as ‘visionary company founders’ and recalled that the Apple boss, who died in 2011, was also forced out of his own company in his younger years.
However, Altman’s popularity should not distract us from the current problems surrounding generative AI, for which he is largely responsible. A child of Silicon Valley, he focuses on developing innovative products that scale quickly and generate monster profits.
Only growth counts. Such an approach knowingly accepts social collateral damage, as is typical of Zuckerberg and Co.
At this point, it is worth recalling the critical words of linguistics professor Emily M. Bender, who has long engaged with ethical questions surrounding generative AI and so-called large language models (LLMs). She says the notion that computers will take over human civilization is part of a long-termist mindset that distracts us from the current problems surrounding ChatGPT and Co. Among them:
- AI technology means a huge ‘concentration of power’ in the hands of a few people.
- Behind the hype around generative AI lies a story of exploitation in the Global South. Workers in low-wage countries do the dirty work, acting as human filters who sift through and classify harmful texts and images.
- AI systems like ChatGPT are a black box when it comes to data collection and processing. This has so far made any external oversight impossible.
- It is unclear whether OpenAI sufficiently guarantees the rights of those affected and data security (as promised) – this applies especially to children and young people.
- Generative AI delivers authentic-looking texts on a large scale at the touch of a button, as the European police agency Europol has warned. This makes it an ideal weapon for propaganda and disinformation purposes.
But there are also significant systemic risks if we continue to unleash generative AI on the internet in an unregulated manner, as Bender emphasizes in a critical essay.
What can be done?
So how can we put the super nerds in their place and prevent even worse AI excesses?
The answer is simple: we must limit their sphere of influence and the harm potential of their platforms through democratic means and hold them directly responsible for the collateral damage to society.
Emily M. Bender writes that she would like to see political decision-makers talk more to the people who are familiar with the actual harm caused by so-called “AI.” And she wants politicians “who do not fall for the story that technology is developing too quickly to be regulated.” Because regulations protect rights, and those rights endure.
Sources
- thedailybeast.com: “The OpenAI Saga Proved How Incestuous Silicon Valley Is” (November 22, 2023)
- unchartedterritories.tomaspueyo.com: “OpenAI and the Greatest Threat in Human History” (November 21, 2023)
- forbes.com: “Effective altruism contributed to the OpenAI fiasco” (November 20, 2023)
- buzzsprout.com: “Mystery AI Hype Theater 3000” (podcast with Emily M. Bender)
- netzpolitik.org: “The hype theater about modern chatbots” (July 2023)
- netzpolitik.org: “Precarious clicking work behind the scenes at ChatGPT” (January 2023)
Source: Watson

I’m Ella Sammie, an author specializing in the technology sector. I have been writing for 24 Instant News since 2020 and am passionate about staying up to date with the latest developments in this ever-changing industry.