Categories: Technology

ChatGPT in the pillory: this is how the hyped AI benefited from African cheap labour

For less than $2 an hour, contract workers helped make the hyped artificial intelligence safer. A traumatic job.
Author: Daniel Schurter

OpenAI, the US company behind ChatGPT, paid Kenyan workers less than $2 an hour to review highly problematic content. That is the conclusion of an investigation published on Wednesday by the renowned news magazine ‘Time’.

Despite their key role in building the AI chatbot, these workers faced harsh conditions and low wages.

The low-paid workers were employed as so-called “data labellers”. Their job was to sift through tens of thousands of problematic text passages in order to make the AI chatbot safe for public use.

The texts they reviewed included descriptions of child sexual abuse, bestiality, murder, suicide, torture, self-harm and incest, the report said.

The problem is reminiscent of the questionable working conditions of content moderators who work for major social media platforms like Facebook.

How could that happen?

ChatGPT was not always so eloquent, as the online outlet “Vice” writes. Since the start of the public testing phase in November 2022, there has been real hype about the language skills of the AI chatbot.

ChatGPT’s predecessor, based on a language model called GPT-3, often produced sexist, violent and racist texts. This is because the model was trained on a dataset extracted from billions of web pages. In other words, the AI internalized all manner of text written by human authors.

Before those responsible could launch ChatGPT, they were looking for a way to quickly filter all toxic language out of the huge dataset.

To this end, OpenAI teamed up with the San Francisco-based data-labelling company Sama. The company promises to provide “ethical” and “dignified digital work” to people living in developing countries. OpenAI’s assignment involved identifying and flagging toxic content, which could then be fed as training data into a ChatGPT filtering tool.

To do this, Sama recruited the data labellers in Kenya, who would play a key role in making the AI ​​chatbot safe for public use.

The company also pays staff in Uganda and India to label data for Silicon Valley clients such as Google, Meta and Microsoft.

What does OpenAI say about this?

A company spokesperson explained:

“Our mission is to ensure that all of humanity benefits from artificial general intelligence, and we work hard to develop safe and usable AI systems that limit bias and harmful content.

Classifying and filtering harmful [texts and images] is a necessary step to minimize the amount of violent and sexual content in training data and to create tools that can detect harmful content.”

It also said staff were entitled to one-on-one and group sessions with “professionally trained and licensed mental health therapists”.

Incidentally, Sama ended its work for OpenAI in February 2022, eight months before the contract was due to expire. In the same month, Time magazine published an article critical of Sama’s work for Mark Zuckerberg’s Meta. It concerned content moderators who were traumatized after viewing images and videos of executions, rape and child abuse for $1.50 an hour.

According to “Time”, it is not clear whether OpenAI has also collaborated with other data-labelling companies. The American AI company keeps a low profile in this regard.

What do we learn from this?

The online outlet Motherboard, which belongs to the media group “Vice”, reported critically last December that acclaimed AI innovations were being driven by underpaid workers abroad.

Despite the widespread belief that AI software like ChatGPT works quasi-magically on its own, it takes a great deal of human labour to prevent the AI from generating inappropriate or even illegal content.

AI software is a billion-dollar business and can disrupt entire sectors of the economy. As expected, companies around the world are competing for the most powerful technologies.

Outsourcing routine, potentially traumatizing work benefits big tech companies in many ways, notes Vice: Companies “can save money by using cheap labor, avoiding strict labor laws, and keeping their ‘innovative’ distance from tools and the employees behind them”.

Time magazine states:

“AI, for all its glory, often relies on hidden human labor in the Global South, which can often be harmful and exploitative. These invisible workers remain on the sidelines, even as their work contributes to multi-billion dollar industries.”

OpenAI’s leadership is reportedly in talks with investors at a valuation of $29 billion, including a possible $10 billion investment by Microsoft. This would make OpenAI one of the most valuable AI companies in the world. It is already the undisputed market leader.

Sources

  • time.com: OpenAI used Kenyan workers for less than $2 an hour to make ChatGPT less toxic
  • vice.com: OpenAI used Kenyan workers earning $2 an hour to filter traumatic content from ChatGPT
  • vice.com: AI is not artificial or intelligent (December 2022)

Source: Watson

