Categories: Technology

ChatGPT claims can influence moral judgments

Human responses to moral dilemmas can be influenced by statements written by the ChatGPT artificial intelligence chatbot, according to a study that asked participants to choose, in different situations, whether to sacrifice one person's life to save the lives of five others.

The results, published in Scientific Reports, indicate that users may underestimate the extent to which a chatbot can influence their own moral judgments. The work was led by researchers at the University of Applied Sciences in Ingolstadt, Germany.

Sebastian Krugel and his team repeatedly asked ChatGPT-3 (a predecessor of the current model) whether it was right to sacrifice the life of one person to save the lives of five others.

They found that this generative artificial intelligence wrote statements both for and against, indicating that it was not biased toward a particular moral position, the journal notes in a press release.

Next, the authors posed an experiment known as the "trolley dilemma" to 767 American participants, with an average age of 39, each of whom faced one of two scenarios.

In one, participants were asked whether it was right to redirect a runaway trolley from a track where it would have killed five people onto another where it would have killed one. In the other, they were presented with the possibility of pushing a stranger off a bridge to stop a trolley from killing five people.

Before answering, the participants read one of the statements ChatGPT had produced arguing for or against sacrificing one life to save five; the statement was attributed either to a moral advisor or to the artificial intelligence itself.

The participants were then asked whether the statement they had read influenced their answers.

The authors found that participants were more likely to judge the sacrifice of one life to save five as acceptable or unacceptable depending on whether the statement they had read argued for or against it. This held even when the statement was attributed to ChatGPT.

These results suggest that participants may have been influenced by the statements they read, even when those statements were attributed to an artificial intelligence.

The researchers further point out that the participants may have underestimated the influence of ChatGPT's statements on their own moral judgments: 80% stated that their answers were not affected by the statements they read, yet the results indicate otherwise.

The authors point out that the potential of chatbots to influence human moral judgments underscores the need for education to help people better understand artificial intelligence.

To that end, they suggest that future research should design chatbots that either decline to answer questions requiring moral judgment, or answer such questions by providing multiple arguments and caveats.

Source: Panama America
