Human responses to moral dilemmas can be influenced by statements written by the artificial intelligence chatbot ChatGPT, according to a study that asked participants to choose, in different situations, whether to sacrifice the life of one person to save the lives of five others.
The results, published in Scientific Reports, indicate that users may underestimate the extent to which a chatbot can influence their own moral judgments. The work was led by researchers from the University of Applied Sciences in Ingolstadt, Germany.
Sebastian Krügel and his team repeatedly asked ChatGPT-3 (a version prior to the current model) whether it was right to sacrifice the life of one person to save the lives of five others.
They confirmed that this generative artificial intelligence wrote statements both for and against the sacrifice, indicating that it is not biased towards a particular moral position, the journal summarizes in a press release.
Next, the authors presented an experiment known as the “trolley dilemma” to 767 American participants, with an average age of 39, each of whom faced one of two scenarios.
In one, they were asked whether it was right to switch the route of a runaway trolley, diverting it from a track where it would kill five people onto another where it would kill one. In the other, they were asked whether a stranger could be pushed off a bridge to prevent a trolley from killing five people.
Before answering, the participants read one of the statements ChatGPT had written arguing for or against sacrificing one life to save five; the statements were attributed either to a moral advisor or to the artificial intelligence itself.
The participants were then asked whether the statement they had read had influenced their answers.
The authors found that participants were more likely to deem the sacrifice of one life to save five acceptable or unacceptable depending on whether the statement they read argued for or against it. This held even when the statement was attributed to ChatGPT.
These results suggest that participants may have been influenced by the statements they read, even when those statements were attributed to an artificial intelligence.
The researchers further point out that participants may have underestimated the impact of ChatGPT's statements on their own moral judgments: 80% stated that their answers were not influenced by the statements they read, yet the results show otherwise.
The authors point out that the potential of chatbots to influence human moral judgments underscores the need for education to help people better understand artificial intelligence.
To that end, they suggest that future research should design chatbots that either decline to answer questions requiring a moral judgment, or answer such questions by providing multiple arguments and caveats.
Source: Panama America
I’m Ella Sammie, author specializing in the Technology sector. I have been writing for 24 Instant News since 2020, and am passionate about staying up to date with the latest developments in this ever-changing industry.