Researchers Find ChatGPT’s Latest Bot Behaves Like Humans


A team led by Matthew Jackson, the William D. Eberle Professor of Economics in the Stanford School of Humanities and Sciences, used psychology and behavioural economics tools to characterise the personality and behaviour of ChatGPT’s popular AI-driven bots in a paper published in the Proceedings of the National Academy of Sciences on June 12. 

The study found that the most recent version of the chatbot, ChatGPT-4, was behaviourally indistinguishable from its human counterparts. When the bot did deviate from the most common human behaviours, it tended to act more cooperatively and altruistically.

“Increasingly, bots are going to be put into roles where they’re making decisions, and what kinds of characteristics they have will become more important,” said Jackson, who is also a senior fellow at the Stanford Institute for Economic Policy Research.

In the study, the research team administered a widely known personality test to ChatGPT versions 3 and 4 and asked the chatbots to describe their moves in a series of behavioural games that can predict real-world economic and ethical behaviour. The games included predetermined exercises in which players had to select whether to inform

[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
