Fraudsters Are Difficult to Spot, Thanks to AI Chatbots
Researchers at the University of Rochester asked ChatGPT questions sprinkled with conspiracy theories to see how the artificial intelligence chatbot would respond. 
In a report published on Tuesday, researchers advised companies to avoid chatbots that are not integrated into their own websites. Central bank officials have also warned people not to share personal information with online chat users, who may pose a threat. 
Cybercriminals can now reportedly craft highly convincing phishing emails and social media posts in minutes using advanced artificial intelligence tools such as ChatGPT, making it even harder for the average person to tell trustworthy messages from malicious ones. 
Cybercriminals have used phishing emails for years to fool victims into clicking links that install malware on their computers, or into handing over personal information such as passwords or PINs. 
According to the Office for National Statistics, over half of all adults in England and Wales reported receiving phishing emails in the past year, and UK government research indicates that businesses are the most likely targets of phishing attacks.

[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
