Cybercriminals have already leveraged the power of AI to develop code that could be used in ransomware attacks, according to Sergey Shykevich, a lead ChatGPT researcher at the cybersecurity firm Check Point.
Threat actors can use ChatGPT's AI capabilities to scale up attack methods that currently depend on human effort. Beyond aiding cybercriminals in general, AI chatbots also help a particular subset of them: romance scammers. An earlier McAfee investigation noted that these scammers often hold lengthy conversations with victims to appear trustworthy and lure them in. Chatbots like ChatGPT make that job easier by generating the text for them.
ChatGPT has safeguards in place to keep hackers from using it for illegal activities, but they are far from infallible. In tests, a request to express the desire for a romantic rendezvous was turned down, as was a request to draft a letter asking for financial assistance to leave Ukraine.
Security experts are concerned about the misuse of ChatGPT, which now powers Bing's new, troublesome chatbot. They see the potential for chatbots to assist in phishing, malware, and hacking attacks.
When it comes to phishing attacks, the entry barrier is already low, but ChatGPT could make it simple for people to proficiently create dozens of targeted scam emails — as long as they craft good prompts, accordi
[…]
This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents