Darktrace, a British cybersecurity firm, has warned that since the release of ChatGPT, criminals have increasingly been using artificial intelligence to create sophisticated scams that con employees and compromise systems at businesses all over the world.
The Cambridge-based firm reported that operating profits had dropped 92% in the half-year to December. It also said that artificial intelligence had made it easier for “hacktivists” to target businesses with ransomware attacks.
Since ChatGPT was launched last November, the company says it has seen an increase both in the number of convincing and complex scams carried out by hackers and in the overall volume of attacks.
While Darktrace has observed a steady increase in email-based attacks in the months since ChatGPT’s release, attacks that rely on false links to trick victims into clicking have declined over the same period. At the same time, the linguistic complexity of the emails, including the volume of text, punctuation, and sentence length, has increased.
These findings indicate that cybercriminals might not simply be redirecting their focus to creating more sophisticated social engineering scams. Instead, they are a
[…]