ChatGPT: Researcher Develops Data-Stealing Malware Using AI

Since its introduction last year, ChatGPT has created a buzz among tech enthusiasts around the world with its ability to write articles, poems, movie scripts, and much more. The AI can even generate functional code when given clear, well-written instructions.

Although OpenAI has put security measures in place and the majority of developers use the tool for harmless purposes, a new analysis suggests that threat actors can still use the AI to create malware.

According to a cybersecurity researcher, ChatGPT was used to create zero-day malware capable of collecting data from a compromised device. Alarmingly, the malware evaded detection by every vendor on VirusTotal.

Forcepoint researcher Aaron Mulgrew said he decided early in the development process not to write any code himself, relying instead only on cutting-edge approaches typically used by highly skilled threat actors, such as rogue nation-states.

Mulgrew, who described himself as a “novice” at developing malware, said he chose Go as the implementation language not only because it was simple to use but also because he could manually debug the code if necessary. To evade detection, he also used steganography, which conceals sensitive information within an ordinary file or message.
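For readers unfamiliar with the technique, the short Go sketch below illustrates the general idea of least-significant-bit steganography: hiding a small payload in the low bits of an image's pixel data so the file still looks like an ordinary picture. It is not Mulgrew's code, which Forcepoint has not published; the file names, the embedPayload helper, and the omission of a payload-length header and extraction logic are illustrative assumptions only.

package main

import (
	"image"
	"image/color"
	"image/png"
	"log"
	"os"
)

// embedPayload hides each bit of the payload in the least-significant bit of
// the red, green, and blue channels of the cover image, scanning left to
// right, top to bottom. A real tool would also encode the payload length;
// this sketch omits that for brevity.
func embedPayload(cover image.Image, payload []byte) *image.RGBA {
	bounds := cover.Bounds()
	out := image.NewRGBA(bounds)

	// Flatten the payload into a stream of individual bits.
	bits := make([]uint8, 0, len(payload)*8)
	for _, b := range payload {
		for i := 7; i >= 0; i-- {
			bits = append(bits, (b>>uint(i))&1)
		}
	}

	idx := 0
	for y := bounds.Min.Y; y < bounds.Max.Y; y++ {
		for x := bounds.Min.X; x < bounds.Max.X; x++ {
			r, g, b, a := cover.At(x, y).RGBA()
			ch := [3]uint8{uint8(r >> 8), uint8(g >> 8), uint8(b >> 8)}
			for i := range ch {
				if idx < len(bits) {
					ch[i] = (ch[i] &^ 1) | bits[idx] // overwrite the lowest bit
					idx++
				}
			}
			out.SetRGBA(x, y, color.RGBA{ch[0], ch[1], ch[2], uint8(a >> 8)})
		}
	}
	return out
}

func main() {
	// "cover.png" and "stego.png" are hypothetical file names for this example.
	f, err := os.Open("cover.png")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	cover, err := png.Decode(f)
	if err != nil {
		log.Fatal(err)
	}

	stego := embedPayload(cover, []byte("example payload"))

	outFile, err := os.Create("stego.png")
	if err != nil {
		log.Fatal(err)
	}
	defer outFile.Close()

	if err := png.Encode(outFile, stego); err != nil {
		log.Fatal(err)
	}
}

Because flipping the lowest bit of a color channel changes the image imperceptibly, the resulting file looks like an ordinary picture to casual inspection, which is what makes the technique attractive for smuggling data past defenses.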

Creating Dangerous Malware Through ChatGPT 

Mulgrew found a loophole in ChatGPT’s cod

[…]
Content was cut in order to protect the source. Please visit the source for the rest of the article.

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents
