Cybersecurity researchers are raising alarms over the misuse of large language models (LLMs) by cybercriminals to create new variants of malicious JavaScript at scale. A report from Palo Alto Networks Unit 42 highlights how LLMs, while not adept at generating malware from scratch, can effectively rewrite or obfuscate existing malicious code.
This capability has enabled the creation of up to 10,000 novel JavaScript variants, significantly complicating detection efforts.
Malware Detection Challenges
Because the transformations LLMs produce read like natural, human-written code, rewritten scripts can slip past traditional analyzers. Researchers found that restructuring a script in this way frequently flips a classifier's verdict from malicious to benign.
In one case, 88% of the modified scripts successfully bypassed malware classifiers.
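To illustrate why behavior-preserving rewrites defeat exact-match detection, here is a minimal Python sketch (not from the Unit 42 report). The two JavaScript strings are harmless placeholders standing in for a known-bad sample and an LLM-style rewrite of it; a hash-based signature catches the first but misses the second.

```python
import hashlib

# Two behaviorally equivalent JavaScript snippets: the second is a
# rewritten variant (renamed identifiers, restructured syntax) of the
# kind an LLM can produce at scale. Both are harmless illustrations.
original = "function f(u){var x=new XMLHttpRequest();x.open('GET',u);x.send();}"
variant = (
    "const fetchUrl = (target) => {"
    " const req = new XMLHttpRequest();"
    " req.open('GET', target);"
    " req.send(); };"
)

# A signature blocklist built from exact hashes of known-bad samples.
known_bad_signatures = {hashlib.sha256(original.encode()).hexdigest()}

def signature_match(script: str) -> bool:
    """Flag a script only if its hash is already on the blocklist."""
    return hashlib.sha256(script.encode()).hexdigest() in known_bad_signatures

print(signature_match(original))  # True  -- the known sample is caught
print(signature_match(variant))   # False -- the rewritten variant slips through
```

The same behavior, a different surface form, a different hash: this is the gap that each of the thousands of generated variants exploits.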
Despite LLM providers' growing efforts to impose stricter guardrails, underground tools such as WormGPT continue to facilitate malicious activity, including crafting phishing emails and scripting malware.
OpenAI reported in October 2024 that it had blocked more than 20 operations that attempted to misuse its platform for reconnaissance, scripting, and debugging.
Unit 42 emphasized that while LLMs pose significant risks, they also present opportunities to strengthen defenses. Techniques used to generate malicious JavaScript variants could be repurposed to create robust datasets for improving malware detection systems.
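As a rough sketch of that defensive idea, assuming a standard scikit-learn text-classification setup: the generate_variants stub below stands in for an LLM rewriting step, and the tiny corpus uses harmless placeholder strings rather than real malware.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def generate_variants(script: str) -> list[str]:
    """Stand-in for an LLM rewriting step: in practice this would prompt
    a model for behavior-preserving rewrites. Here we apply trivial
    textual tweaks just to keep the sketch runnable."""
    return [script.replace("var ", "let "), script.replace(" ", "  ")]

# Tiny illustrative corpus (placeholders, not real malware).
benign = ["function add(a,b){return a+b;}", "console.log('hello');"]
malicious = ["var x=new XMLHttpRequest();x.open('GET',u);x.send();"]

# Augment the malicious class with generated variants so the model
# learns features that survive rewriting, not one exact surface form.
augmented = malicious + [v for s in malicious for v in generate_variants(s)]

X = benign + augmented
y = [0] * len(benign) + [1] * len(augmented)

# Character n-grams tolerate identifier renaming better than word
# tokens, a common choice for script classification.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(X, y)

# A renamed variant the classifier never saw verbatim.
print(model.predict(["let x=new XMLHttpRequest();x.open('GET',u);x.send();"]))
```

The transformation engine is dual-use: the same pipeline that lets attackers mass-produce evasive variants can feed defenders the hardened training data they need.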