Now You See Me, Now You Don't: Using LLMs to Obfuscate Malicious JavaScript
NetmanageIT OpenCTI - opencti.netmanageit.com
SUMMARY:
This article discusses an adversarial machine learning algorithm that uses large language models (LLMs) to generate novel variants of malicious JavaScript at scale. The algorithm iteratively rewrites malicious code to evade detection while preserving its functionality, applying LLM-prompted transformations such as variable renaming, dead code insertion, and whitespace removal. The resulting variants significantly reduced detection rates on VirusTotal. To counter this, the researchers retrained their classifier on LLM-rewritten samples, improving its real-world detection rate by 10%. The study highlights both the threats and the opportunities LLMs present in cybersecurity: they can be used to create evasive malware variants, but also to harden defensive models against exactly those variants.
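The loop below is a minimal sketch of the greedy rewrite-and-score procedure the summary describes, under stated assumptions: all function names (rewrite_with_llm, score_malicious, behavior_unchanged) and the prompt list are illustrative placeholders, not the researchers' actual implementation. A real system would back these stubs with a hosted LLM, a trained JavaScript malware classifier, and a dynamic-analysis harness.

```python
# Hypothetical sketch of the iterative LLM rewrite-and-score loop.
# Every function here is a placeholder stub; only the control flow
# reflects the procedure described in the article summary.
import random

TRANSFORMATION_PROMPTS = [
    "Rename all variables to random identifiers.",
    "Insert dead code that never executes.",
    "Remove all non-essential whitespace.",
]

def rewrite_with_llm(code: str, prompt: str) -> str:
    """Placeholder for an LLM call that applies one transformation prompt."""
    return code  # a real implementation would return the model's rewrite

def score_malicious(code: str) -> float:
    """Placeholder malware classifier; returns estimated P(malicious)."""
    return 1.0

def behavior_unchanged(original: str, rewritten: str) -> bool:
    """Placeholder dynamic check that the rewrite preserves runtime behavior."""
    return True

def generate_variant(code: str, steps: int = 10, threshold: float = 0.5) -> str:
    """Greedily apply random transformations, keeping only rewrites that
    preserve behavior and lower the classifier's maliciousness score."""
    best, best_score = code, score_malicious(code)
    for _ in range(steps):
        candidate = rewrite_with_llm(best, random.choice(TRANSFORMATION_PROMPTS))
        if not behavior_unchanged(best, candidate):
            continue  # discard rewrites that break functionality
        score = score_malicious(candidate)
        if score < best_score:
            best, best_score = candidate, score
        if best_score < threshold:
            break  # classifier no longer flags the sample
    return best
```

The same loop serves the defensive side described in the summary: the accepted variants can be fed back into the classifier's training set, which is the retraining step the researchers credit with the 10% improvement in real-world detection.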
OPENCTI LABELS:
wormgpt, fraudgpt
Open this report in the NetmanageIT OpenCTI public instance via the link below.
NOTE: Use the public read-only credentials shown on the login page banner.