Malware created with ChatGPT evades antivirus, EDR
Researchers have used ChatGPT to create malicious software capable of evading detection by traditional antivirus and endpoint detection and response (EDR) systems.
Through prompts, the malware uses ChatGPT to generate dynamic, mutating code at runtime, making the resulting exploits difficult for cybersecurity tools to detect. Because the code continually changes, standard signature-based security techniques struggle to identify it.
In one case, HYAS InfoSec researchers developed a proof-of-concept malware dubbed BlackMamba. BlackMamba uses ChatGPT to generate a keylogger that mutates on every execution. This makes the malware difficult for EDR systems to detect, since its signature varies each time it runs. In repeated tests, BlackMamba evaded detection by an industry-leading EDR system.
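To illustrate why per-sample signatures fail here, the benign sketch below (illustrative only, not from the research) hashes two trivially mutated but behaviorally identical code snippets: a signature derived from file contents changes even when the functionality does not.

```python
import hashlib

# Two harmless snippets with identical behavior but mutated identifiers.
# A content-based signature (here, a SHA-256 hash) differs for each,
# which is why per-sample signatures fail against code that is
# regenerated on every run.
variant_a = b"def log(k):\n    buf.append(k)\n"
variant_b = b"def record(key):\n    store.append(key)\n"

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

print(sig_a == sig_b)  # False: same behavior, different signature
```

This is the core problem the article describes: detection tuned to known byte patterns has nothing stable to match against when the payload is synthesized fresh at runtime.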
Separately, cybersecurity firm CyberArk created a proof of concept that embeds ChatGPT in the malware itself. ChattyCat is an open-source project that provides a template for building various types of malware, such as ransomware and infostealers, while leveraging ChatGPT's capabilities to avoid detection. The content filters in the ChatGPT API appear weaker than, or absent compared with, those in the original web version.
One of the most difficult aspects of combating this threat is that ChatGPT can be misled into producing effective malicious code despite its content filters. Even though ChatGPT constrains and filters its replies based on the context of a request, asking it to produce code with functionality comparable to malicious code increases the likelihood that it will comply.
The growing threat of ChatGPT-based malware and related concerns have prompted some experts to call for regulation of generative AI.
The sources for this piece include an article in CSO Online.