How GPT models can be used to create Polymorphic malware

January 30, 2023 - TuxCare PR Team

According to CyberArk researchers, GPT-based models like ChatGPT can be used to create polymorphic malware because they can generate large amounts of unique, varied code on demand.

The researchers claim that malware built with ChatGPT can evade security products and complicate mitigation with little effort or investment from the attacker. Furthermore, the bot can be used to build highly evasive malware whose files contain no malicious code at all, because the malicious logic is fetched from the model at runtime, making it difficult to detect and mitigate.

Polymorphic malware is a type of malicious software that changes its own code to avoid detection by antivirus programs. It is a particularly dangerous threat because it can adapt and spread faster than security measures can respond. It typically works by altering its appearance with each new iteration or infection, so signature-based antivirus tools cannot match it against known samples.
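
To see why this defeats signature matching, consider a minimal, purely illustrative sketch in Python (our choice of language, since the article itself contains no code): two harmless snippets that do exactly the same thing, differing only in surface form, hash to completely different values, so a scanner that fingerprints known samples by hash or byte pattern misses the rewritten variant.

```python
import hashlib

# Two functionally identical, harmless snippets that differ only in
# variable names and structure - the kind of surface-level rewriting
# a polymorphic engine automates on every new copy.
variant_a = "data = 'hello'\nbuffer = data"
variant_b = "d = 'hello'\ntmp = d\nbuffer = tmp"

# A scanner that fingerprints samples by hash sees two unrelated files.
print(hashlib.sha256(variant_a.encode()).hexdigest())
print(hashlib.sha256(variant_b.encode()).hexdigest())
```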

Trained on a dataset of existing malware samples, ChatGPT or any other GPT-based model could potentially be used to create polymorphic malware. Once trained, the model can generate new, distinct variants of the malware that resist detection by traditional antivirus software. This is because polymorphic malware can change its appearance, code, or encryption while still carrying out the same malicious actions.

It is crucial to remember, however, that creating polymorphic malware is a challenging endeavor that requires knowledge of programming and malware development, and it would be difficult to achieve with a language model alone. Turning the generated text into functional malware would still require a skilled individual or group with that expertise.

The researchers' first step was to circumvent the content filters that prevent the chatbot from producing malicious code, which they accomplished by phrasing their requests in an insistent, commanding tone.

The researchers instructed the bot to complete the task while adhering to multiple constraints, and they received functional code in return. They also noticed that the API version of ChatGPT, unlike the web version, did not appear to apply its content filter. The researchers were unsure why this occurred, but since the web version struggled with their more complex requests anyway, working through the API made their job easier.

The researchers then used the bot to mutate the original code, creating multiple unique variations. The end result is a polymorphic program built on the continuous creation and mutation of injectors.

The sources for this piece include an article in DARKReading.

Watch this news on our YouTube channel: https://www.youtube.com/watch?v=ycO6hVmt5R4
