Do AI and ChatGPT Pose New Threats to Your Cyber Security?

by John Svazic on 12 May 2023

Lately, it seems as though there's nothing ChatGPT can't do, especially in the domain of programming. It's renowned for its ability to write clean code. ChatGPT does have its limits, but apparently, writing hacking tools isn't one of them.

Recently, CyberArk, a cybersecurity company, managed to bypass ChatGPT's content restrictions and instruct the AI to construct polymorphic malware. Naturally, this required a bit of human guidance, but it made the hacker's work much more efficient.

With AI becoming increasingly prominent over the last year, cybersecurity professionals have raised alarm bells about the new attack vectors a malicious actor could open up with it. In this article, EliteSec will explain what businesses and cybersecurity experts need to watch out for with the advent of ChatGPT and AI hacking techniques.

How Does ChatGPT Change The Game For Cyber Threats?

AI technology like ChatGPT can be used in a variety of contexts to write code for phishing campaigns, password cracking and more.

Fortunately, OpenAI, the creators of ChatGPT, anticipated these issues and added a variety of security controls to the platform to ensure that it isn't abused.

But hackers are crafty, and with a bit of outside-the-box thinking, it isn't hard for them to find ways around these security controls.

The thing is, ChatGPT has been trained on a large dataset of code and cybersecurity literature. With the right prompts, it can combine those pieces and make connections faster than most humans can. Plus, its ability to rapidly iterate on code is unmatched, allowing hackers to speed up their work drastically.

AI And ChatGPT Threats

Generally speaking, AI and ChatGPT threats still depend on a human to supply the context. Every hack is different because it unfolds under its own specific circumstances. Here are a few of the attacks most pertinent to your business.

Increased Automation Of Cyber Attacks

While this isn't a directly ChatGPT-related threat per se, you need to consider the dangers of generative AI used in an automated context. Many attacks rely on sheer volume, with the hacker flooding a target with malicious traffic or content. Mass phishing campaigns are one instance, and automated network scans are another.

AI Social Engineering Attacks

It’s mind-boggling when you see videos of public figures saying outrageous things that they obviously didn’t say.

That’s the power of AI-created deepfakes. However, it’s not hard to imagine that that same deep fake technology could be used on your boss while saying something much more realistic. Imagine a scammer impersonating your boss’s voice with deep fake technology and leaving a message telling you to hand over your department’s confidential information. All it takes is a bit of writing skill, and you could be fooled in minutes.

AI-powered Malware

In the right situations, AI-powered malware could adapt to its target environment, making it easier to penetrate systems and steal the sensitive data they hold.

We’ll go over a full example of how Cyberark managed to engineer a type of Malware using ChatGPT. They used what’s called polymorphic malware, a variation that constantly alters its code to evade detection by cyber guardrails.

Increased Phishing Attacks

Using the power of ChatGPT's generative AI, hackers can easily impersonate legitimate organizations or individuals, producing a fairly convincing phishing attack so long as the underlying premise is plausible. Today, a hacker still has to tailor each email manually; AI-driven chatbots, however, could make the process completely automated.

How CyberArk Succeeded In Creating Polymorphic Malware Using ChatGPT

We’ve emphasized throughout this article that ChatGPT has a content filter. You can’t just ask ChatGPT to write you some malicious code. That would be too obvious. Instead, you have to ask it to put together pieces of the puzzle on your behalf, without it knowing how they interact. That’s what Cyberark achieved when they created polymorphic malware.

What Is Polymorphic Malware?

Polymorphic malware is a type of malware that automatically changes its code to evade antivirus software. Every time it attacks a system, it alters its appearance to make itself difficult to detect.

The most relevant aspect here is code mutation. ChatGPT can generate many iterations of the same code, so a hacker can easily produce enough variations to slip past standard antivirus software. Such tools rely on signature-based detection to identify attacks, and changing the code's signature each time defeats that approach.
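
To see why signature matching struggles here, consider a minimal sketch of hash-based detection in Python. The signature entry and payloads below are hypothetical placeholders, not real malware indicators.

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known-bad files.
# (The entry below is a placeholder, not a real indicator of compromise.)
KNOWN_BAD_HASHES = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a payload."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(payload: bytes) -> bool:
    """Signature-based check: flags only exact hash matches."""
    return sha256_of(payload) in KNOWN_BAD_HASHES

original = b"payload v1"
mutated = b"payload v1 "  # one extra byte, same behaviour in practice

# The two digests share nothing in common, so a signature written for the
# first payload will never match the mutated one.
print(sha256_of(original))
print(sha256_of(mutated))
```

A polymorphic strain effectively automates that one-byte change at scale, regenerating its code on every infection so that no stored signature ever matches twice.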

Getting Past The Gatekeeper

So, it’s clear that the first order of business for Cyberark was to create a means of bypassing ChatGPT’s security controls. Hence, they developed a rule for ChatGPT to follow, allowing ChatGPT to do anything they said without questioning or saying no. The API version of ChatGPT output was even more lenient, which made their jobs even easier.

How They Succeeded

ChatGPT is perfect for mutating code, that is, producing multiple variations that achieve the same effect every time. That makes it a natural fit for polymorphic malware.

In Cyberark’s case, they continually created and mutated injectors to help them generate mutated outputs in bulk. That’s the essence, of course, of a polymorphic attack.

Because their API calls produced different AI-generated code every time, it was extremely challenging for security software to detect the attacks; no scanner can guard against every possible variation.

In the past, an attacker might have needed to devise several algorithms to produce the volume of code a polymorphic attack requires, but ChatGPT alone is far more efficient than that.

How To Prevent Such Attacks From Happening In The Future

There isn’t any single way to prevent polymorphic malware from intruding on your security posture. Rather, you’ll need to use advanced generative AI tools of your own to help you out. You’ll need software that responds dynamically to the threats that arise in real-time.

How A Cyber Security Company Might Use ChatGPT And AI

AI is a very malleable tool. It can be used equally for good and bad.

Security analysts could find it quite advantageous to use AI tools like ChatGPT. After all, if your opponents are using AI tools, you’ll need to bring your defence up to the same level as the offence.

Improved Cybersecurity Defenses

AI is capable of analyzing large amounts of data to detect patterns and anomalies, and it can automate routine security tasks along the way.

One potential application is a machine-learning algorithm that analyzes your network traffic to detect suspicious activity. Another algorithm might use natural-language processing to analyze your communications for signs of phishing or other types of social engineering attacks.
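
As a concrete illustration of the first idea, here is a minimal sketch of traffic anomaly detection using scikit-learn's IsolationForest. The features, baseline values, and contamination setting are all hypothetical; a real deployment would extract features from your actual flow logs and tune the model carefully.

```python
# A minimal sketch of network-traffic anomaly detection, assuming you have
# per-connection features already extracted from your logs.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_sent, session_duration_s, distinct_ports_contacted]
# (Illustrative baseline values standing in for real, "normal" traffic.)
baseline_traffic = np.array([
    [1_200, 30, 2],
    [900, 25, 1],
    [1_500, 40, 3],
    [1_100, 35, 2],
    [1_000, 28, 2],
])

# Train on traffic assumed to be normal; contamination is a tuning guess.
model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_traffic)

# A huge transfer touching many ports is the kind of pattern a scan or
# exfiltration attempt might produce.
suspicious = np.array([[250_000, 12, 40]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 means normal
```

The appeal of this design is that the model only needs examples of normal traffic: anything sufficiently unlike the baseline gets flagged, even attack patterns nobody has catalogued yet.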

Improved Threat Intelligence

With a large volume of data at its disposal, AI can help you build threat intelligence that lets you anticipate future cyber attacks. AI algorithms and platforms can deliver real-time information about emerging threats to your organization, empowering you to take proactive measures against them.

Better Fraud Detection

You can use AI to analyze user behaviour in your organization for activity that resembles fraud. For instance, unauthorized access to sensitive data could raise red flags in your warning system and let you resolve the matter expediently. You might even want to use AI to automate the response so you can react as quickly as possible.

Once again, this requires a machine-learning model trained on standard user behaviour; from that baseline, you can identify outliers, such as activity that falls well outside a user's normal pattern.
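
For a sense of what baselining looks like at its simplest, here is a sketch that flags logins far outside one user's usual hours. The login history and three-sigma threshold are illustrative assumptions; a production system would track many more behavioural signals.

```python
# A minimal sketch of baselining one user's behaviour from their history.
import statistics

# Hypothetical hours (0-23) at which this user has historically logged in.
login_hours = [9, 9, 10, 8, 9, 10, 9, 8, 9, 10]

mean = statistics.mean(login_hours)
stdev = statistics.stdev(login_hours)

def is_outlier(hour: int, threshold: float = 3.0) -> bool:
    """Flag a login whose hour lies more than `threshold` standard
    deviations from this user's established baseline."""
    return abs(hour - mean) / stdev > threshold

print(is_outlier(9))  # False: matches the usual pattern
print(is_outlier(3))  # True: a 3 a.m. login is far outside the baseline
```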

What Does Increased Reliance On AI Mean For Cybersecurity?

As AI becomes relevant to every possible field, AI-powered tools are going to dominate the cybersecurity space, like the rest of the tech world, before long.

There are positives and negatives to this. One only needs to look at the CyberArk example we covered earlier to see just how easy it is to steer an AI into making the decision you want. As AI defence mechanisms evolve, so too will the threat actors: an adversary could trick a defensive AI into making incorrect decisions and then exploit them.
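
That class of attack is often called adversarial machine learning, and its core trick fits in a few lines. The toy classifier and data below are assumptions for illustration only; they stand in for, say, a model separating benign from malicious activity.

```python
# A toy demonstration of an adversarial input against a linear classifier.
# The data, model, and step size are illustrative assumptions, standing in
# for a defensive model that separates benign (0) from malicious (1) inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (50, 2)),   # class 0 cluster near (0, 0)
               rng.normal(2, 0.5, (50, 2))])  # class 1 cluster near (2, 2)
y = np.array([0] * 50 + [1] * 50)

clf = LogisticRegression().fit(X, y)

x = np.array([[1.5, 1.5]])         # classified as malicious (1)
nudge = 0.75 * np.sign(clf.coef_)  # step against the model's weights
x_adv = x - nudge                  # small shift toward the benign side

print(clf.predict(x))      # [1]: the honest verdict
print(clf.predict(x_adv))  # [0]: the same point, nudged past the boundary
```

Real-world attacks are more sophisticated, but the principle is the same: find the direction the model is most sensitive to and push the input just far enough to change its verdict.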

Not to mention, as AI evolves, it might conceive of new attack vectors that software developers and security professionals haven't yet imagined. It's a scary prospect, but the first completely original AI hack will be a sight to behold.

Protect Your Cybersecurity With EliteSec

Now that you understand the nature of the attack vectors one might create with AI, it’s time for you to take action.

Are you going to leave your company’s reputation vulnerable to a new class of cyber threats? It’s best to consult professionals who understand how AI and cybersecurity relate to each other. Reach out for a free 30-minute consultation to see where your business could improve.


We would be more than happy to discuss this topic further and help you build out your own security controls for your organization. Contact us today and we’ll be happy to chat with you!
