AI Tools and Phishing Threats: What's All the Hype?


We know the ingredients of a typical phishing email, but how will those elements be affected by the advancement of AI tools such as ChatGPT? What will the impact be on identifying such sophisticated attacks?

Email Security Awareness expert and former Social Engineer James Linton joins us to look at the hype around AI tools and phishing attacks, as well as the potential threats and challenges that businesses and individuals face.

The current hype around AI and phishing threats has security professionals considering its potential impact on social engineering and other criminal activities. Ensuring your security awareness training covers the safe use of AI tools - to prevent information leaks or malware downloads - is another important aspect of this moment (and we've got you covered with a 1-minute video and free downloadable PDF to help).

Despite the anticipated changes, James suggests focusing on small increments and discussing various areas of impact.

One point he emphasized was the need to consider the implications of AI beyond content creation, such as its potential influence on online behavior. James encouraged a thoughtful and nuanced approach to the discussion of AI and its effects.

Obviously, with all the hype around AI at the minute...I've been diving in - seeing what it can do - thinking, 'How would it have changed things that I'd done five years ago when I was doing social engineering? Would it change what I was doing as a threat researcher? Would it change what I was doing if I was a criminal now?' And obviously, the answer is 'yes'. But I think there are certain ways we can look at the different areas where it's going to make changes without panicking too soon about the big 'end of days' event that wipes us all out, which could come. But I think it's going to be small increments at first.

AI Isn't Taking Over...Yet

While AI tools can certainly improve the grammar of phishing emails to the point that what was once a flag for potential phishing now holds little relevance, tools like ChatGPT have put some safeguards in place to prevent directly malicious attempts. James shared his attempts to creatively lead the AI into generating content under the guise of a CEO who wanted to send an email at a time when the receiver would be busy and not notice the spelling mistakes.

In other words, he wanted to see whether the tool could sniff out a threat that wasn't explicitly phrased with the keywords it typically filters. The result: the tool still identified the core intent of deception and rejected the query. So while it's not impossible for a threat actor who is persistent in finding the loopholes within the machine learning, we're still a little ways off from the mass scaling of attacks for the moment. As James commented, "I would have loved it if it had got around that [deterrent] - I could have kept going and trying different things. Then again, would it have been worth it? It's one of the key things that, I think, once the hype dies down, we'll get to see what difference it makes."

"13 years ago, Grammarly became a thing. You would think by now errors in emails would be a thing of the past for everyone. And there are kind of not a lot of red flags for criminals - they tend to be error based or language based. A lot of the cybercriminals I interacted with - you could tell they didn't have a really good grasp on modern Western office culture and things like that...The fact that an AI or a machine is suggesting what to write and making the sentences and spelling and grammar better, I don't think is a huge breakthrough. It's certainly a huge time saver. But ChatGPT as a massive spell check isn't that big of a move for criminals."

What Sets ChatGPT Apart From Other AI Writing Tools?

James went on to consider the differences and advantages of ChatGPT over other AI tools. His conclusion: while tools like Google and Grammarly can suggest improved text or tone of voice, ChatGPT provides access to skill sets. "You can say, 'Write this and be mindful of the seven principles of persuasion'...and that's a really powerful thing. That's like having experts assisting you...and that's the kind of lift it brings in terms of creating and writing. That ability to lift and optimize for coerciveness definitely allows for targeting a certain persona more specifically."

He feels this will be one of the next advances within spear phishing - the ability to go from a generically targeted email using name fields, etc., to a more personalized context. For example, malicious attachments typically carry fairly generic, ambiguous file names like "End of Quarter Results"; with AI pulling in a list of customer names from a company website, they could instead carry file names with more personalized details. These little details all reduce suspicion.

James further went on to speculate on the possibility of using news events to trigger an automation that crafts the next campaign - for instance, in a natural disaster, criminals already have a "suite of deliverables," so to speak, for the fake charities and phishing emails that pop up. "The fact that you could just stick that topic into a huge prompt, perhaps, that would go off and send that out within a short space of time is a bit of a nightmare scenario. I think the thing that will mainly stop that is the abuse of third-party systems that allow accounts to be set up in an automated way. I think that will be where that bottleneck is kind of squeezed out. But the personalization will be interesting. Will it be a huge game-changer? It's hard to actually know how much more effective it will be."

Criminals Don't Need To Reinvent the Wheel

As James put it, the best emails are already sent out by the actual companies; criminals don't really need to get overly creative. While they can make the messaging a bit more coercive or change the call to action, their whole aim is to not stand out. He did note, though, that phishing emails tend to lean slightly towards impactful over curiosity-based content, especially in a work context.

Where he was more concerned about social engineering with ChatGPT was in the context of social media, and the fact that it can incorporate a few key points to remember from a previous conversation to continue a ruse more believably. He's interested to see the FBI reports on business email compromise and romance scams to see if there is more of an impact there, as those scams tend to be more interactional in nature.

He went on to note that what goes on in the marketing world is mirrored in the criminal world. All the ChatGPT prompts, cheat sheets, PDFs, etc. that the marketing community has exploded with can be expected to be utilized by threat actors as well.

In some respects, I think the marketing world may even get the jump in terms of using the technology the most effectively. I'd be half tempted to keep an eye on what marketing is doing, especially in terms of sales and outreach and things like that - how it can take an Excel sheet and populate it with LinkedIn connections and then formulate things to say that are believable. That sort of technology is the stuff that's going to be the most attractive for what they do.

Because I don't think it'll be a case of ChatGPT taking over everything part and parcel. I think it'll just slowly level up all the elements. Things will start to get more personalized; templates will start to get a bit more varied and better.

What About Security Awareness Training?

It's clear security awareness training should go beyond the once-a-year 'refresher' course - training needs to be as agile as the threats themselves, including trending topics. As mentioned, poor grammar is no longer a definite flag of a phishing attempt, due both to an increasingly global workforce and to AI tools.

According to James, "We are being painted into a corner in some respects, having to go back to the very bare atomic structure of what someone is being asked to do and understanding what malicious intent actually looks like. Because obviously we're losing the battle for images, video, voice, the written word - you can't trust any of it. Links are tricky to understand. There's an awful lot going on. If someone opens 20 emails a day, that's about 7,000 a year people are opening...There's going to be a reckoning at some point where we figure out what is the most pragmatic thing to teach, and obviously the most current threats out there would seem valuable."

He recommends looking at the most frequently used tactics, breaking those down, comparing them to what is standard, and teaching employees from there. He also suggests training employees on the basic elements that scams and fraud are built on, helping them identify the makings of a scam rather than hoping to educate them on every last variation a scammer can invent. "I think we need to look for those common things rather than go, 'Here's a million things that could happen.'"

Especially things like unexpected emails - we kind of say unexpected is a trigger, but we don't then follow up and say: actually, there's a whole classification of emails that will always be unexpected, and you can't pick when they'll arrive. Those are the ones you have to be careful about, because scammers will often try to hide in that space - timing is one of the hardest things for them to get right.



Connect with James Linton on LinkedIn

7 Essential CyberSecurity Tips When Using ChatGPT and AI Tools

Top 5 Phishing Simulation Templates - March Edition