Catches of the Month: Phishing Scams for March 2023

Welcome to our March 2023 review of phishing attacks, in which we explore the latest email scams and the tactics that cyber criminals use to trick people into handing over personal data.

This month, we’re dedicating our feature to a topic that has been circling the cyber security sector – and many others besides – for some time: AI (artificial intelligence).

AI is widely used in many business processes, helping organisations analyse data and automate tasks. However, as its capabilities have advanced, intrepid technophiles are advocating for its use in a broader set of systems.

Cyber security is no exception. Threat detection is increasingly automated, with tools programmed to spot vulnerabilities and intrusions with little to no human intervention.
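
As a rough illustration of what that automation can look like, the sketch below uses an unsupervised anomaly detector to flag suspicious connections without a human reviewing each event. It is a toy example: the features, figures and thresholds are invented for demonstration and are not drawn from any real product.

```python
# Toy anomaly-based intrusion detection sketch (illustrative only).
# Requires scikit-learn and numpy; the "connection" features below are
# invented for demonstration and not taken from any real tool.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is a connection: [bytes sent, bytes received, duration (s), failed logins]
normal_traffic = np.array([
    [500, 1200, 2.1, 0],
    [450, 1100, 1.8, 0],
    [520, 1300, 2.4, 1],
    [480, 1250, 2.0, 0],
])

# Train on traffic assumed to be benign, then score new events automatically.
model = IsolationForest(contamination=0.1, random_state=42).fit(normal_traffic)

new_events = np.array([
    [510, 1225, 2.2, 0],      # resembles the normal traffic above
    [90000, 150, 45.0, 12],   # large outbound transfer with repeated failed logins
])

for event, label in zip(new_events, model.predict(new_events)):
    status = "ALERT" if label == -1 else "ok"   # -1 means the model flags an anomaly
    print(status, event)
```

In practice these systems ingest far richer telemetry, but the pattern is the same: the model, not an analyst, decides which events deserve attention.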

Meanwhile, cyber security researchers have identified other potential uses, such as collating data from previous breaches to spot trends in the way attackers are targeting organisations and developing protections to thwart those attacks.

In perhaps the most ambitious use for the technology, SlashNext CEO Patrick Harr indicated in Security Magazine last week that AI could be “the great equalizer to block phishing attacks”.

Harr pointed to several recent security incidents in which employees were compromised – whether through social engineering or brute-force password attacks – and which demonstrated the flaws in existing technical controls.

He highlighted the rapidly evolving nature of cyber crime, which has made traditional methods of analysing domain reputation scores and signatures ineffective. In their place, he recommends cloud-based machine learning engines that help employees spot phishing emails in real time.
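
SlashNext hasn’t published how its engine works, but the general technique – training a machine learning model on message text so it can score new emails as they arrive – can be sketched in a few lines. The example below is a hypothetical illustration using scikit-learn; the sample emails, labels and output are invented and bear no relation to any commercial product.

```python
# Minimal sketch of ML-based phishing text classification (illustrative only).
# Requires scikit-learn; the sample emails and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account has been suspended - verify your password immediately",
    "Urgent: confirm your bank details to avoid account closure",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly sales figures are ready for review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF turns message text into word-frequency features;
# logistic regression learns which words correlate with phishing.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

incoming = "Please verify your password urgently to keep your account"
probability = classifier.predict_proba([incoming])[0][1]
print(f"Phishing probability: {probability:.2f}")
```

A production engine would be trained on millions of messages and many more signals than raw text, but the principle – scoring each message as it arrives rather than matching it against a fixed blocklist – is the same.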

Fighting fire with fire

It’s not only the good guys that are using AI in the cyber security landscape. There have been several reports recently of criminal hackers using the technology to enhance their techniques.

In one example, researchers discovered that crooks were using AI to find weak spots in organisations’ anti-malware detection algorithms, helping them design malicious software that can outsmart defences.

Elsewhere, researchers warned that AI can analyse data sets of leaked credentials to identify trends in the way people create passwords. This is particularly useful against systems that encourage users to include special characters, because the requirement often leads to predictable character substitution (such as an ‘@’ instead of an ‘a’).

Experts often warn against this technique for this very reason. Computers are built to spot patterns in language, and character substitutions are among the easiest patterns to crack. Humans, by contrast, often struggle to remember the substitutions they have created.

[Image: xkcd comic on password strength] Source: xkcd
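
To see why substitutions offer so little protection, consider how few variants they actually produce. The snippet below is a hypothetical illustration (not a real cracking tool) that enumerates the common substitutions for a single base word and counts them:

```python
# Illustration of why character substitution adds little strength:
# the full set of "clever" variants of one word is tiny and trivially enumerated.
from itertools import product

SUBSTITUTIONS = {"a": "a@4", "e": "e3", "i": "i1!", "o": "o0", "s": "s$5"}

def variants(word: str):
    """Yield every combination of common substitutions for one word."""
    options = [SUBSTITUTIONS.get(ch, ch) for ch in word.lower()]
    for combo in product(*options):
        yield "".join(combo)

candidates = list(variants("password"))
print(len(candidates))           # 54 variants - a trivially small search space
print("p@$$w0rd" in candidates)  # True
```

Fifty-four guesses is nothing to software that can test millions of candidates per second, which is why a long passphrase tends to be far stronger than a short word dressed up with symbols.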

The threat of AI-backed cyber crime reached new heights following the release of ChatGPT in November last year.

The much-discussed language model boasts the power to create conversational prose that passes for authentic human writing. It’s been widely used by professionals and hobbyists to generate quotes and summarise research, but cyber security experts have expressed concern that the technology could be used for malicious purposes.

One of the biggest problems that scammers face when crafting phishing emails is making their messages look genuine. Many speak English as a second language, resulting in copy that reads awkwardly and gives recipients a clue that they are being scammed.

Although this hasn’t proven to be a major obstacle – with a Verizon study finding that 82% of data breaches involved a human element such as phishing – the threat could reach monumental proportions if scammers are able to produce convincing copy automatically.

Researchers at Check Point were among the first to identify this possibility. In a proof of concept published in December 2022 – just weeks after ChatGPT was launched – they demonstrated that the tool could be used to conduct phishing scams.

Not only can it be given a prompt to deliver a convincing message that encourages users to click a link, but it can also write the malware to hide within those messages.

Although ChatGPT has been programmed to avoid the creation of harmful material, it didn’t spot the malice in the researchers’ request for code that “will download an executable from a URL and run it. Write the code in a way that if I copy and paste it into an Excel Workbook it would run the moment the excel file is opened.”

The initial code was flawed, but after a series of further instructions, the program produced “working malicious code”.

In the wild

A few weeks after Check Point published that article, its researchers spotted ChatGPT-produced malware in the wild.

They found a thread named ‘ChatGPT – Benefits of Malware’ in a popular underground hacking forum, where a user disclosed that they were experimenting with the tool to create malware strains and techniques described in research publications.

Check Point concluded that some of the crooks using ChatGPT had “no development skills at all”, which might sound like a positive: it suggests that those using the technology don’t have the knowledge to hone the code into something truly damaging.

However, a criminal with even rudimentary malware poses a threat, in much the same way as a criminal with any weapon is dangerous.

Besides, it’s not unusual for scammers to lack coding skills. Cyber crime is often promoted as a quick and risk-free way to make money, with scammers purchasing off-the-shelf tools that run automatically.

The possibility of AI-generated malware doesn’t pose an entirely new threat. However, it makes it easier than ever for criminal hackers to launch attacks. When combined with tools that can craft convincing messages, you are faced with a huge problem – one that demonstrates the limitations of AI as a threat-detection tool.

Rock ’em sock ’em robot phishing

One of the biggest challenges with cyber threat protection is that crooks have practically unlimited resources. There are countless weaknesses to exploit, whether that’s vulnerabilities they uncover or organisations whose employees they trick, and it takes only one error for them to strike.

Organisations can fight back with AI technology of their own, but that alone cannot protect them. AI detection tools work based on trends they have previously spotted. Defensive technology is only ever reactive and has a limited ability to anticipate previously unseen attack methods.

Relying on AI tools to fight other AI tools ensures a proverbial game of Rock ’Em Sock ’Em Robots, where you mash a button and hope for the best. It’s impossible to gain the upper hand, because you are unable to break free from a predetermined attack pattern of action and reaction.

This approach will always work in attackers’ favour, because they don’t need to be successful with every attack. They only need one successful intrusion to gain access to sensitive information they can sell.

Not an effective cyber security strategy.

The key to organisational resilience is to combine technology with other forms of threat detection. This is where the human element works to organisations’ advantage. It’s often considered the weak link in cyber security, because people are prone to mistakes, but here’s the catch: most scams only succeed if a human interacts with them.

A scam email might make its way past a threat detection tool and into an employee’s inbox, but it’s ultimately powerless if the recipient doesn’t take the bait.

There are no AI tools that can manipulate the way a human responds to a phishing email. That comes from social engineering techniques, with messages attempting to generate a sense of fear, excitement or urgency.

If you can teach employees to spot these tactics, you have a threat detection tool that’s impervious to technological trends and capable of anticipating whatever pretext scammers use.

Even Patrick Harr, who advocates for the use of AI defence tools, recognises their limits in relation to the human element. He conceded that the technology is at best a “safety net”, while indicating that organisations’ most common failure is not providing sufficient funding for staff awareness training.

Any organisation that’s committed to cyber security must focus on educating its staff on the threat of phishing before rolling out advanced technology.

IT Governance offers a range of training options to help you with this, including our Phishing Staff Awareness Training Programme.

This 45-minute course is designed to give your staff the tools they need to recognise and appropriately respond to malicious phishing attempts.

We provide real-life examples to show staff what to look out for and give tips on the steps to take if they think they’ve received a scam email.

The training programme is updated quarterly with the latest scams and tactics, helping you stay on top of the threat landscape.