Sophos Studies Reveal AI’s Role in Cybercrime: Misuse and Criminal Hesitation

Sophos, a global cybersecurity as-a-service provider, has published two studies on AI in cybercrime. The first reveals potential misuse of technologies like ChatGPT for large-scale scams without technical expertise. The second contrasts this, highlighting cybercriminals’ hesitation and concerns about using AI and large language models (LLMs) for their attacks.

Two studies on the use of AI in cybercrime have been published by Sophos, a worldwide cybersecurity as-a-service provider. The first paper, ‘The Dark Side of AI: Large-Scale Scam Campaigns Made Possible by Generative AI,’ shows how criminals may be able to use ChatGPT and similar technologies in the future to commit widespread fraud with no technical expertise. The second paper, ‘Cybercriminals Can’t Agree on GPTs,’ offers a counterpoint: despite AI’s promise, many cybercriminals remain cautious, and even worried, about employing large language models (LLMs) such as ChatGPT in their attacks.

The Negative Aspects of Artificial Intelligence

Using a basic e-commerce template together with LLM tools such as GPT-4, Sophos X-Ops was able to build a fully functional website with AI-generated images, audio, and product descriptions, along with a fake Facebook login page and a fake checkout page designed to steal people’s login credentials and credit card data. With a single button press, Sophos X-Ops could then use the same technique to construct hundreds of similar websites in minutes, requiring very little technical skill.

“It’s natural – and expected – for criminals to turn to new technology for automation. The original creation of spam emails was a critical step in scamming technology because it changed the scale of the playing field,” said Ben Gelman, Senior Data Scientist at Sophos. “New AIs are poised to do the same; if an AI technology exists that can create complete, automated threats, people will eventually use it. We have already seen the integration of generative AI elements in classic scams, such as AI-generated text or photographs to lure victims. However, part of the reason we conducted this research was to get ahead of the criminals. By creating a system for large-scale fraudulent website generation that is more advanced than the tools criminals are currently using, we have a unique opportunity to analyze and prepare for the threat before it proliferates.”

Cybercriminals Can’t Agree on GPTs

To better understand attacker sentiment around AI, Sophos X-Ops examined discussions about LLMs on four well-known dark web forums. Although hackers’ use of AI appears to be in its early stages, threat actors on the dark web are already discussing how it could be used for social engineering. According to Sophos X-Ops, AI has already been used in romance-based cryptocurrency scams.

Furthermore, Sophos X-Ops found that the bulk of posts concerned either ‘jailbreaks,’ or methods of bypassing LLM safety measures so that the models can be used for malicious purposes, and compromised ChatGPT accounts for sale. Sophos X-Ops also discovered ten ChatGPT derivatives whose developers claimed they could be used to launch cyberattacks and generate malware. Threat actors, however, had mixed opinions about these derivatives and other malicious applications of LLMs; several criminals expressed worry that the creators of the ChatGPT imitations were attempting to defraud them.

“Since ChatGPT’s launch, there has been a great deal of anxiety about how hackers are abusing AI and LLMs, but according to our study, threat actors are now more doubtful than enthusiastic. We only discovered 100 postings about AI in two of the four dark web forums that we looked at. In contrast, we discovered 1,000 postings on cryptocurrencies throughout the same time period,” said Christopher Budd, director of Sophos’ X-Ops research. “We did see some fraudsters trying to utilize LLMs to construct malware or attack tools, but the outcomes were crude and often met with suspicion from other users. In one instance, a threat actor who was keen to demonstrate ChatGPT’s potential unintentionally gave out important details about who he really was. We also came across a ton of ‘thought pieces’ about the ethical ramifications of AI usage and its possible detrimental impacts on society. Stated differently, it seems that hackers and the general public are now engaged in the same discussions over LLMs.”