The Rise of FraudGPT: A New Tool for Cybercriminals
Less than two weeks after the emergence of WormGPT, another dangerous tool called FraudGPT is making its way through the dark web. Like its predecessor, FraudGPT is designed specifically for offensive purposes, such as launching phishing attacks and generating malicious code. Cybersecurity experts are concerned because the tool gives cybercriminals more effective ways to carry out their illicit activities.
FraudGPT: The Dark Web’s Latest Threat
According to Rakesh Krishnan, a senior threat analyst with cybersecurity company Netenrich, FraudGPT has been circulating on Telegram channels since July 22. It is being sold on various dark web marketplaces and on Telegram, marketed exclusively to cybercriminals looking to engage in spear phishing, build cracking tools, perform carding, and more.
For a subscription fee ranging from $200 per month to $1,700 per year, malicious actors gain access to a powerful tool that enables them to craft convincing phishing emails, create undetectable malware, develop hacking tools, identify leaks and vulnerabilities, and even learn coding and hacking techniques.
Netenrich’s report highlights that FraudGPT has already garnered over 3,000 confirmed sales and reviews. The operators behind the tool offer round-the-clock escrow, giving buyers a safer way to complete their illicit transactions.
ChatGPT and Its Security Concerns
The rise of AI technologies, particularly generative AI technologies like ChatGPT, has raised concerns among cybersecurity professionals. ChatGPT, developed by startup OpenAI and heavily promoted by Microsoft, has gained immense popularity since its launch in November 2022. With over 100 million monthly active users, it has become the fastest-growing consumer application of all time.
However, the ease of use and adaptability of ChatGPT have also made it an attractive tool for threat actors. WormGPT and now FraudGPT have demonstrated how these AI models can be used for malicious purposes beyond generating text. ChatGPT’s flexibility extends to writing code, making it a powerful resource for cybercriminals looking to launch sophisticated attacks.
John Bambenek, a principal threat hunter at Netenrich, explains that generative AI can be used to enhance PowerShell tooling or quickly create numerous PowerShell tools, an integral component in many advanced cyber attacks. Unlike ChatGPT, FraudGPT lacks ethical safeguards, making it a tool without boundaries.
Assessing the Immediate Threat of FraudGPT
Although FraudGPT poses a significant risk, some experts question its actual effectiveness at this stage. Melissa Bischoping, director of endpoint security research at Tanium, argues that FraudGPT’s features don’t bring much new capability compared to what attackers can achieve with ChatGPT. She believes that the hype surrounding AI-based attacker tools allows scammers to exploit the surge in interest.
Timothy Morris, chief security advisor at Tanium, also suggests that FraudGPT itself could be a scam. Regardless, businesses are advised to continue employing proven security practices such as threat hunting, robust security controls, multifactor authentication, and user training.
Pyry Avist, co-founder and CTO at security firm Hoxhunt, acknowledges that “black hat GPT models” like FraudGPT are concerning, but he views them as an extension of a larger trend rather than a groundbreaking innovation in malicious technology. These models essentially act as generative AI jailbreaks, enabling cybercriminals to create compelling phishing emails easily and effectively impersonate high-level company executives.
Identifying the Culprit Behind FraudGPT
According to Netenrich’s investigation, the threat actor responsible for FraudGPT created a Telegram channel on June 23. The individual claims to be a verified vendor on various dark web marketplaces, including Empire, Torrez, AlphaBay, and Versus. By selling through Telegram channels, they avoid the exit scams commonly seen on dark web marketplaces.
Addressing malicious versions of ChatGPT is crucial, but experts suggest placing more emphasis on countering multi-step attacks. Chatbots combined with deepfake technology have the potential to carry out highly sophisticated attack campaigns at scale, amplifying the challenges posed by malware and business email compromise (BEC).
The emergence of FraudGPT on the dark web signals the growing threat of AI-powered tools for cybercriminals. As the development of generative AI technologies continues to accelerate, it is essential for individuals and organizations to stay vigilant and keep their security measures up to date. It’s important not to underestimate the potential impact of malicious AI models, even if their effectiveness is still in question.