
FraudGPT - New AI Tool Designed for Attacks and Scams

This new generative AI cybercrime tool is tailored to create phishing emails, build hacking tools, and carry out carding.


New generative AI cybercrime tool FraudGPT is being used by malicious actors. (Image: Adobe Stock)

The advantages AI offers for the future are undeniable. However, because the technology is still in its early stages and operates largely without regulation, it is also being used by malicious actors. One example was WormGPT, a tool designed to facilitate cyber attacks.


The newly launched cybercriminal AI tool FraudGPT follows in WormGPT's footsteps. It is designed to create sophisticated phishing attacks and build hacking tools, and it is advertised on dark web marketplaces and in Telegram channels.


FraudGPT


The FraudGPT advertisements started on July 22, 2023, which the community assumes is the launch date of the new bot. The bot's creator, CanadianKingpin, opened a Telegram channel on June 23, 2023, where he introduced himself as a verified vendor on many dark web marketplaces. He advertised his hacking activities and hinted that he wanted to use the channel to offer his services instead of relying on dark web marketplaces full of exit scams.


FraudGPT's creator, CanadianKingpin, advertised the tool by writing:


"If your [sic] looking for a Chat GPT alternative designed to provide a wide range of exclusive tools, features, and capabilities tailored to anyone's individuals with no boundaries, then look no further!"

According to cybersecurity firm Netenrich, FraudGPT has been circulating since at least July 22, 2023. Its monthly subscription fee is $200, with other options of $1,000 for six months and $1,700 for a year.


CanadianKingpin also claimed that the tool could be used to write malicious code, create undetectable malware, and find leaks and vulnerabilities, and added that FraudGPT has over 3,000 confirmed sales and reviews. The language model used to develop the tool is currently unknown.


Dark Side of the AI Coin


These services are known as phishing-as-a-service (PhaaS) and can act as a launchpad for malicious actors. Aware of the potential dangers of such applications, the US government has been working to draft AI regulation in an effort to rein in the industry.


Netenrich security researcher Rakesh Krishnan commented on FraudGPT:


"While organizations can create ChatGPT (and other tools) with ethical safeguards, it isn't a difficult feat to reimplement the same technology without those safeguards. Implementing a defense-in-depth strategy with all the security telemetry available for fast analytics has become all the more essential to finding these fast-moving threats before a phishing email can turn into ransomware or data exfiltration."

