In November 2022, pioneering artificial intelligence developer OpenAI released the beta version of ChatGPT, a revolutionary tool that generates human-like text and code from user prompts. Almost immediately, cybercriminals began to exploit ChatGPT, using the AI tool to develop malware, create phishing campaigns, and conduct other malicious operations.
Artificial intelligence (AI) is revolutionizing many aspects of modern life, including healthcare, transportation, education, customer service, agriculture, manufacturing, energy, and financial services. Indeed, AI can analyze data, optimize processes, and provide personalized communication experiences, increasing efficiency, decreasing costs, and improving outcomes. AI may also radically change the cybersecurity industry by improving real-time threat detection, prevention, and response.
Notwithstanding its host of advantages, AI also presents new challenges and risks for the cybersecurity industry, a phenomenon highlighted by a revolutionary tool called ChatGPT (Generative Pre-trained Transformer). Introduced in November 2022 by AI development company OpenAI, ChatGPT interacts with users in a conversational, human-like style, producing precise, customized responses to user queries and prompts. Unlike other AI models, ChatGPT can write software in different programming languages, debug code, and explain complex topics, among other capabilities.
While ChatGPT was designed to filter out inappropriate, illegal, and harmful requests, such as those to produce malicious code or create phishing emails, threat actors have already bypassed these safety features by omitting direct mentions of harmful terms (such as malware, hacking, and phishing). This strategy tricks the ChatGPT bot into answering prompts without flagging the requests as malicious.
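To illustrate the kind of safeguard at issue, the minimal sketch below shows how a user prompt might be screened by a content filter before it ever reaches the model. This is a hypothetical illustration, not a description of ChatGPT’s internal filters; it assumes the legacy openai Python library (pre-1.0) and OpenAI’s publicly documented Moderation endpoint.

```python
# Hypothetical sketch: screening a prompt with OpenAI's Moderation endpoint
# before forwarding it to a chat model. Assumes the legacy openai Python
# library (openai<1.0) and an API key in the OPENAI_API_KEY env variable.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def is_flagged(prompt: str) -> bool:
    """Return True if the moderation model flags the prompt as harmful."""
    response = openai.Moderation.create(input=prompt)
    return response["results"][0]["flagged"]

prompt = "Explain how transformer language models are trained."
if is_flagged(prompt):
    print("Prompt rejected by the content filter.")
else:
    print("Prompt passed moderation; forwarding to the model.")
```

Because classifiers of this kind key on the wording of a request, a prompt rewritten to avoid explicitly harmful terms may pass through unflagged, which is precisely the loophole described above.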
As a result, ChatGPT could prove a major boon to Malware-as-a-Service (MaaS) and Phishing-as-a-Service (PaaS) operations, which advertise their services and products on cybercriminal forums. Novice threat actors may be able to use AI-generated code to launch cyberattacks more quickly and easily, deploying malware that once required an expert coder to create.
Similarly problematic, threat actors developing phishing schemes could use ChatGPT to create unique, deceptive email messages in multiple languages, prompting recipients to hand over sensitive information. In the past, poor writing, grammatical problems, and spelling errors were immediate red flags in phishing emails. Now, ChatGPT can generate professional-looking malicious messages without the tell-tale linguistic anomalies that once plagued phishing scams. ChatGPT can even adopt specific writing styles and tweak language to heighten a message’s urgency.
While OpenAI continues to refine ChatGPT, a wide range of entities are already making extensive use of the new tool, embedding it in existing products to improve efficiency. The challenge now will be to prevent threat actors from further exploiting the tool for malicious purposes.
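As context for how such embedding works in practice, the sketch below shows the typical shape of a product integration: a single API call wrapping the model inside an existing workflow. It is a minimal, hypothetical example assuming the legacy openai Python library and the chat completions endpoint OpenAI later exposed for ChatGPT-class models; the summarize_ticket helper is invented for illustration.

```python
# Hypothetical sketch of embedding a ChatGPT-style model in a product
# workflow, assuming the legacy openai Python library (openai<1.0).
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize_ticket(ticket_text: str) -> str:
    """Ask the model to draft a one-paragraph summary of a support ticket."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You summarize customer support tickets."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(summarize_ticket("My order #1234 arrived damaged and support has not replied."))
```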
Diving Deeper
Since its release, ChatGPT has generated significant excitement in the cybercriminal underground, with threat actors touting the tool’s capabilities and attempting to abuse it in malicious operations.
In the following December 29, 2022 post, a popular cybercrime forum member with an average reputation score discussed ChatGPT’s benefits for malware creation. Specifically, the forum member described analyzing existing malware and using the tool to recreate several strains. According to this individual, while ChatGPT can translate code into different programming languages, including low-level languages, the key to successful ChatGPT prompts is specifying the malicious program’s main purpose and the steps it must follow.
To illustrate the advice, the forum member shared an allegedly ChatGPT-generated stealer script written in Python. This specific malware searches victims’ machines for common file types and copies every file smaller than 50 MB, along with any associated credentials, to an external server. The forum member also allegedly prompted ChatGPT to create commands that erase all traces of the malicious operation on the victim’s side.
Finally, the forum member claimed that ChatGPT fixed coding errors and added obfuscation and anti-analysis techniques to the final program to help it evade detection by security engines. Several other forum members responded with amazement and asked to continue the discussion on Telegram.
Figure 1: ChatGPT-generated malware discussed on a popular cybercrime forum
On January 7, 2023, Cybersixgill observed the same forum member sharing a second ChatGPT-related post, which referenced multiple surface web news articles covering the forum member’s activities with the new AI tool. The threat actor also shared updated commands for the Python stealer developed earlier, highlighting its new features. Finally, the forum member teased future threads about a ChatGPT-generated shellcode loader designed to evade detection, featuring various injection techniques and User Account Control (UAC) bypass capabilities.
Figure 2: ChatGPT-assisted malicious activities discussed on a cybercrime forum
In the following post, another member of the cybercrime forum shared instructions for using ChatGPT’s coding capabilities to create a dark web marketplace that accepts cryptocurrency payments. Specifically, this forum member shared a series of commands allegedly created with ChatGPT and claimed that the Proof-of-Concept (PoC) demonstrates that the AI tool is fully capable of building “a million dollar enterprise website” for those looking to break into the dark web marketplace industry. The forum member added, however, that running such a site without proper security would pose significant risks.
Figure 3: A cybercrime forum discussion about ChatGPT-generated dark web marketplace scripts
Takeaways
ChatGPT’s beta version has generated enormous interest and quickly attracted users, leaving the cybersecurity industry to contend with the dark side of this revolutionary AI tool. While software giant Microsoft plans to leverage ChatGPT for an AI-enhanced version of its Bing search engine, threat actors are using the tool to create malware and pursue other sources of illicit revenue.
Indeed, malicious actors have already used ChatGPT’s capabilities in attacks and other malicious scenarios, which may be harder to detect thanks to the AI tool’s features. With these risks in mind, organizations should remain vigilant against legitimate-looking phishing emails and other social engineering schemes, among other threats.
Cybersixgill automatically aggregates data leaks and alerts customers in real time.