ChatGPT officially came out in November and has seen anything but a lukewarm welcome. While millions of people are flocking to OpenAI's chatbot for help with everything from homework to code, cybersecurity researchers have been warning of its potential for misuse.
Researchers at Check Point Research have documented three instances on underground forums where threat actors demonstrated using the bot to generate malicious code.
The first instance is of a malware author claiming to have used ChatGPT to create a Python information-stealing script that can search for, copy and extract as many as 12 commonly used file types, including Office documents, PDFs and images.

The same author also used ChatGPT to create a Java snippet that secretly downloads PuTTY and Telnet clients onto an infected machine using Windows PowerShell.
In another instance, the researchers discovered a post on an underground forum detailing how a cybercriminal used ChatGPT to create a fully automated marketplace for trading stolen financial data, malware, drugs and ammunition, among other illegal goods.
The post also included a piece of code that uses a third-party API to fetch the latest Monero, Bitcoin and Ethereum prices as part of the market’s payment system.
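The forum post's actual code was not published, but a price fetcher of this kind is straightforward to sketch. The snippet below is a minimal, benign illustration assuming a CoinGecko-style "simple price" endpoint (the endpoint URL, JSON shape and function names here are assumptions, not the cybercriminal's code):

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint: CoinGecko's public "simple price" API is assumed;
# the forum post did not reveal which third-party API was used.
API_URL = ("https://api.coingecko.com/api/v3/simple/price"
           "?ids=bitcoin,ethereum,monero&vs_currencies=usd")

def parse_prices(payload: str) -> dict:
    """Flatten a CoinGecko-style JSON payload into {coin: usd_price}."""
    data = json.loads(payload)
    return {coin: info["usd"] for coin, info in data.items()}

def fetch_prices(url: str = API_URL) -> dict:
    """Fetch the latest USD prices for the three coins (network required)."""
    with urlopen(url, timeout=10) as resp:
        return parse_prices(resp.read().decode("utf-8"))

# Deterministic demonstration with a canned sample response:
sample = ('{"bitcoin": {"usd": 16850.0}, '
          '"ethereum": {"usd": 1250.0}, '
          '"monero": {"usd": 147.0}}')
print(parse_prices(sample))
```

Wiring such a fetcher into a payment flow is trivial, which is the researchers' point: ChatGPT lowers the bar for assembling working marketplace infrastructure.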

Last but not least, another threat actor called USDoD on a hacking forum claimed that he made his first-ever Python script using OpenAI’s bot, which could encrypt or decrypt data using the Blowfish and Twofish cryptographic algorithms.
While the report notes the script is harmless as written, it could easily be modified to encrypt a victim's files outright, effectively functioning as ransomware. This shows just how easily ChatGPT can be exploited in its current form to generate malicious code, even by relatively unskilled programmers.
In the News: Security flaw found in Okta's Auth0 JWT library allowing RCE