OpenAI reveals how cybercriminals are asking ChatGPT to write malware

OpenAI’s latest report reveals how cybercriminals are exploiting ChatGPT to write malware and conduct cyberattacks, raising concerns over the misuse of AI.

Representative image created using AI

AI is being used for both good and ill, and the misuse goes well beyond deepfakes to serious criminal activity such as planning cyberattacks. OpenAI has revealed in its latest report that cybercriminals are exploiting its AI-powered chatbot, ChatGPT, to aid malicious activities. According to the report, “Influence and Cyber Operations: An Update,” cybercriminals are leveraging ChatGPT to write code, develop malware, run social engineering attacks, and carry out post-compromise operations.

As reported by Bleeping Computer, OpenAI’s report details several incidents where ChatGPT was found to be assisting cybercriminals in online attacks. Since the beginning of 2024, OpenAI has dealt with more than 20 malicious cyber operations involving misuse of ChatGPT, impacting various industries and governments in multiple countries. These cases range from malware development and vulnerability research to phishing and social engineering campaigns.

According to OpenAI, criminals used ChatGPT’s natural language processing (NLP) and code-generation capabilities to accomplish tasks that typically require significant technical expertise, lowering the skill threshold for mounting cyberattacks.

The first known case of AI-assisted attacks came to light in April 2024, when cybersecurity firm Proofpoint identified the threat actor TA547, also known as “Scully Spider,” deploying an AI-generated PowerShell loader as part of its malware campaigns. It was followed by a September report from HP Wolf Security highlighting AI-generated scripts used by cybercriminals in multi-stage infections targeting French users.

One of the most notable cases in OpenAI’s report is linked to the Chinese cyber-espionage group ‘SweetSpecter’, first documented by Cisco Talos in November 2023. SweetSpecter targeted Asian governments and even attacked OpenAI directly, sending spear-phishing emails to its employees that contained malicious ZIP files disguised as support requests. If opened, the files triggered an infection chain that deployed the SugarGh0st remote access trojan (RAT). OpenAI highlights that SweetSpecter used ChatGPT for reconnaissance and vulnerability analysis, including identifying Log4j versions vulnerable to the infamous Log4Shell exploit.

Another significant case mentioned in OpenAI’s report involves the Iranian threat group ‘CyberAv3ngers’, which is linked to the Islamic Revolutionary Guard Corps. According to the report, CyberAv3ngers used ChatGPT to find default credentials for industrial routers and programmable logic controllers (PLCs), critical components of manufacturing and energy infrastructure. The group also asked ChatGPT for help developing custom Bash and Python scripts designed to evade detection.

OpenAI has taken measures to address the growing issue by shutting down accounts involved in these operations and sharing relevant indicators of compromise (IOCs), including IP addresses and attack methods, with cybersecurity partners. Additionally, OpenAI is strengthening its monitoring systems to detect suspicious patterns that may indicate harmful behavior, with the goal of preventing further exploitation of its platform for malware development, social engineering, or hacking attempts.
