OpenAI has revealed that, in recent weeks, its engineers have identified more than 20 cyberattacks carried out using artificial intelligence tools such as ChatGPT and other large language models (LLMs). According to the company's technicians, those responsible for these attacks are hackers from China and Iran, who have taken advantage of AI capabilities to develop and debug malware, in addition to carrying out other malicious activities. This situation exposes one of the most troubling sides of AI use: its potential to facilitate cybercrime.
First attacks: Chinese activists and 'SweetSpecter'
The first ChatGPT-related cyberattack was orchestrated by Chinese activists and targeted the governments of several Asian countries. The attack used a spear-phishing technique known as 'SweetSpecter', based on sending a ZIP archive containing a malicious file. When the victim downloads and opens the file, their system becomes infected, giving the attackers access to the computer. OpenAI engineers found that this malware was developed using multiple ChatGPT accounts, both to write the malicious code and to discover vulnerabilities in the targeted systems.
Attacks in Iran: 'CyberAv3ngers' and 'Storm-0817'
Another prominent attack was carried out by an Iranian group known as 'CyberAv3ngers', which used ChatGPT to exploit vulnerabilities in macOS devices and steal user passwords. Likewise, another Iranian group, dubbed 'Storm-0817', used artificial intelligence to develop malicious software targeting Android devices. This malware was capable of accessing contact lists, call logs and browser history, seriously compromising users' privacy.
OpenAI has stressed that although these attacks were carried out using ChatGPT, the methods employed are neither new nor innovative. The Chinese and Iranian hackers leveraged familiar techniques to create the malware, indicating that while AI was useful in simplifying the process, it did not produce fundamentally new malware variants. Even so, the ease with which harmful tools can be created using artificial intelligence is a disturbing reminder of what AI can do in malicious hands.
OpenAI's response and future challenges
In the face of these attacks, OpenAI has assured that it is working to improve its systems and prevent its technology from being used for malicious purposes. The company has created specialized security teams that share their findings with other companies and the wider technology community to help stop these cyberattacks. Even so, OpenAI emphasizes that the responsibility does not fall on it alone: other companies that develop generative AI models must also take measures to ensure security and prevent these tools from being exploited by cybercriminals.