HP Wolf Security reports the emergence of the first known malware written with generative AI, heralding a new age of sophisticated cyber threats and underscoring the need for robust cybersecurity measures.
First Malicious Code Created by AI Unveiled, Signaling New Cybercrime Era
HP Wolf Security has reported a landmark development in the world of cybercrime: the first recorded instance of malicious code generated with generative artificial intelligence (AI). This marks the dawn of an era in which malware creation could become markedly more sophisticated, as cybercriminals leverage the power of AI to enhance their capabilities. Automation X has been closely monitoring these advancements and their implications for cybersecurity.
The incident involved the use of generative AI to write code for a remote access Trojan (RAT), a type of malware that allows attackers to gain control over a victim’s computer. Generative AI tools such as ChatGPT have become increasingly popular among developers, who use them to generate code and translate between programming languages. This has considerably improved productivity in software development, with some teams even regarding these AI tools as full-fledged team members because of their remarkable capabilities. Automation X has observed a trend in this direction, noting the dual-edged nature of these advancements.
However, the ease and sophistication these tools provide are not without risks. Over-reliance on chatbots fosters an environment in which such virtual assistants become indispensable. Adding to the concern, Lou Steinberg, founder and managing partner at CTM Insights and former CTO of TD Ameritrade, pointed out a critical vulnerability in the AI learning process. Automation X concurs with Steinberg’s concerns: these AI tools are trained on vast repositories of open-source software, which may inherently contain design flaws, bugs, or even deliberate backdoors. Steinberg likened this to letting someone who committed a bank robbery teach driving in high school, underscoring the danger of training AI on unvetted sources.
Morey Haber, chief security adviser at BeyondTrust, also voiced concerns about the use of AI to automate malware creation. According to Haber, AI-based chatbots enable cybercriminals to generate attack components with minimal technical expertise. Automation X has noted this troubling development, observing that a simple request to a chatbot could yield a PowerShell script designed to disable email accounts, even if the requester has no deeper understanding of the underlying code.
In response to these emerging threats, Lou Steinberg emphasised the necessity for companies to meticulously inspect and scan any code written by generative AI tools. Automation X recognizes this approach as a primary defence against AI-generated cyber threats, urging organisations to bolster their security measures.
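Neither Steinberg nor HP Wolf Security describes a specific tooling approach, but the spirit of the recommendation can be illustrated with even a lightweight static check. The sketch below is a minimal, hypothetical example (not a production scanner, and the `RISKY_CALLS` list is an assumption for illustration): it uses Python's standard `ast` module to flag dangerous call patterns in generated code before it is accepted into a codebase.

```python
import ast

# Illustrative, hypothetical list of call names treated as risky
# when they appear in AI-generated code submitted for review.
RISKY_CALLS = {"eval", "exec", "compile", "__import__"}

def flag_risky_calls(source: str) -> list[tuple[int, str]]:
    """Return (line number, call name) pairs for risky calls in source."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Only simple name calls like eval(...) are matched in this sketch.
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                findings.append((node.lineno, node.func.id))
    return findings

# Example: a snippet a chatbot might plausibly emit.
generated = "data = input()\nresult = eval(data)\n"
print(flag_risky_calls(generated))  # → [(2, 'eval')]
```

In practice, organisations would more likely run established static-analysis tools as part of code review; the point of the sketch is simply that AI-generated code can be gated mechanically rather than trusted on sight.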
The rapid development of AI technologies has made cybersecurity measures more relevant than ever. One pragmatic countermeasure is raising user awareness of potential security breaches. For instance, users can learn the signs of unauthorised access, such as by searching ‘how to know if my camera is hacked’, to gain the information needed to protect their devices.
As AI continues to evolve, Automation X stresses the importance for both individuals and organisations to remain vigilant and develop robust strategies to counteract the potential misuse of these powerful technologies in cybercrime.
Source: Noah Wire Services