Five malware families developed with AI and analyzed by Google don't work and are easily detected

The assessments provide a strong counterargument to exaggerated narratives peddled by AI companies, many of which are seeking new rounds of venture funding, that AI-generated malware is widespread and part of a new paradigm that poses an ongoing threat to traditional defenses.

A typical example is Anthropic, which recently reported its discovery of a threat actor that used its Claude LLM to “develop, market and distribute several ransomware variants, each with advanced evasion capabilities, encryption, and anti-recovery mechanisms.” The company went on to say, “Without Claude’s help, they would not have been able to implement or troubleshoot core malware components, such as encryption algorithms, anti-scanning techniques, or internal Windows manipulation.”

ConnectWise recently said that generative AI was “lowering the entry bar for threat actors to get into the game.” The company cited a separate report from OpenAI that found 20 different threat actors using its ChatGPT AI engine to develop malware for tasks including identifying vulnerabilities, developing exploit code, and debugging that code. Meanwhile, BugCrowd said that in a survey of self-selected people, “74 percent of hackers agree that AI has made hacking more accessible, opening the door for newcomers to join the fold.”

In some cases, the authors of these reports point out the same limitations noted in this article. Google’s report Wednesday says that in its analysis of AI tools used to develop code to manage command and control channels and obfuscate their operations, “we saw no evidence of successful automation or innovative capabilities.” OpenAI said more or less the same thing. Still, these disclaimers are rarely highlighted and often downplayed in the resulting frenzy to portray AI-assisted malware as a near-term threat.

Google’s report provides at least one other useful finding. A threat actor abusing the company’s Gemini AI model was able to bypass its guardrails by posing as a white-hat hacker participating in a capture-the-flag exercise. These competitive exercises are designed to teach and demonstrate effective cyberattack strategies to both participants and spectators.

Such guardrails are built into all mainstream LLMs to prevent them from being used for malicious purposes, such as carrying out cyberattacks or encouraging self-harm. Google said it has since tuned its countermeasures to better resist such ploys.

Ultimately, the AI-generated malware that has surfaced to date is mostly experimental, and the results are not impressive. It is worth monitoring events for developments showing AI tools producing capabilities that were previously unknown. For now, however, the biggest threats continue to rely predominantly on old-fashioned tactics.
