John Meah, a former cybersecurity consultant who is now a freelance writer and author, wrote on Techopedia that beyond the potential benefits of artificial intelligence (AI) in cybersecurity defence, the very same technology can be used by threat actors for everything from evading security measures to automating scans for weaknesses to executing attacks directly.
The 2021 MIT Technology Review Insights report, Preparing for AI-enabled cyberattacks, posited that AI can be used to impersonate friendly correspondents and launch damaging ransomware attacks.
Heng Mok, CISO for APJ at Zscaler, confirms this, adding that AI tools and capabilities have enabled cybercriminals to launch attacks with greater ease, automation and sophistication. AI has significantly increased adversaries’ efficiency and scalability, he notes; as a result, they are better able to single out vulnerable organisations and generate the right reporting for their customers.
Conceding the threat, Jeff Castillo, senior regional director with Infoblox SEA, warns that as AI and hacking increasingly intersect, there is an urgent need for organisations to bolster their cybersecurity measures. He also underscores the critical importance of staying ahead of, and being proactive in the face of, evolving cyber threats.
How threat actors use generative AI
Asked how threat actors are leveraging generative AI to create new and sophisticated malware, Castillo says they can use generative AI (GenAI) to produce new malware variants quickly and effortlessly by tweaking coding styles.
He adds that these variants are then deployed via malware-as-a-service platforms, which offer a range of malicious services including malware dissemination, phishing, and scams. “As they are engineered specifically to evade existing cybersecurity measures, these variants pose significant challenges for detection and effective security response,” he continues.
Mok notes that the dark web is now awash with tools that leverage generative AI without the ethical guardrails built into systems such as ChatGPT.
With generative AI, an adversary can input prompts that leverage the historical data captured in dark web AI tools to identify weaknesses or vulnerabilities.
For example, an adversary can simply input a query such as, “Show me vulnerabilities for all VPNs for [a given organisation] in a table format.” The next command could be, “Build me exploit code for this VPN,” and the task of building a piece of malware that uses the exploit to establish a beachhead in an organisation’s environment becomes significantly faster and easier.
GenAI presents a bounty for threat actors
Castillo claims that GenAI is now readily available to the masses, lowering the barriers to entry into the world of hacking while intensifying the attacks mounted by more experienced hackers. Beginners can use GenAI to create highly convincing fakes, such as realistic images, text, or even videos, which can be exploited.
“Novice hackers can leverage pre-trained learning models and tools accessible online to generate convincing phishing emails or fake social media profiles. As for the experts, they can easily abuse GenAI to scale their operations on a larger target group, or even craft sophisticated deepfake videos for targeted disinformation campaigns.”
Jeff Castillo
Conceding GenAI’s availability as a tool for threat actors, Mok says the key is asking the right questions. Before the proliferation of GenAI, hackers would invest considerable time in developing their tradecraft, but tools like WormGPT have reduced this time.
“As GenAI removes the time and resources necessary to train skilled hackers and facilitates ease of exploitation regardless of hackers' proficiency or resources, it becomes increasingly crucial for CISOs and security teams to adapt accordingly and stay ahead in the battle of AI,” he continues.
GenAI can customise attacks
Mok posits that because GenAI learns patterns from historical data and generates new content, with the right prompts it can be used to tailor cyberattacks that exploit specific weaknesses or vulnerabilities in target systems or individuals.
“This technology empowers hackers to generate personalised and polymorphic malware variations adept at eluding signature-based detection systems and traditional security measures, thus raising the bar for threat detection and mitigation,” he elaborates. Examples include creating fake personas, convincing phishing campaigns/emails, and automated bots.
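To see why such polymorphic variants elude signature-based detection, consider a minimal sketch in Python. The sample bytes and the hash-based signature database below are purely illustrative assumptions, not any real engine’s format; the point is simply that any byte-level mutation yields a new hash.

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known malware samples.
known_signatures = set()

original_sample = b"payload v1: connect; escalate; exfiltrate"
known_signatures.add(hashlib.sha256(original_sample).hexdigest())

def is_flagged(sample: bytes) -> bool:
    """Flag a sample only if its hash matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in known_signatures

# A polymorphic variant: behaviour unchanged, a single token tweaked.
variant = original_sample.replace(b"v1", b"v2")

print(is_flagged(original_sample))  # True  - exact match in the database
print(is_flagged(variant))          # False - the trivial mutation evades the signature
```

This is why defenders increasingly pair signatures with behavioural and anomaly-based detection, as discussed below.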
Castillo illustrates this with Savvy Seahorse, a threat actor that executed campaigns convincing victims to create accounts on fake investment platforms and deposit funds, which were then transferred to a bank in Russia.
“The attack incorporated fake ChatGPT and WhatsApp bots that provided automated, customised, and believable responses to users, urging them to enter personal information in exchange for alleged high-return investment opportunities,” he reveals.
“AI enables criminals to blur the distinction between scams and genuine opportunities, rendering them highly convincing to unsuspecting individuals.”
Counter-AI strategies for CISOs and security teams
“CISOs and security teams are fighting fire with fire and leveraging AI to fight back against AI,” says Mok. “AI-powered security solutions detect anomalies, identify patterns, and analyse large volumes of data to detect and respond to threats in real-time.”
He points out that machine learning algorithms can continuously adapt by learning from past attacks, bolstering their ability to effectively mitigate emerging threats. This, he posits, empowers security teams not only to allocate their resources towards addressing more complex and sophisticated threats but also to train more junior staff, who can use natural language queries to apply policy and fix gaps in the environment.
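As a rough illustration of the anomaly detection Mok describes, the sketch below trains scikit-learn’s IsolationForest on synthetic network-flow features; the feature set, traffic values, and contamination rate are all assumptions for illustration, not any vendor’s actual telemetry or model.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic baseline traffic: [bytes_sent, bytes_received, connections_per_minute]
normal_traffic = rng.normal(loc=[5_000, 20_000, 30],
                            scale=[500, 2_000, 5],
                            size=(1000, 3))

# Fit an unsupervised model on the baseline; no labelled attacks required.
model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_traffic)

# A suspicious flow: heavy outbound transfer, little inbound, many connections.
suspicious = np.array([[500_000, 1_000, 300]])

print(model.predict(suspicious))          # [-1] -> flagged as an anomaly
print(model.predict(normal_traffic[:1]))  # [1]  -> consistent with the baseline
```

Retraining such a model on fresh traffic is one concrete form of the continuous adaptation Mok refers to.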
“By utilising AI capabilities, organisations can effectively complement their defence mechanisms against these evolving threats at the same speed,” he suggested.
Combating GenAI-driven threats
Mok says that to safeguard against generative AI-driven threats, defenders should adopt a “never trust, always verify” (zero trust) mindset, asserting that no entity (user, app, service, or device) should be trusted by default.
He adds that this mindset should extend to the human element, with organisations developing a culture of “always verify”; a minimal sketch of such a check appears after the quotes below.
“Organisations need to adapt to this new and continuously evolving kind of threat, by implementing defence-in-depth principles, consistently conducting threat modelling specific to this attack vector, as well as identifying gaps within their technology infrastructure and operational workflows.”
Heng Mok
“Employing a comprehensive and holistic security framework rooted in the principle of least privilege access, which assumes that threats will come from both internal and external sources, is paramount for effectively combating these sophisticated threats,” continues Mok.
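What an “always verify” check might look like per request can be sketched in a few lines of Python. The Request fields, POLICY table, and authorize function here are hypothetical stand-ins for the identity provider, device-posture agent, and policy engine a real zero-trust deployment would consult.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    mfa_verified: bool        # identity freshly verified (e.g. via MFA)
    device_compliant: bool    # device posture meets policy
    resource: str
    action: str

# Least-privilege grants: each user gets only the actions they need.
POLICY = {
    ("alice", "payroll-db", "read"): True,
}

def authorize(req: Request) -> bool:
    """Never trust, always verify: every request re-checks identity,
    device posture, and an explicit least-privilege grant."""
    if not req.mfa_verified:
        return False  # identity not verified for this request
    if not req.device_compliant:
        return False  # device fails posture checks
    return POLICY.get((req.user, req.resource, req.action), False)  # default deny

print(authorize(Request("alice", True, True, "payroll-db", "read")))   # True
print(authorize(Request("alice", True, True, "payroll-db", "write")))  # False: no grant
```

The default-deny lookup is the least-privilege principle Mok describes: access exists only where it has been explicitly granted.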
Echoing the sentiment, Castillo says defenders, including cyber authorities and security organisations, need to stay one step ahead at all times. Early detection is key to prevention, which is far more effective than responding to threats only after they occur.
Simultaneously, these defenders must advance their solutions in tandem with the rapid proliferation of AI. He suggests organisations look for vendors who combine extensive experience in threat identification with AI technology for real-time network visibility and control.
“These AI-powered solutions enable early issue identification, allowing for swift intervention to mitigate potential threats,” he comments.