Artificial intelligence (AI) is not only changing how businesses operate, but also how they should be running their cybersecurity processes, including the need to never overlook the basics.
Recent security incidents highlight a fundamental truth that, the approach to cybersecurity must evolve as AI reshapes the industry, said Edward Chen, deputy chief executive of national cyber resilience for Cyber Security Agency of Singapore, who was speaking at Palo Alto Networks’ Ignite on Tour conference in Singapore this week.
Chen noted that whitehat hacker Gal Nagli had identified the first publicly known critical vulnerability in ChatGPT in March 2023. Nagli’s team at Wiz Research this January also uncovered a vulnerability in China’s GenAI (generative AI) platform DeepSeek.
Pointing to the DeepSeek vulnerability, Chen noted that the fault stemmed from unsecured open ports in the Chinese GenAI platform’s database that could have provided unauthorised access.
“This wasn’t some complex flaw hidden deep within an advanced AI model. Instead, it was a basic security misstep,” he said. “The kind we often overlook in our rush to innovate.”
It underscores the need to focus on the technologies needed to combat threats and safeguard systems, according to the Singapore government official.
While large-scale AI-powered autonomous attacks had yet to occur, he said recent developments in phishing and malware were “troubling”.
For instance, more than 10% of phishing email in Singapore last year contained AI-generated content, he noted, adding that these often were highly personalised and contextually relevant messages that were difficult to distinguish from legitimate ones.
He cited further research from Palo Alto Unit 42, which revealed that LLMs could be used to generate malware variants capable of evading detection 88% of the time.
With new AI threats comes need for AI as defence
Increasing AI adoption also would create a larger and more complex attack surface to secure. “Every AI-powered system, or device, is now a potential attack target,” Chen said.
He added that AI also was creating new classes of attacks, including prompt injection attacks, which manipulate AI models with deceptive inputs so they would leak sensitive data or execute harmful commands.
While efforts are made to update tools, such as with input sanitisation and adversarial training, the industry still is in the early stages of understanding and fully addressing such threats, he said.
“At the same time, we cannot ignore traditional attack vectors -- misconfigurations, unpatched systems, and weak credentials. These remain the easiest ways for attackers to break in,” Chen said, noting that DeekSeek’s vulnerability was the result of a simple security misconfiguration, rather than an AI-specific attack. “This tells us two things: First, we must secure AI. Second, we must not forget the basics, too.”
Chen said: “We must fight AI with AI. After all, you don’t bring a knife to a gunfight. If cybercriminals are adopting AI-driven tactics, we must do the same.”
AI, he said, already was transforming malware analysis, with a November 2023 report from VirusTotal stating AI-driven detection could identify complex malicious scripts with up to 70% greater accuracy than traditional methods.
Agentic AI also could support key functions, including threat analysis and real-time decision making, and take defensive actions, he added. “While human oversight remains essential, AI-driven autonomous response systems will be game-changers,” he said.
Need for data to combat cyber threats
So, too, will data, which plays an important role in enhancing detection and defence capabilities, according to Simon Green, Palo Alto’s Asia-Pacific Japan president.
He touted the security vendor’s 20-year history, during which it had been collecting data on cyber threats and activities. It continues to build on this database, adding eight to nine petabytes of data each day, Green said.
With 90,000 customers worldwide, Palo Alto analyses 500 billion events each day, blocking 11.3 billion malicious transactions and detecting more than 2.5 million new threats.
Cybersecurity activities are growing at scale and the volume will further accelerate with AI, Green noted. It underscores the need for tools to evolve to have real-time and automation capabilities, powered by AI and data, so companies can manage and keep pace amid growing cyber threats, he added.
Palo Alto’s large database, for one, fuels its machine learning and AI models, feeding them with contextual data and enabling the delivery of the vendor’s Precision AI framework, he said.
The threat detection and prevention system uses deep learning and GenAI to analyse and learn from historical as well as real-time threat data. It predicts attack patterns based on nuanced indicators, creating predictive models that identify anomalies, and generates insights to enhance decision making, according to Palo Alto.
AI now is used to orchestrate attacks, making to more difficult for businesses to detect legitimate transactions, and also have accelerated the scale of attacks, said Steven Scheurmann, Palo Alto’s Asean regional vice president, in an interview with FutureCISO on the sidelines of the event.
Echoing Green’s message about the role data plays, Scheurmann said context-based datasets also will be critical in detecting anomalies specific to verticals, such as financial services and healthcare.
It highlights the need for industry-specific LLMs (large language models) and AI models built on data that carry contextual information, he said. This will enable the systems to better detect unusual patterns and identify abnormal user behaviours, he added.
The added layers of context-based analytics will be critical in attacks using deepfakes, for instance, where requests for fund transfers from a particular company executive during that time of day are “normal” behaviour.
Amid the focus on AI and automation, though, organisations should not overlook the basics in cybersecurity, Scheurmann stressed. Misconfiguration, poorly managed credentials, and unpatched vulnerabilities remain common issues, he said.
These fundamentals are paramount even before companies begin adopting emerging technologies, such as GenAI, he added.