Jen Easterly, director of the US Cybersecurity and Infrastructure Security Agency (CISA), warns that AI is “the most powerful capability and weapon of our time”.
She was being neither coy nor flippant. Calling on software vendors to stop building insecure-by-design products that put profit over safety, she said: “So far the cost that we’ve paid for speed over security is pretty steep but not existential.” She fears that AI will be different.
Kunal Anand, CTO and CISO at Imperva, says attackers have been leveraging AI, with fraud being the most popular use right now.
“Specifically, we are seeing people leverage generative AI to solve CAPTCHAs at an impressive rate. A big change in fraud activity is coming, and that means we will also see a change in the way those types of attacks are mitigated,” he continued.
AI – a cause for concern for CISOs and CIOs
Asked whether AI-powered systems are mature enough that CISOs and CIOs need to be worried, Anand advised that everyone should start thinking about the implications for their business, and not just from a security perspective.
“If someone was using AI, how can my business be potentially defeated? How can attackers find weakness in a system and then fundamentally break that system?
“We’re really having to reconsider software design in general, as organisations tend to prioritise speed-to-market over security. It’s time to slow down and think about the threat model again, and it’s time to probably address the threat model not just from humans, but from AI capabilities out there.”
Kunal Anand
Recommendations for managing AI adoption in-house
IDC forecasts that AI-related spending in Asia/Pacific (excluding Japan) will reach US$78.4 billion by 2027. In 2023, the largest proportion of AI spend was on infrastructure provisioning (see Figure 1) with infrastructure services providers building AI systems in anticipation of the demand from end customers for AI services.
Figure 1: IDC Asia/Pacific top 10 AI use cases by total spending
As understanding of the possibilities matures, IDC posits that AI will be used to reimagine operations, improve customer experiences, and maintain a competitive edge in a rapidly changing market.
"IDC predicts AI systems will grow into an essential IT tool for businesses to improve productivity and better engagement with customers, employees, and stakeholders in Japan. The challenge will be to develop appropriate industrial use cases that have an impact on business while paying attention to security, accuracy, and ethics," says Takashi Manabe, group director for data and analytics at IDC Japan.
Anand says there are some tactical steps organisations can take. “As an example, one of the things that I did at Imperva this year was to create an AI cyber committee, where leaders of the different business units came together to discuss how AI is going to fundamentally change every single business unit in our organisation. We went on to craft terms of use and high-level policies around AI, including data protection,” he elaborated.
Addressing the ever-widening skills gap
According to the World Economic Forum’s Future of Jobs 2023 report, organisations continue to grapple with talent availability, citing the skills gap and the inability to attract talent as barriers preventing industry transformation.
Anand concurs, adding that in Asia, “We’re currently facing not only a skill shortage, but a talent gap as well, particularly in the IT and security industry.”
Citing the same WEF report, he says reskilling is now a priority for many organisations looking to address the skills shortage. “For example, in Singapore, 51% of organisations said that the #1 thing to reskill people on is AI,” said Anand.
“In addition, we can look at job profiles to ascertain the adjacent skills that cybersecurity people are looking for. If a majority of them need to adopt more knowledge on AI, for instance, we need to train and reskill people,” he concluded. “In general, organisations should also start interacting with large language models like ChatGPT to get familiar with them.”
The limitations of AI today
Perhaps reflecting the image of its creators, AI is not infallible. A McKinsey study reveals that 71% of organisations lack established policies governing employee use of AI at work. In the excitement to try GenAI, few are mitigating its most-cited risk: inaccuracy.
Figure 2: Most cited GenAI risks that organisations consider relevant
There are arguments for the idea that humans are biased while machines are not. Humans make decisions by drawing from a combination of sources: collective knowledge, established frameworks, guidelines and guardrails, and even personal bias.
It can be argued that AI, being software written and developed by humans, is also biased because it draws from a collection of knowledge. Proponents argue that if that collection is limited or biased, humans need to intervene, because accountability is the convergence of fairness, inclusivity and transparency.
Anand concedes that generative AI tools like ChatGPT can hallucinate. “That’s fine for creative writing but not in cybersecurity, because you could lose data or open up attack vectors,” he cautions.
“Even for things like code generation where people are leveraging AI for writing code, you cannot trust the quality of code created. Properly deployed, however, AI can help organisations move faster through an incident response process.”
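Anand’s caution about untrusted generated code suggests a simple guardrail: never merge or execute an AI-suggested snippet until it has passed automated checks. A minimal sketch of that idea follows; the generated snippet, function name, and test cases are hypothetical illustrations, not anything from the interview.

```python
# Minimal sketch: gate AI-generated code behind known test cases before trusting it.
# GENERATED_CODE stands in for a snippet returned by a code-generation model.

GENERATED_CODE = """
def is_valid_port(value):
    # Candidate implementation as returned by the model.
    return isinstance(value, int) and 0 < value <= 65535
"""

def vet_generated_function(source, name, cases):
    """Compile the generated source in an isolated namespace and run it
    against known input/output pairs. Returns True only if every case
    passes; any exception is counted as a failure."""
    namespace = {}
    try:
        exec(source, namespace)  # keeps the snippet out of module globals
        func = namespace[name]
        return all(func(arg) == expected for arg, expected in cases)
    except Exception:
        return False

cases = [(80, True), (65535, True), (0, False), (70000, False), ("80", False)]
print(vet_generated_function(GENERATED_CODE, "is_valid_port", cases))  # → True
```

In practice the vetting step would run inside a proper sandbox with the organisation’s full test suite, but the principle is the same: the model’s output earns trust only by passing checks a human defined.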
Kunal Anand
“Humans will never be out of the loop completely, but their roles will change over time,” he surmised.
AI that is uniquely Asian
Given the many nuances in how businesses operate in Asia, the eventual adoption of AI will likely give rise to many flavours of the same artificial intelligence embedded in day-to-day operations.
“We are in a sea of changes,” says Anand. “Networks have changed dramatically, from the move to cloud to APIs. During COVID everybody accelerated their transformation and modernisation. But people generally still don't know where their sensitive data is – if an attacker gets access to it, they wouldn’t even know what the attacker took.”
Asked for his closing thoughts on AI and its inevitable inclusion in business, Anand recommends: “I encourage organisations to not think that the controls that you have today are sufficient to carry them into 2024. Reexamine your investment across all key areas, whether it's at the network, cloud, or application workload.”
Click on the PodChat player to hear Anand's view on AI defence on both sides of the fence.
- What does AI look like when deployed as part of a cyberattack?
- Should we be concerned that AI adoption, including its variants, may be too fast for governance and security processes to keep pace with?
- Any recommendation for how to better keep AI adoption trends in line with an organisation’s ability to manage its risks?
- Given that AI will eventually be embedded, if not driving, some cybersecurity processes, how can security teams remain relevant, and of value, to the organisation?
- You are a CISO and a CTO at Imperva. What is the benefit of having both titles and how do you balance the priorities of each role?
- In 2024, what unique threats can we expect to see in Asia, and how can they be countered or mitigated?