Gartner says the mass availability of generative AI became a top concern for enterprise risk executives in the second quarter of 2023: it was cited by 66% of the 249 senior risk executives surveyed, just one point behind third-party viability and ahead of financial planning uncertainty.
Ran Xu, a research director in the Gartner Risk & Audit Practice, says this reflects both the rapid growth in public awareness and usage of generative AI tools and the breadth of potential use cases, and therefore potential risks, that these tools engender.
Since GenAI going mainstream is now inevitable, perhaps risk managers can use the very same technology to mitigate the risks it introduces.
Asked how GenAI is used as a weapon to penetrate an organisation, Ramprakash Ramamoorthy, director of research at ManageEngine, starts by explaining the mechanics. Data leaked into large language models (LLMs) can be weaponised: attackers who traditionally wrote phishing emails by hand can now produce highly professional ones with generative AI.
"Content is so naturally generated that it is extremely difficult for privacy-aware folks to distinguish between an email generated and a legit email," he explained. He suggested one way to mitigate AI-generated threats is to deploy AI tools.
"For example, using continuous user and entity authentication ensures original user and not random sources. Continuously raising employee awareness can also help to prevent business data from getting leaked," he commented.
Managing the uncontrolled proliferation of GenAI
Ramamoorthy believes that the starting point is awareness and education. "Firstly, vet if there is any Personally Identifiable Information (PII) that is leaving the organisation's ecosystem into the generative engine," he went on.
"Secondly, have an enterprise-ready LLM instead of a free LLM model where they don't use your data for training the model. Support tickets, for example, have a lot of sensitive customer information and sharing that with a generative AI can cause problems because you don't know what the privacy policy of a consumer LLM is," he cautioned.
Guardians of enterprise GenAI
FutureCIO asked Ramamoorthy whom the CISO should engage within the organisation to better understand the risks of GenAI, to educate stakeholders so they can use the technology for productivity gains, and to ensure they abide by policies that protect the enterprise against possible attacks.
He responded by recommending a risk matrix that marks use cases as high, medium, or low priority and identifies the risks associated with each.
"Take the high-priority and low-risk use cases for a controlled experimental approach, using trials and errors, where you slowly roll out the use of generative AI tools into your business processes," he explained. "Next is to identify the right vendor and right LLM model that is specifically tuned to your workloads."
Update security frameworks to be AI-aware
Ramamoorthy opines that a lot of security practices have traditionally been based on statistical thresholds, which makes it easy for attackers to thrive just under those thresholds.
"Now with the power of generative AI, you might be able to get a summarisation of incidents and a combination of external information like incident timeline and potential root cause to complement your internal infrastructure. The combination of internal security posture versus external security threat database will be potent and that is where I think AI can really step up your AI security game."
Ramprakash Ramamoorthy
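One way to picture that internal-versus-external combination is sketched below: join an internal incident timeline with matching entries from an external threat feed, then hand the merged context to an LLM for summarisation. The event data, feed format, and final LLM hand-off are placeholders, not any specific product's API.

```python
from datetime import datetime

# Internal telemetry: what your own monitoring saw, in time order.
internal_events = [
    (datetime(2023, 9, 1, 2, 14), "Unusual login from new ASN"),
    (datetime(2023, 9, 1, 2, 20), "Mass mailbox-rule creation"),
]
# External threat intelligence, keyed on indicators (format is invented).
external_intel = {"new ASN": "ASN recently linked to phishing campaigns"}

def build_incident_context(events, intel):
    """Merge internal events with any matching external enrichment."""
    lines = []
    for ts, desc in sorted(events):
        note = next((v for k, v in intel.items() if k in desc), "")
        suffix = f" [intel: {note}]" if note else ""
        lines.append(f"{ts:%Y-%m-%d %H:%M} {desc}{suffix}")
    return "\n".join(lines)

context = build_incident_context(internal_events, external_intel)
prompt = f"Summarise this incident timeline and suggest a root cause:\n{context}"
print(prompt)  # send to whichever LLM endpoint the organisation has approved
```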
Futureproofing security frameworks
Given that artificial intelligence, and GenAI in particular, is still an emerging technology, can CISOs hope to enforce security frameworks that adapt to new threats, including those that arise from the use of GenAI?
According to ManageEngine's Ramamoorthy, the right way to do this is to have the right mix of traditional security tools and AI-powered ones. "Have them run in tandem with each other so there is no chance of any security vulnerability creeping into the system," he suggested.
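One plausible reading of running them in tandem, sketched below with illustrative thresholds and features: evaluate each event with both a deterministic rule and a probabilistic score, and alert if either fires.

```python
FAILED_LOGIN_LIMIT = 5  # the classic static threshold

def rule_based_alert(failed_logins: int) -> bool:
    """Deterministic check: fires only above the fixed threshold."""
    return failed_logins > FAILED_LOGIN_LIMIT

def model_based_alert(anomaly_score: float, cutoff: float = 0.8) -> bool:
    """Probabilistic check: anomaly_score would come from an ML model
    scoring the same event on behavioural context."""
    return anomaly_score > cutoff

def should_alert(failed_logins: int, anomaly_score: float) -> bool:
    # An attacker staying just under the static threshold (4 attempts,
    # odd hours, new device) can still trip the probabilistic check.
    return rule_based_alert(failed_logins) or model_based_alert(anomaly_score)

print(should_alert(failed_logins=4, anomaly_score=0.91))  # True
```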
He goes on to add that it is also important to educate the security team about the pros and cons of using an AI system as organisations pivot from a deterministic system to a probabilistic system.
"It's important for security leaders to understand the pros and cons of the tech and see where it can help them," he added.
Change may be necessary
As with many promising technologies, organisations may need to change how they study, pilot and deploy GenAI, particularly when there is little history to learn from.
On the bright side, Ramamoorthy contends that today's security process is very extendable. "Applying AI to it is straightforward," he opines. "Take a level-headed approach and decide whether you will really need the power of a huge LLM, or will you need the power of a narrow model that can solve specific use cases and specific security attacks like ransomware and malware."
He concedes that the decision depends on each individual organisation's capacity. "But I would say you don't have to start from scratch. It's more of an evolutionary approach to whatever you have right now," he added.
GenAI introduces zero-shot learning
In terms of security-specific trends to expect as AI continues to evolve, Ramamoorthy says GenAI has brought zero-shot learning to the table, where you don't have to give specific examples for the AI model to produce specific answers.
"It's able to look at questions that it has not seen before and be able to understand the context and give out an answer. That is a huge value add for the security domain," he elaborated.
He predicts: "Over the years we will see specific narrow models that are trained on a limited amount of data and can run with limited inference capabilities. We will also see how to build large language models at scale so that the entry barrier to them is minimised."
Click on the PodChat player and hear the details of Ramamoorthy's strategy to secure GenAI initiatives.
- Deception brilliance: How is Generative AI (including ChatGPT) enabling more phishing scams and malicious deepfakes in the enterprise?
- The weaponisation of Generative AI is here, or so I read. Briefly, can you describe how generative AI is being used to attack users?
- What can the CISO/CIO do to better manage the proliferation/use of GenAI by end users in the company?
- Who should the CISO engage to better counter the risks of GenAI?
- What does an updated security framework that incorporates the potential benefits and risks of AI look like?
- We are still in the very early days of GenAI as an enterprise technology. Can CISOs ever have a hope of creating and enforcing security frameworks that are able to adapt to whatever new threats come?
- Given that, as you put it, AI is pivotal, can the current generation of security tools/frameworks handle the threats that come as a result of the use of GenAI?
- As we come into 2024, what trends can we expect as far as the evolution of AI, including generative AI, as it relates to security?