“Leaders will need to embody a mindset of constant learning and adaptability, staying ahead of technological advancements and integrating AI into their strategic decision-making processes. Agility in learning and proactive in practice will be key drivers of success.” Workforce 2024 Global Insights Report, Korn Ferry
Boards and CISOs have different opinions about their biggest cybersecurity threats. The Cybersecurity: The 2023 Board Perspective report points out that the best way to communicate the importance of cybersecurity and AI risks is to focus on the business consequences.
Communications imperative
How can we effectively communicate the importance of cybersecurity and AI risks to non-technical, non-security board members in a way that resonates with them?
“For non-technical people, the most effective way to communicate the importance of cybersecurity is by providing real-world examples,” suggested Georgia Edell, a consultant at Frost & Sullivan. She posits that seeing real cases can help illuminate and reinforce the severity and potential consequences.
“Highlighting the business impact is important as is simplifying the story to communicate effectively. The exact technical details become less important in this case; it becomes more critical to convey the possibility that it happens to you. Awareness is a required first step,” believes Edell.
Continuous learning
In a future of persistent threats, evolving regulation, the certainty of unpredictability, and technological innovation spiralling faster than any one person or organisation can effectively harness with predictable outcomes, leaders cannot afford to stand still. “Continuous learning is crucial for leaders to remain effective in a rapidly changing business environment,” suggested Korn Ferry.
Recognising this, the next question is: how?
Edell believes that knowledge and understanding are often best achieved by absorbing small pieces of information continuously. She asserts that constant communication, knowledge sharing, and short updates can provide easily ingestible highlights.
“People also need time to reflect on new pieces of information; providing continuous updates will position board members to have a greater understanding in the context of how the technology has evolved and can consider any implications as a part of the whole picture of the organisation,” she continues.
For his part, Dan Rapp, global VP of AI/ML at Proofpoint, predicts that as a transformational technology, AI will impact all aspects of business over the next few years. He adds that it is essential, probably existential, that all employees, executives and directors are AI literate. “AI literacy and objectives targeting the leveraging of AI in each business function’s operations should be key business objectives that each executive should be held accountable for,” says Rapp.
Balancing expectations with realities
Proofpoint’s Rapp acknowledges that the challenge will get harder as enterprises integrate generative AI into their workflows.
“This is made even more challenging with the current economic situation, with 51% of CISOs agreeing that this has hampered their ability to make business-critical investments,” he continues.
With cybersecurity budgets remaining flat at best, more needs to be done with less. However, one of the advantages we see with AI is its ability to augment and accelerate workflows. Any cost analysis should carefully take increased productivity into account.
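To make that point concrete, a back-of-the-envelope model can fold productivity gains into the cost comparison. The sketch below uses purely hypothetical figures for licence cost, analyst hours saved, and hourly rate; every number is an illustrative assumption, not a benchmark from any vendor or report.

```python
# Hypothetical cost comparison: AI-assisted vs. traditional workflow.
# All figures are illustrative assumptions, not vendor or industry data.
ai_licence_per_year = 120_000    # assumed annual cost of an AI-powered tool
analyst_hourly_rate = 80         # assumed fully loaded cost per analyst hour
hours_saved_per_week = 60        # assumed triage/response hours saved team-wide
weeks_per_year = 48

productivity_gain = hours_saved_per_week * weeks_per_year * analyst_hourly_rate
net_impact = productivity_gain - ai_licence_per_year

print(f"Estimated productivity gain: ${productivity_gain:,.0f}/year")
print(f"Net impact after licence cost: ${net_impact:,.0f}/year")
```

Even a rough model like this reframes the conversation from what a tool costs to what the workflow saves.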
Frost’s Edell predicts that balancing AI adoption and budget constraints will become increasingly important. She suggests that by leveraging existing cybersecurity solutions and infrastructure, organisations can support AI adoption while minimising expenses. And when considering new solutions, “it’s important to recognize that many vendors are including and heavily publicizing their AI capabilities,” she adds.
“Organisations need to be sure that the tools they invest in are correct for their organisation and business goals. This should weigh initial investment, maintenance, risks and future roadmap with the potential benefits.” Georgia Edell
Edell suggests one way to approach the investment is through risk-based budgeting, prioritising the allocation of budget based on the security risk level. “In this way, organisations can act within their means, improving and implementing incrementally. Organisations need to act now, but it’s not realistic to do everything at once,” she elaborates.
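As a rough illustration of what risk-based budgeting can look like in practice, the sketch below scores each security domain by likelihood times impact and allocates a fixed budget in proportion. The domains, scores, and budget figure are hypothetical placeholders; a real exercise would draw them from the organisation’s own risk register.

```python
# Minimal risk-based budgeting sketch: allocate spend in proportion to
# likelihood x impact. Domains, scores, and the budget are hypothetical.
total_budget = 1_000_000  # assumed annual security budget

# (domain, likelihood 1-5, impact 1-5) -- placeholder risk-register entries
risk_register = [
    ("Email and phishing defence", 5, 4),
    ("Identity and access management", 4, 5),
    ("Cloud workload protection", 3, 4),
    ("AI model and data governance", 3, 3),
]

scores = {domain: likelihood * impact for domain, likelihood, impact in risk_register}
total_score = sum(scores.values())

for domain, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    allocation = total_budget * score / total_score
    print(f"{domain:35s} risk={score:2d}  budget=${allocation:,.0f}")
```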
Aligning ethical principles with regulatory requirements
According to Edell, organisations must consider their own business and establish a framework for AI use. As for the how, she suggests that the framework should be developed in consultation with all stakeholders, with regular assessments incorporated to ensure continued efficacy and compliance with regulations and established ethical principles.
“Transparent and regular communication to the board by providing clear documentation on how AI systems process data and make decisions helps to keep them informed and engaged in oversight responsibilities,” elaborates Edell.
Rapp suggests establishing a framework and developing guidelines around AI use, collection, and innovation as the first step organisations need to take. He further suggests that this framework should focus on fairness, transparency, accountability, and privacy – elements essential to aligning with ethical principles and regulatory requirements.
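One lightweight way to operationalise such a framework is to capture the guidelines as a checklist that every proposed AI use case is reviewed against. The sketch below is a hypothetical structure built around the four pillars Rapp names; the questions and the review helper are illustrative assumptions, not a formal standard.

```python
# Hypothetical AI-use checklist organised around the four pillars Rapp names.
# The pillars come from the article; the questions are illustrative assumptions.
AI_USE_CHECKLIST = {
    "fairness": "Have model outcomes been reviewed for bias and disparate impact?",
    "transparency": "Is it documented what data the system processes and how decisions are made?",
    "accountability": "Is there a named owner and a human-override path for the system?",
    "privacy": "Is data collection limited to the stated purpose, retained briefly, and access-controlled?",
}

def review_use_case(name: str, answers: dict[str, bool]) -> bool:
    """A use case passes only if every pillar's question is answered 'yes'."""
    passed = all(answers.get(pillar, False) for pillar in AI_USE_CHECKLIST)
    print(f"{name}: {'approved' if passed else 'needs remediation'}")
    return passed

review_use_case("GenAI email triage pilot",
                {"fairness": True, "transparency": True,
                 "accountability": True, "privacy": False})
```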
“To communicate this need to boards, CISOs must focus on the positive outcomes that this framework will promote,” he posits. “Examples include building trust with customers, partners, and regulators, as well as future-proofing the business by positioning the company as one that advocates for ethical AI in the long run.”
Facing up to the realities
Asked how organisations can leverage AI to address complex cybersecurity challenges and transform their security workflows while maintaining a pragmatic, business-focused approach, Rapp says AI can analyse vast datasets to detect anomalies that may indicate security breaches or unusual behaviour, and even handle monotonous, repetitive security tasks, ensuring that cybersecurity best practices are consistently applied without the risk of human error, boredom, or negligence.
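As a minimal sketch of the kind of anomaly detection Rapp describes, the example below trains an Isolation Forest on synthetic login telemetry and flags outliers for analyst review. The features, thresholds, and contamination rate are illustrative assumptions; a production pipeline would be tuned on the organisation’s own data.

```python
# Minimal anomaly-detection sketch on synthetic login telemetry.
# Features and parameters are illustrative assumptions, not a product recipe.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" logins: [hour of day, failed attempts, MB downloaded]
normal = np.column_stack([
    rng.normal(13, 3, 1000),   # mostly business hours
    rng.poisson(0.2, 1000),    # rare failed attempts
    rng.normal(50, 15, 1000),  # typical download volume
])

# A few suspicious events: odd hours, many failures, large transfers
suspicious = np.array([[3, 9, 900], [2, 12, 1200], [4, 7, 650]])
events = np.vstack([normal, suspicious])

model = IsolationForest(contamination=0.01, random_state=0).fit(events)
flags = model.predict(events)  # -1 = anomaly, 1 = normal

print(f"Flagged {np.sum(flags == -1)} of {len(events)} events for analyst review")
```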
In the 2024 Voice of the CISO report, 86% of CISOs in Singapore say they are already looking to deploy AI-powered solutions within their organisations.
“CISOs should focus on illustrating cost savings or revenue gains due to AI implementation versus traditional approaches to prove how AI has improved threat detection rates or faster response times, ultimately resulting in reduced downtime and improved operational efficiency.” Dan Rapp
Edell agrees that cybersecurity is becoming more complex. She observes that AI is being leveraged to create more powerful and insidious attacks. “That’s one of the reasons we need to leverage AI in the security effort too,” she notes. “AI-powered solutions can be very impactful for supporting security teams.”
“Organisations should be leveraging AI in the places that make sense for them, in terms of the goals and parameters of their own business. By deploying AI-powered solutions in high-impact areas, organisations can begin to transform their security workflows and establish a more robust security posture.” Georgia Edell
In 2025, CISOs and boards in Asia must bridge their perspectives on cybersecurity, prioritising clear communication and collaboration. By aligning on business impacts, leveraging AI for enhanced security, and committing to continuous learning, they can transform challenges into opportunities. A unified approach will not only fortify defences but also drive innovation, ensuring resilience in an ever-evolving digital landscape. Together, they can lead the charge toward a safer, more secure future.