The convergence of AI adoption and escalating cyber threats demands a paradigm shift in how organisations protect data and build trust.
As AI accelerates attack sophistication—enabling automated phishing, synthetic identity fraud, and adversarial machine learning—businesses face dual challenges: hardening defences against AI-powered threats while ensuring ethical, transparent AI deployment to overcome employee scepticism.
Leaders in Singapore and across Asia must prioritise zero-trust frameworks, AI-specific security guidelines (e.g., Singapore's CSA lifecycle controls), and human-AI collaboration models to close critical trust gaps.
Organisations that invest in AI governance, workforce upskilling, and proactive threat-hunting agents will gain resilience and competitive advantage. In contrast, those failing to address AI-driven risks or employee distrust will face operational and reputational fallout.
AI-powered threats and the need for a paradigm shift in security
AI's rise has brought unprecedented capabilities to both defenders and attackers. Assaf Keren, CSO at Qualtrics, says AI-driven cyberattacks are evolving rapidly, with phishing emails becoming far more sophisticated and deepfakes emerging as a real threat vector. He explains:
"Phishing emails are really good right now because it's easy for somebody... to write a phishing email in English and Japanese and Chinese and in whatever language they want and will look really, really good... Deepfakes are becoming a thing of reality. We've been seeing both cloning and video cloning and lip-syncing." Assaf Keren
This sophistication lowers the barrier for cybercriminals, who no longer need deep technical skills to launch effective attacks. AI tools enable attackers to automate and personalise attacks at scale, creating an arms race between offensive and defensive AI capabilities that Keren predicts will intensify over the next two to five years.
Building trust through secure-by-design AI and governance
The rapid pace of AI innovation has outstripped regulatory frameworks, creating uncertainty and fear among organisations about deploying AI securely. Keren highlights the lack of clear, region-wide governance as a key source of that uncertainty.
However, he points to emerging frameworks such as the US NIST AI Risk Management Framework and Singapore's Cyber Security Agency (CSA) Guidelines on Securing AI Systems, launched in October 2024. These guidelines emphasise securing AI systems by design and default, addressing traditional cybersecurity risks and AI-specific threats like adversarial machine learning.
"If you do the basics right, then you fix 80% of the problem... authorisation management, defence against injection attacks, baseline infrastructure security... We just need to shift them around a bit to make sure that they are also encompassing AI models." Assaf Keren
This approach aligns with the zero-trust security framework, which Asian organisations are rapidly adopting to secure expanding digital perimeters in hybrid and remote work environments. A recent Okta survey found that while only 8% of organisations in Asia had implemented zero trust, 82% planned to do so within 12 to 18 months, reflecting a strong regional commitment to this security model.
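Zero trust reduces to one rule: never trust, always verify, on every request. A few lines of Python can make that concrete; the request fields and policy values below are illustrative assumptions, not any vendor's implementation.

```python
# Minimal zero-trust sketch: no request is trusted by network location alone;
# identity, device posture, and least-privilege scope are checked every time.
from dataclasses import dataclass


@dataclass
class Request:
    identity_verified: bool    # e.g. an MFA-backed token validated upstream
    device_compliant: bool     # e.g. a managed, patched endpoint
    scope: str                 # the permission this call exercises
    resource_scopes: set       # scopes the resource actually allows


def authorise(req: Request) -> bool:
    # All checks run on every request, whether it originates inside
    # or outside the corporate network.
    return (req.identity_verified
            and req.device_compliant
            and req.scope in req.resource_scopes)


print(authorise(Request(True, True, "read:reports", {"read:reports"})))   # True
print(authorise(Request(True, False, "read:reports", {"read:reports"})))  # False
```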
Transparency and ethical AI to overcome employee scepticism
Beyond technical defences, building trust in AI requires transparency and ethical deployment. Qualtrics research highlights that 2025 is an inflexion point where transparent AI use becomes central to customer and employee experience success.
"AI resistance is a bigger threat than AI acceptance… Businesses and governments that prioritise understanding how AI works, enabling their teams, rapidly implementing the necessary guardrails, and ensuring compliance are going to create a competitive advantage." Assaf Keren
Organisations must communicate when and how AI is used, demonstrating tangible value to employees and customers to foster acceptance. This is crucial as many employees already use AI tools informally ("shadow AI"), often without organisational controls, which can introduce risks if not properly managed.
Human-AI collaboration and workforce upskilling
To close critical trust gaps, leaders must prioritise human-AI collaboration models that empower employees rather than replace them. Keren envisions AI automating mundane security tasks, freeing human experts to tackle complex challenges:
"We still spend most of our people's time on manual tasks that they should not be doing… We need our people to start thinking and tackling big problems that require human minds to do. And we shift away a lot of the manual work things that AI can do for us." Assaf Keren
This shift requires significant workforce development. Structured upskilling programs tailored to different roles and risk profiles are essential to addressing the acute shortage of AI-skilled cybersecurity professionals in Asia. Collaboration between HR and security specialists can help design effective training and profiling tools to target high-risk functions.
Proactive threat hunting and AI governance
Proactive threat hunting using AI-driven security tools is becoming a key strategy to stay ahead of evolving threats. Agentic AI systems, capable of autonomous threat detection and response, are expected to become integral to cybersecurity operations in 2025, enhancing efficiency and response times.
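In practice, an agentic workflow is a detect-triage-respond loop with humans kept in the path for ambiguous cases. The sketch below illustrates the shape of such a loop; fetch_alerts, classify, and contain are hypothetical stand-ins for a SIEM feed, a detection model, and a response playbook, not real APIs.

```python
# Illustrative detect-triage-respond loop for an "agentic" security workflow.
def fetch_alerts() -> list[dict]:
    # Stand-in for a SIEM or detection-pipeline feed.
    return [{"id": "a-1", "signal": "impossible_travel", "score": 0.92},
            {"id": "a-2", "signal": "odd_login_hour", "score": 0.41}]


def classify(alert: dict) -> str:
    # In practice a trained model; here, a simple score threshold.
    return "malicious" if alert["score"] > 0.8 else "uncertain"


def contain(alert: dict) -> None:
    print(f"isolating host for alert {alert['id']} (automated response)")


def hunt_once() -> None:
    for alert in fetch_alerts():
        if classify(alert) == "malicious":
            contain(alert)  # autonomous response for high-confidence cases
        else:
            print(f"alert {alert['id']} queued for human review")


hunt_once()
```

The efficiency gain comes from automating the high-confidence branch; the trust gain comes from routing everything else to people.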
Effective AI governance frameworks are also critical. According to recent industry analysis, AI governance involves structured policies and processes overseeing the entire AI lifecycle, including data management, ethical checkpoints, and continuous monitoring to ensure stability and compliance.
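One way to make lifecycle governance operational is to encode each stage's checkpoints as machine-checkable gates. The stage names and checks below are assumptions modelled on the lifecycle the analysis describes (data management, ethical checkpoints, continuous monitoring), not a published standard.

```python
# Sketch: an AI governance lifecycle expressed as checkable gates per stage.
LIFECYCLE_GATES = {
    "data_management":   ["data lineage recorded", "PII minimised"],
    "model_development": ["adversarial testing done", "bias evaluation done"],
    "deployment":        ["access controls applied", "rollback plan in place"],
    "monitoring":        ["drift alerts configured", "incident runbook linked"],
}


def gate_report(completed: set[str]) -> dict[str, list[str]]:
    """Return the outstanding checks per stage; empty lists mean compliant."""
    return {stage: [c for c in checks if c not in completed]
            for stage, checks in LIFECYCLE_GATES.items()}


outstanding = gate_report({"data lineage recorded", "PII minimised",
                           "access controls applied"})
for stage, missing in outstanding.items():
    print(stage, "OK" if not missing else f"missing: {missing}")
```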
Organisations that embed AI governance into broader compliance and performance review initiatives will better align AI projects with business objectives while protecting data and brand reputation.
Regional leadership and competitive advantage
Singapore's CSA Guidelines on AI security exemplify regional leadership in setting AI-specific security standards. These living documents provide practical controls for system owners and reflect a proactive, community-driven approach to evolving AI threats.
Asian organisations' rapid adoption of zero-trust frameworks and investments in AI governance and workforce readiness position them to gain resilience and competitive advantage.
Conversely, organisations that fail to address AI-driven risks or employee distrust risk operational disruptions and reputational damage. The stakes are high, as poor customer and employee experiences linked to AI misuse or insecurity can cost businesses trillions globally.
In summary, the convergence of AI adoption and escalating cyber threats demands a holistic, multi-faceted response. Organisations must:
Implement secure-by-design AI systems aligned with evolving regulations and frameworks like Singapore's CSA Guidelines and zero-trust models.
Prioritise transparency and ethical AI deployment to build trust among employees and customers.
Invest in workforce upskilling and human-AI collaboration to leverage AI's productivity benefits while managing risks.
Adopt proactive AI-driven threat hunting and robust AI governance to maintain security and compliance.
Qualtrics' Keren points out that while the risks are real and evolving, AI also offers unprecedented opportunities to enhance security and operational efficiency—provided organisations embrace a balanced, informed approach to AI trust and governance.
Click on the PodChats player to hear Keren's discourse on CSO insights for building trust in AI, in which he addresses the following questions:
How can organisations implement secure-by-design AI systems that comply with evolving regional regulations, such as Singapore's CSA Guidelines?
What governance lessons from past tech integrations (e.g., cloud and IoT) can be applied to mitigate risks associated with AI systems?
What strategies can effectively combat AI-automated attack chains, like multi-modal phishing campaigns, without hindering innovation in defensive AI tools?
How should cybersecurity teams balance the efficiency gains from AI-driven tools against potential risks like shadow AI deployments and data poisoning?
How can CISOs bridge the trust gap between leadership and employees by using AI?
What is your advice for CISOs on how to build trust?