For months now, the media has been covering prognostications about artificial intelligence. Is AI a friend or foe, specifically in cybersecurity? Generative AI, for instance, has been posited to pose multiple security risks, including privacy violations, deepfakes and misinformation, adversarial attacks, and opaque algorithms, and the list continues.
The State of Security: The Race to Harness AI report refers to generative AI (GenAI) policy as uncharted territory and calls for organisations to form cross-functional governance boards to oversee the development and adoption of AI with a comprehensive framework for responsible AI. Easier said than done. If for years, CISOs have struggled to achieve full “information security” protection for their charge, the entry of AI is not going to make it easier – or change things overnight.
But security leaders are pushing ahead with experiments to see how AI can assist in managing, if not curtailing, cyber threats.
Nantha Ram R, global head of Cybersecurity (Manufacturing, Retail) & Automation and Orchestration at Dyson, says organisations increasingly use generative AI and machine learning (ML) to bolster their cybersecurity strategies.
He believes these technologies enable a proactive approach to identifying risks and mitigating attacks before they escalate, thus reducing response times to cyber incidents. He cautions, however, that the use of AI also presents risks, such as the potential for misuse in sophisticated phishing and deepfake attacks.
“To address these challenges, organisations are implementing several strategies, including establishing governance frameworks for AI oversight, fostering human-AI collaboration to enhance decision-making, securing AI models against adversarial attacks, and adhering to ethical guidelines in AI deployment.” Nantha Ram R
He adds that organisations need to prioritise continuous learning to adapt AI models to evolving threats, ensuring they remain ahead of potential attackers who may also utilise AI for malicious purposes.
Looming threats compounded by 5G
As 5G technology matures and organisations consider Private 5G connectivity, Ram explains that the rollout of 5G networks brings substantial benefits, such as increased data speeds and enhanced connectivity. Still, he cautions that 5G also introduces significant cybersecurity threats.
He says the proliferation of connected devices, especially in IoT ecosystems, expands the attack surface, making networks more susceptible to cyberattacks like DDoS and malware propagation.
“Additionally, vulnerabilities in the complex supply chain for 5G technology, weak security in IoT devices, risks associated with network slicing, and data privacy concerns further heighten the potential for exploitation,” he elaborates.
To mitigate these risks, organisations should adopt a zero trust architecture that continuously verifies identities and devices, ensuring only authorised access to network segments. Implementing secure network slicing, adopting IoT-specific security frameworks, and managing supply chain risks are essential steps.
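As a rough illustration of the continuous-verification idea behind zero trust (not any specific product, and not Dyson's setup), a gatekeeper for network slices might evaluate every request against device posture, MFA state, and a per-slice role policy. All names, roles, and policies below are invented for the sketch:

```python
from dataclasses import dataclass

# Hypothetical request context; field names are illustrative only.
@dataclass
class AccessRequest:
    user_id: str
    device_id: str
    device_compliant: bool   # e.g. patched OS, endpoint agent running
    mfa_verified: bool
    target_slice: str        # the 5G network slice being accessed

# Per-slice policy: which roles may reach each network slice.
SLICE_POLICY = {
    "iot-sensors": {"roles": {"ot-engineer"}},
    "corporate":   {"roles": {"employee", "ot-engineer"}},
}

USER_ROLES = {"alice": {"ot-engineer"}, "bob": {"employee"}}

def authorize(req: AccessRequest) -> bool:
    """Evaluate every request on every access -- no implicit trust."""
    if not (req.mfa_verified and req.device_compliant):
        return False
    policy = SLICE_POLICY.get(req.target_slice)
    if policy is None:
        return False  # default deny for unknown slices
    return bool(USER_ROLES.get(req.user_id, set()) & policy["roles"])
```

The key design choice is default deny: a request is refused unless identity, device health, and slice policy all check out on every access, rather than once at login.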
Ram is positive that leveraging AI and machine learning for real-time threat detection can help organisations monitor network traffic and respond to anomalies effectively. “By implementing proactive measures, organisations can navigate the evolving threat landscape of 5G and protect their network infrastructures,” he continues.
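Real-world traffic monitoring of the kind Ram describes relies on trained ML models; as a minimal, standard-library-only sketch of the underlying idea, the toy detector below flags samples that deviate sharply from a trailing baseline. The traffic numbers, window size, and 3-sigma threshold are illustrative assumptions, not Dyson's implementation:

```python
import statistics

def flag_anomalies(samples, window, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline) or 1.0  # avoid division by zero
        if abs(samples[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Requests-per-second with one obvious spike (a hypothetical DDoS burst).
traffic = [100, 102, 98, 101, 99, 100, 103, 2000, 101, 100]
print(flag_anomalies(traffic, window=5))  # → [7]
```

A production system would use richer features and learned models, but the response loop is the same: establish a baseline of normal behaviour, then alert on statistically unusual deviations in real time.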
Don’t forget IoT
It is forecast that by 2025 over 40 billion Internet of Things (IoT) devices will be connected, and ready to send and receive data. Utilising smart sensors, Industrial IoT (IIoT) will allow for optimisation in production processes.
Dyson’s Ram posits that as IoT devices proliferate, organisations face increased cybersecurity vulnerabilities due to the often inadequate security controls of these devices. He suggests that to protect their networks, organisations should implement several key strategies, including a zero trust architecture that verifies each device before granting access, network segmentation to isolate IoT devices from critical systems, and end-to-end encryption to safeguard data.
He adds that secure device configuration, regular monitoring for anomalies, vendor risk management, and ongoing employee training are essential components of a robust IoT security strategy.
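The segmentation strategy described above can be sketched in a few lines. The device classes, VLAN names, and quarantine rule below are hypothetical, meant only to show the default-deny pattern of isolating IoT devices from critical systems:

```python
# Illustrative segmentation map; VLAN IDs and device classes are made up.
SEGMENT_FOR_CLASS = {
    "camera":       ("vlan-iot-video", 110),
    "smart-sensor": ("vlan-iot-telemetry", 120),
    "workstation":  ("vlan-corp", 10),
}
QUARANTINE = ("vlan-quarantine", 999)

def assign_segment(device_class: str, firmware_current: bool):
    """Place a device on its class's isolated segment; unknown or
    out-of-date devices land in quarantine until reviewed."""
    if not firmware_current:
        return QUARANTINE
    return SEGMENT_FOR_CLASS.get(device_class, QUARANTINE)
```

The point is that segmentation and device verification reinforce each other: even a verified device only ever reaches the narrow slice of the network its class requires.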
“The responsibility for overseeing IoT security typically lies with the Chief Information Security Officer (CISO), but effective security requires collaboration among IT, vendor management, and compliance teams,” says Ram. This holistic approach ensures that security measures are integrated across the organisation, helping to mitigate risks associated with IoT devices.
“By fostering collaboration and adopting comprehensive strategies, organisations can significantly enhance their IoT security posture.” Nantha Ram R
Ethical implications and cybersecurity risks
Since the beginning of 2024, serious dialogue has been taking place among enterprise leaders about the ethical implications of unmanaged GenAI use. Acknowledging the benefits of the technology, as well as the concerns its use brings around transparency, bias, and data privacy, Ram suggests these leaders must navigate the associated ethical implications and cybersecurity risks.
“To address these issues, organisations should prioritise transparency in AI decision-making, actively audit algorithms for bias, and establish strict data governance frameworks to ensure ethical data use. Moreover, clear governance structures are essential to maintain accountability and human oversight, particularly in sensitive sectors such as healthcare and finance,” he adds.
Beyond ethical considerations, he also says organisations must tackle cybersecurity risks inherent in AI systems. “This includes protecting AI models from adversarial attacks, securing the AI supply chain, and ensuring data integrity through robust security measures. Continuous monitoring and incident response plans tailored to AI-related breaches are critical for safeguarding these systems,” he elaborates.
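One simple, widely used building block for the data-integrity measures mentioned above is digest verification of model artifacts. This sketch (the file paths and workflow are assumed for illustration, not drawn from the article) refuses to trust a model whose bytes no longer match a digest recorded at training time:

```python
import hashlib

def file_digest(path: str) -> str:
    """SHA-256 digest of a model artifact, computed in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model(path: str, expected_digest: str) -> bool:
    """True only if the artifact's bytes still match the known-good digest,
    guarding against tampering between training and deployment."""
    return file_digest(path) == expected_digest
```

Digest checks do not stop adversarial inputs at inference time, but they do detect a supply-chain swap or corruption of the model file itself, which is one of the risks Ram highlights.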
By forming an AI Ethics Committee and implementing ongoing education programs, organisations can effectively balance innovation with responsible practices. “A proactive approach to ethics and security in AI deployment is essential for harnessing its full potential while minimising risks,” says Ram.
Regulatory frameworks will come
Ram says regulatory frameworks across Asia are anticipated to evolve significantly to address these challenges, particularly concerning data privacy, AI governance, and sector-specific cybersecurity standards.
He posits that countries in Asia are likely to adopt stricter data protection laws like Europe's GDPR, with a focus on ethical AI use and accountability. He also suggests that sector-specific standards will be introduced to address unique risks in critical areas like healthcare and finance, while cross-border cooperation will enhance collaborative responses to cyber threats.
Ram warns that organisations must remain agile and proactive as these regulatory changes unfold. He foresees compliance becoming more intricate, but it presents opportunities for businesses to enhance their cybersecurity posture and gain a competitive advantage.
By investing in cybersecurity early and engaging with regulators, organisations can shape emerging frameworks and position themselves as leaders in the industry. Ram says: “Staying ahead of regulatory changes not only mitigates risks but can also transform compliance into a competitive advantage.”
AI and cybersecurity in 2025
Looking ahead, Ram says the rapid adoption of emerging technologies like generative AI, machine learning (ML), 5G, IoT, and quantum computing presents both significant opportunities and cybersecurity challenges.
He proposes that to effectively mitigate these risks, organisations must implement innovative tools and strategies tailored to the specific threats posed by these technologies. Key innovations include AI-powered threat detection systems for real-time monitoring and anomaly detection, zero trust architecture to ensure continuous verification of users and devices, and advanced security solutions for 5G networks, which will be critical for maintaining network integrity and protecting connected IoT devices.
He goes on to suggest that other essential tools will include post-quantum cryptography to safeguard sensitive data against future quantum threats, comprehensive IoT security platforms for centralised device management, and privacy-preserving AI technologies to ensure compliance with data regulations.
Additionally, organisations will need AI governance and audit tools to ensure ethical AI use, security automation platforms for faster incident response, and advanced penetration testing tools to identify vulnerabilities.
“By investing in these solutions proactively, organisations can enhance their cybersecurity posture and effectively manage the complexities of emerging technologies. Preparing now with the right tools will position organisations to navigate the evolving cybersecurity landscape of the future,” concludes Ram.