The World Economic Forum's Global Risks Report 2024 identifies misinformation and disinformation as severe threats in the coming years, highlighting the potential rise of domestic propaganda and censorship.
For their part, Jesse Shapiro and Scott Duke Kominers highlight that while social media platforms face increasing pressure to combat misinformation, emerging technologies like Large Language Models (LLMs) may worsen the situation.
Their research suggests that current moderation strategies, such as flagging posts, often fail, and without a foundation of trust can even reinforce user misconceptions. However, moderation can still be effective when it targets directly harmful content, such as personal information.
Shapiro and Kominers propose that AI could aid platforms in identifying and mitigating such harmful content, improving user safety despite ongoing challenges in distinguishing truth from misinformation.
In the context of corporate cybersecurity, FutureCISO spoke to Abhishek Kumar Singh, head of Security Engineering, Singapore, for Check Point Software Technologies, about how organisations in Asia can fortify their digital strategies against rising AI-driven misinformation in 2025.
The weak links and the strong solutions
In an era where misinformation proliferates rapidly, the responsibility of Chief Information Security Officers (CISOs) has never been more critical. ISACA defines the cybersecurity chain as IT systems, software, networks, and the people interacting with this technology. Most cyber researchers consider humans the weakest link in that chain: employee mistakes are a factor in nearly nine out of ten (88%) data breach incidents.
Because "humans are often the weakest link" in the security chain, Singh believes the foremost strategy for organisations should be user education and awareness. He advocates comprehensive training programmes that inform employees and create a culture of vigilance, incorporating gamification techniques to enhance engagement and retention.
"Making training engaging and memorable through quizzes or interactive games can significantly enhance its impact," he says. This approach empowers employees to identify misinformation and fosters a community-driven defence against it.
Harnessing AI against misinformation
The European Commission advocates combating disinformation through education. In addition to user education, Singh highlights the need for effective technological solutions to tackle misinformation and malicious websites.
He advocates for security measures that leverage domain spoofing detection and content similarity algorithms to block fake sites in real time. Such technologies are essential for organisations to remain vigilant against deceptive content that could undermine their reputation and operational integrity.
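As a rough illustration of the first of those techniques, the Python sketch below flags lookalike domains by string similarity against a curated list of protected brand names. The list, threshold, and sample domains are hypothetical; real detection platforms combine many more signals.

```python
# Minimal sketch of lookalike-domain screening, assuming a curated list of
# protected brand domains. Illustrative only: production systems add
# homoglyph tables, WHOIS data, page-content similarity, and more.
from difflib import SequenceMatcher

PROTECTED_DOMAINS = ["example-bank.com", "example-corp.com"]  # hypothetical list

def looks_like_spoof(candidate: str, threshold: float = 0.8) -> bool:
    """Flag a domain whose name is suspiciously close to a protected brand."""
    for legit in PROTECTED_DOMAINS:
        ratio = SequenceMatcher(None, candidate.lower(), legit).ratio()
        if threshold <= ratio < 1.0:  # similar, but not an exact match
            return True
    return False

print(looks_like_spoof("examp1e-bank.com"))  # True: digit "1" swapped for "l"
```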
"By proactively understanding what is already exposed, organisations can leverage offensive strategies for mitigation," he suggests. This shift from a purely defensive posture to an offensive strategy is crucial for identifying vulnerabilities before exploitation.
Singh also points out the transformative role of artificial intelligence (AI) in combating misinformation. He cites Check Point's platforms, which use AI, deep learning, and traditional machine learning techniques to analyse vast datasets and detect anomalies related to misinformation.
He explains: "When analysing patterns, we often detect coordinated bot traffic and unusual communication behaviours." By harnessing the power of AI, organisations can not only identify threats more effectively but also automate responses to mitigate risks.
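To make that pattern concrete, here is a hedged sketch of one such coordination signal: the same message pushed by many distinct accounts within a short window. The (account, message, timestamp) record shape and the thresholds are assumptions for illustration; the deep learning systems Singh refers to work over far richer features.

```python
# Hedged sketch of one bot-coordination heuristic: identical messages
# amplified by many distinct accounts inside a short time window.
from collections import defaultdict

def coordinated_messages(posts, min_accounts=5, window_secs=60):
    """Return messages posted by >= min_accounts accounts within one window."""
    by_message = defaultdict(list)            # message -> [(timestamp, account)]
    for account, message, ts in posts:
        by_message[message].append((ts, account))
    flagged = {}
    for message, hits in by_message.items():
        hits.sort()
        for start, _ in hits:                 # slide the window over each post
            accounts = {a for t, a in hits if start <= t <= start + window_secs}
            if len(accounts) >= min_accounts:
                flagged[message] = sorted(accounts)
                break
    return flagged
```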
For example, integrating AI with IT Service Management (ITSM) tools could allow automatic revocation of access when compromised credentials are detected, streamlining the response to potential breaches.
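The endpoints and payload shapes in the sketch below are hypothetical placeholders, not a real Check Point or ITSM API; they only illustrate the automation pattern described, in which a detection event triggers revocation and a follow-up ticket.

```python
# Hedged sketch of wiring detection output into an ITSM workflow.
# Both URLs and both payload schemas are invented for illustration.
import json
import urllib.request

IAM_REVOKE_URL = "https://iam.example.com/api/sessions/revoke"  # assumed endpoint
ITSM_TICKET_URL = "https://itsm.example.com/api/tickets"        # assumed endpoint

def handle_compromised_credential(username: str, evidence: str) -> None:
    """Revoke the user's sessions, then open a ticket for human follow-up."""
    for url, payload in (
        (IAM_REVOKE_URL, {"user": username}),
        (ITSM_TICKET_URL, {"summary": f"Credential compromise: {username}",
                           "details": evidence, "priority": "high"}),
    ):
        req = urllib.request.Request(
            url, data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"}, method="POST")
        urllib.request.urlopen(req)  # error handling omitted in this sketch
```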
Building a credible defence
Moreover, Singh stresses the importance of understanding the credibility of information sources. He encourages partnerships with organisations like Check Point, which offer threat intelligence, campaign data, and malware analysis. "Integrating threat intelligence into your Security Operations Centre (SOC) enriches log data and provides actionable insights," he notes, highlighting that improved context enables real-time prevention of misinformation-related threats.
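In practice, such enrichment can be as simple as joining incoming log events against an indicator feed. The sketch below assumes the feed has already been loaded into a dictionary; the indicators, field names, and campaign labels are invented for illustration.

```python
# Minimal sketch of SOC log enrichment with threat intelligence.
# The feed contents below are hypothetical, not real indicators.
IOC_FEED = {
    "198.51.100.7": {"campaign": "disinfo-botnet-A", "confidence": "high"},
    "fake-news-portal.example": {"campaign": "spoofed-media", "confidence": "medium"},
}

def enrich(log_event: dict) -> dict:
    """Attach threat-intel context to a log event when an indicator matches."""
    for field in ("src_ip", "domain"):
        context = IOC_FEED.get(log_event.get(field, ""))
        if context:
            return {**log_event, "threat_intel": context}
    return log_event

print(enrich({"src_ip": "198.51.100.7", "action": "login"}))
```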
As misinformation evolves, Singh underscores the necessity of fostering digital literacy and public awareness initiatives. He suggests that gamification can be an effective method for engaging users in these initiatives.
"Creating online content tailored to users, such as short videos or workshops, can significantly raise awareness," he advises. Collaborations with educational institutions and government agencies can further amplify these efforts, creating a robust framework to combat misinformation across communities.
Navigating the legal landscape
Legal and ethical considerations also play a significant role in implementing measures to counter misinformation. Singh urges organisations to ensure compliance with local regulations, such as Singapore's Protection from Online Falsehoods and Manipulation Act (POFMA) and the Personal Data Protection Act (PDPA).
"Transparency is critical to avoid any perception of censorship," he emphasises, reminding CISOs that privacy must remain a top priority in their content moderation efforts.
Analytics for effective strategies
Singh also discusses the importance of using data analytics to assess the effectiveness of misinformation countermeasures, and he introduces the cyber kill chain, a model that outlines the stages of an attack. By focusing on understanding URLs and DNS requests, organisations can better identify threats.
"Leveraging AI algorithms is vital here," he asserts, suggesting that applying machine learning to large datasets can transform them into actionable insights for threat prediction and mitigation.
Learning from global models
Looking at international case studies, Singh notes the establishment of counter-foreign interference task forces in countries like Australia, France, and Germany. He proposes the idea of an ASEAN Misinformation Tracking Centre, which would foster collaborative efforts across nations to combat misinformation. He argues that no single country can tackle this challenge alone, advocating for shared knowledge and resources to build compelling campaigns.
Adapting to new threats
As the sophistication of AI-generated misinformation, including deepfakes, increases, Singh anticipates growing challenges for organisations. He warns that adopting Generative AI tools within businesses carries risks, particularly regarding data leaks.
If an employee unintentionally enters sensitive company information into an AI app, he cautions, that data could become publicly accessible. This potential for leakage necessitates robust monitoring and protective measures.
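One protective measure along these lines is a pre-submission guardrail that scrubs prompts before they leave the organisation. The regex patterns below are simplistic, illustrative assumptions; enterprise data loss prevention tooling is far more thorough.

```python
# Hedged sketch of a Generative AI guardrail: redact likely-sensitive
# substrings before a prompt is sent to an external service.
import re

SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),  # illustrative shape
}

def scrub_prompt(prompt: str) -> str:
    """Replace likely-sensitive substrings before the prompt leaves the company."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(scrub_prompt("Summarise: contact jane@corp.example, key sk-abc123def456ghi789"))
```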
A holistic approach to misinformation
Singh offers a framework for CISOs to navigate the complexities of misinformation in 2025. His recommendations emphasise user education, AI-driven proactive threat detection, and collaboration with stakeholders.
He opines that by adopting these strategies, organisations can better assess risks, mitigate threats, and ultimately fortify their digital frontiers against the escalating challenge of misinformation.
"Organisations must prioritise Gen AI protection and dark web monitoring while continually advancing their AI-driven defences." Abhishek Singh
This holistic approach will be vital for safeguarding operational integrity and the trust of consumers and partners in an increasingly complex digital landscape.