Fortifying the digital frontier amidst rising AI-driven misinformation

By Allan Tan
January 2, 2025

The World Economic Forum's Global Risks Report 2024 identifies misinformation and disinformation as severe threats in the coming years, highlighting the potential rise of domestic propaganda and censorship.

For their part, Jesse Shapiro and Scott Duke Kominers highlight that while social media platforms face increasing pressure to combat misinformation, emerging technologies like Large Language Models (LLMs) may worsen the situation.

Their research suggests that current moderation strategies, such as flagging posts, often fail and, in the absence of user trust, can even reinforce misconceptions. However, moderation can still be effective when it targets directly harmful content, such as personal information.

Shapiro and Kominers propose that AI could aid platforms in identifying and mitigating such harmful content, improving user safety despite ongoing challenges in distinguishing truth from misinformation.

In the context of corporate cybersecurity, FutureCISO spoke to Abhishek Kumar Singh, head of Security Engineering, Singapore, for Check Point Software Technologies, about how organisations in Asia can fortify their digital strategies against rising AI-driven misinformation in 2025.

The weak links and the strong solutions

In an era where misinformation proliferates rapidly, the responsibility of Chief Information Security Officers (CISOs) has never been more critical. ISACA defines the cybersecurity chain as IT systems, software, networks, and the people interacting with this technology. Most cyber researchers consider humans the weakest link in that chain: employee mistakes contribute to nearly nine in 10 (88%) data breach incidents.

Abhishek Singh

Emphasising that "humans are often the weakest link" in the security chain, Singh believes the foremost strategy for organisations should be user education and awareness. He advocates for comprehensive training programmes that inform employees and create a culture of vigilance. This training should incorporate gamification techniques to enhance engagement and retention of information.

"Making training engaging and memorable through quizzes or interactive games can significantly enhance its impact," he says. This approach empowers employees to identify misinformation and fosters a community-driven defence against it.

Harnessing AI against misinformation

The European Union (EU) Commission advocates combating disinformation through education. In addition to user education, Singh highlights the need for effective technological solutions to tackle misinformation and malicious websites.

He advocates for security measures that leverage domain spoofing detection and content similarity algorithms to block fake sites in real time. Such technologies are essential if organisations are to remain vigilant against deceptive content that undermines their reputation and operational integrity.
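
As a rough illustration of what such detection involves, the sketch below flags lookalike domains with a simple edit-distance heuristic. The protected-domain watchlist and threshold are hypothetical, and commercial spoofing detection relies on far richer signals (WHOIS age, certificate data, page content) than this.

```python
# Minimal sketch of a lookalike-domain check, assuming a plain
# edit-distance heuristic; real domain-spoofing detection combines
# many more signals than name similarity alone.
from difflib import SequenceMatcher

PROTECTED_DOMAINS = ["checkpoint.com", "futureciso.tech"]  # hypothetical watchlist

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two domain names."""
    return SequenceMatcher(None, a, b).ratio()

def looks_spoofed(candidate: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble, but do not match, a protected domain."""
    for legit in PROTECTED_DOMAINS:
        if candidate.lower() != legit and similarity(candidate.lower(), legit) >= threshold:
            return True
    return False

print(looks_spoofed("checkp0int.com"))  # True  -> candidate for blocking
print(looks_spoofed("example.org"))     # False
```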

"By proactively understanding what is already exposed, organisations can leverage offensive strategies for mitigation," he suggests. This shift from a purely defensive posture to an offensive strategy is crucial for identifying vulnerabilities before exploitation.

Singh also points out the transformative role of artificial intelligence (AI) in combating misinformation. He cites Check Point's platforms, which use AI, deep learning, and traditional machine learning techniques to analyse vast datasets and detect anomalies related to misinformation.

He explains, "When analysing patterns, we often detect coordinated bot traffic and unusual communication behaviours." By harnessing the power of AI, organisations can not only identify threats more effectively but also automate responses to mitigate risks.
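
The sketch below shows one generic way such anomaly detection can be expressed, using scikit-learn's IsolationForest over synthetic per-account traffic features. It is not Check Point's pipeline; the features and thresholds are assumptions made purely for illustration.

```python
# Illustrative anomaly detection over per-account traffic features,
# using synthetic data and scikit-learn's IsolationForest; a sketch of
# the general technique, not any vendor's actual pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Features per account: [requests/minute, distinct URLs visited, repost ratio]
normal_accounts = rng.normal(loc=[20, 15, 0.1], scale=[5, 4, 0.05], size=(500, 3))
bot_accounts = rng.normal(loc=[300, 3, 0.9], scale=[30, 1, 0.05], size=(10, 3))
features = np.vstack([normal_accounts, bot_accounts])

model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(features)  # -1 = anomalous, 1 = normal

print(f"Flagged {np.sum(labels == -1)} accounts for analyst review")
```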

For example, integrating AI with IT Service Management (ITSM) tools could allow automatic revocation of access when compromised credentials are detected, streamlining the response to potential breaches.
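
A minimal sketch of that kind of integration might look like the following, assuming hypothetical IAM and ITSM endpoints and payload fields; no specific vendor API is implied.

```python
# Hypothetical sketch of wiring a credential-compromise alert to an ITSM
# workflow that revokes access; the endpoint URLs, payload fields, and
# ticket schema are placeholders, not a real product integration.
import requests

ITSM_API = "https://itsm.example.com/api/tickets"        # placeholder
IAM_API = "https://iam.example.com/api/sessions/revoke"  # placeholder

def handle_compromised_credential(alert: dict) -> None:
    """On a credential-compromise alert, revoke sessions and open a ticket."""
    user = alert["username"]

    # 1. Revoke the user's active sessions immediately.
    requests.post(IAM_API, json={"user": user}, timeout=10)

    # 2. Open an ITSM ticket so the response is tracked and auditable.
    requests.post(ITSM_API, json={
        "summary": f"Compromised credentials detected for {user}",
        "priority": "high",
        "source": alert.get("detector", "ai-anomaly-engine"),
    }, timeout=10)

# Example (would call the placeholder endpoints above):
# handle_compromised_credential({"username": "jdoe", "detector": "ai-anomaly-engine"})
```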

Building a credible defence

Moreover, Singh stresses the importance of understanding the credibility of information sources. He encourages partnerships with organisations like Check Point, which offer threat intelligence, campaign data, and malware analysis. "Integrating threat intelligence into your Security Operations Centre (SOC) enriches log data and provides actionable insights," he notes, highlighting that improved context enables real-time prevention of misinformation-related threats.
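
As a generic illustration of that enrichment step (not a description of Check Point's feeds), the sketch below attaches context from a stand-in, in-memory intelligence table to a SOC log event; in practice the lookup would query a live threat-intelligence API.

```python
# Hedged sketch of enriching SOC log events with threat-intelligence
# context; the feed is a hypothetical in-memory dictionary standing in
# for a commercial intelligence service.
THREAT_INTEL = {  # indicator -> context (illustrative values)
    "185.220.101.4": {"verdict": "malicious", "campaign": "disinfo-botnet-A"},
    "fake-news-portal.example": {"verdict": "suspicious", "campaign": "spoofed-media"},
}

def enrich(event: dict) -> dict:
    """Attach threat-intel context to a log event if an indicator matches."""
    for indicator in (event.get("src_ip"), event.get("domain")):
        if indicator in THREAT_INTEL:
            event["threat_intel"] = THREAT_INTEL[indicator]
            break
    return event

log_event = {"src_ip": "185.220.101.4", "action": "login_attempt"}
print(enrich(log_event))
```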

As misinformation evolves, Singh underscores the necessity of fostering digital literacy and public awareness initiatives. He suggests that gamification can be an effective method for engaging users in these initiatives.

"Creating online content tailored to users, such as short videos or workshops, can significantly raise awareness," he advises. Collaborations with educational institutions and government agencies can further amplify these efforts, creating a robust framework to combat misinformation across communities.

Navigating the legal landscape

Legal and ethical considerations also play a significant role in implementing measures to counter misinformation. Singh urges organisations to ensure compliance with local regulations, such as Singapore's Protection from Online Falsehoods and Manipulation Act (POFMA) and the Personal Data Protection Act (PDPA).

"Transparency is critical to avoid any perception of censorship," he emphasises, reminding CISOs that privacy must remain a top priority in their content moderation efforts.

Analytics for effective strategies

He discusses the importance of using data analytics to assess the effectiveness of misinformation countermeasures and introduces the concept of the cyber kill chain, which outlines the stages of an attack. Organisations can better identify threats by focusing on understanding URLs and DNS requests.

"Leveraging AI algorithms is vital here," he asserts, suggesting that applying machine learning to large datasets can transform them into actionable insights for threat prediction and mitigation.

Learning from global models

Looking at international case studies, Singh notes the establishment of counter-foreign interference task forces in countries like Australia, France, and Germany. He proposes the idea of an ASEAN Misinformation Tracking Centre, which would foster collaborative efforts across nations to combat misinformation. He argues that no single country can tackle this challenge alone, advocating for shared knowledge and resources to build compelling campaigns.

Adapting to new threats

As the sophistication of AI-generated misinformation, including deepfakes, increases, Singh anticipates growing challenges for organisations. He warns that adopting Generative AI tools within businesses carries risks, particularly regarding data leaks.

He warns that if an employee unintentionally enters sensitive company information into an AI app, that data could become publicly accessible. This potential for data breaches necessitates robust monitoring and protective measures.
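
One common mitigation is a lightweight pre-filter that screens prompts before they leave the organisation. The sketch below assumes a few illustrative regular-expression patterns and a placeholder hand-off to the external model; real data-loss-prevention tooling is considerably more sophisticated.

```python
# Illustrative pre-filter that blocks obviously sensitive strings from
# being sent to an external GenAI service; the patterns and the hand-off
# step are placeholders, not a specific product's API.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                    # card-number-like digit runs
    re.compile(r"(?i)\bconfidential\b"),          # documents marked confidential
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # embedded credentials
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt appears to contain sensitive data."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

prompt = "Summarise this CONFIDENTIAL merger memo for me..."
if is_safe_to_send(prompt):
    pass  # forward to the external LLM (placeholder)
else:
    print("Blocked: prompt appears to contain sensitive company data")
```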

A holistic approach to misinformation

Singh offers a framework for CISOs to navigate the complexities of misinformation in 2025. His recommendations emphasise user education, leveraging AI for proactive threat detection and fostering stakeholder collaboration.

He opines that, by adopting these strategies, organisations can better assess risks, mitigate threats, and ultimately fortify their digital frontiers against the escalating challenge of misinformation.

"Organisations must prioritise Gen AI protection and dark web monitoring while continually advancing their AI-driven defences." Abhishek Singh

This holistic approach will be vital for safeguarding operational integrity and the trust of consumers and partners in an increasingly complex digital landscape.

Tags: Artificial Intelligence, Check Point Software Technologies, cybersecurity, misinformation
Allan Tan

Allan is Group Editor-in-Chief for CXOCIETY, writing for FutureIoT, FutureCIO and FutureCFO. He supports content marketing engagements for CXOCIETY clients, as well as moderating senior-level discussions and speaking at events.

Previously, he served as Group Editor-in-Chief for Questex Asia concurrent to the Regional Content and Strategy Director role. He was Director of Technology Practice at Hill+Knowlton in Hong Kong and Director of Client Services at EBA Communications. He also served as Marketing Director for Asia at Hitachi Data Systems and as Country Sales Manager for HDS in the Philippines. Other sales roles include Encore Computer and First International Computer. He was a Senior Industry Analyst at Dataquest (Gartner Group) covering IT Professional Services for Asia-Pacific. He moved to Hong Kong as a Network Specialist and later MIS Manager at Imagineering/Tech Pacific.

He holds a Bachelor of Science in Electronics and Communications Engineering degree and is a certified PICK programmer.
