Commentary: Which side of the double-edged AI sword are organisations on?

by Eileen Yu
March 12, 2026
Photo by Jan van der Wolf: https://www.pexels.com/photo/a-red-and-blue-no-entry-sign-against-a-white-wall-27815871/

Artificial intelligence (AI) is a double-edged sword. You’ve probably heard that said often enough by now, and it has perhaps never been as clearly demonstrated as over the past couple of weeks.

For me, it also raises the question: do organisations know which side they’ll end up holding?

So, what happened? Long story short, one of the big names in AI, Anthropic, refused to allow the US Department of Defense to tap its Claude platform for mass surveillance and autonomous weapons systems.

The refusal meant risking a US$200 million deal the AI vendor inked with the Defense Department last July, and it prompted the US government to designate Anthropic a supply chain risk -- a label previously given only to foreign adversaries. Anthropic has responded with a lawsuit, describing the move as “unprecedented and unlawful” and one that is “harming Anthropic irreparably”.

Meanwhile, OpenAI signed an agreement giving the US Defense Department access to its AI models for classified networks, under “any lawful use” terms that permit applications Anthropic had prohibited on its platform.

The turn of events pushed an initial wave of 1.5 million users to cancel their ChatGPT subscriptions, a figure that reportedly has since hit 4 million.

Anthropic, on the other hand, saw a surge in demand for Claude and support from the industry, including researchers and employees from OpenAI and Google.

In his statement on the scuffle with the US Defense Department, Anthropic CEO Dario Amodei said: “We do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision making -- that is the role of the military. Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas and not operational decision-making.”

Incidentally, Amodei joined OpenAI in 2016 but left over differences regarding the company’s future direction, founding Anthropic in 2021 alongside other former senior employees of OpenAI.

In a post on X (formerly called Twitter), OpenAI’s Sam Altman admitted the company had rushed to seal the deal and looked “opportunistic and sloppy”.

While the saga continues to play out, it has put the spotlight back on a longstanding debate about AI safety and ethics, and where the lines should be drawn.

Some five years ago, for instance, a handful of tech giants said they would restrict the sale of facial recognition software to law enforcement agencies.

IBM stopped selling facial recognition tools altogether over concerns of mass surveillance or racial profiling, while Microsoft said its facial recognition technology would not be made available to US police “until strong regulation, grounded in human rights, has been enacted”.

Discussions about AI ethics go beyond national security. This is an issue that all organisations using AI -- basically everyone -- will eventually, if not already, have to grapple with.

What it means for organisations

I spoke with some tech lawyers, and the obvious consensus is that companies that develop the technology own it and, by extension, control it and get to decide how it’s used and who can use it.

Companies concerned about the ethics and safety of AI vendors, whether for regulatory compliance or other reasons, should look to build and use their own AI models and LLMs (large language models).

At the very least, they should opt for paid enterprise versions of AI software, which typically ensure their data won’t be used to train the vendor’s models.

Some governments already have plans to move away from AI applications from specific countries. France said it would replace US platforms Microsoft Teams and Zoom with its own locally developed video conferencing tools, which will be used by all its government agencies by 2027. The decision is part of France’s aim to stop using foreign software, particularly that from the US, and take back control of its critical digital infrastructure.

AI sovereignty increasingly has come up in business conversations as geopolitical tensions show no sign of easing.

And just as a tech vendor’s origins have become a focus in procurement decisions, so too have its principles, as evidenced by the exodus of OpenAI subscribers and the surge in Claude demand following the Anthropic-US Defense saga.

In our chat, the lawyers noted that we should take comfort in knowing that some AI vendors have chosen to make principled decisions, despite the potential loss in revenue.

There is a public face to the issue, and a company’s stance and values do matter, at least to its users. Cross a line, and public sentiment can shift swiftly.

And this doesn’t apply just to AI vendors, but also to any company that uses AI.

No turning back doesn’t mean pushing ahead blindly

Remember that AI is a double-edged sword. Use it well and you reap the rewards. Use it unwisely and you risk getting cut.

And that has been my apprehension about where so many organisations may be heading in their rush to grab the AI baton.

As it is, 63% of companies have highlighted moderate to large gaps between their AI goals and current capabilities, with 33% citing regulatory and compliance challenges as the biggest barrier to scaling AI. Another 31% face issues demonstrating ROI, while 27% struggle with a lack of talent, according to a Cognizant study that polled 600 AI decision makers in Singapore, Germany, Australia, and the US.

Some 52% already are investing at least US$10 million a year in AI initiatives, and 91% expect their AI budgets to expand over the next two years.

But while funds may not be a problem for these organisations, a misstep with AI can be extremely costly.

AI must be tightly integrated across enterprise systems to be truly effective and to let AI agents better power workflows.

However, this also means it will be challenging and complex to decouple when there are oversights or deployment regrets.

And wrong decisions, whether made with good intentions or otherwise, can lead to serious security consequences or, as the Anthropic-OpenAI positions have revealed, dramatic losses.

Does it mean organisations should hold back on their AI adoption? Of course not. Besides, I believe that ship has long sailed. Just as we can no longer turn off the internet, there’s no turning back with AI. But it also doesn’t mean we should head forward blindly.

The question then really is, which side of the double-edged sword do organisations want to end up holding?

Tags: AI ethics, Artificial Intelligence, cybersecurity, generative AI

Eileen Yu

Eileen is currently an independent tech journalist and content specialist, providing analysis of key market developments across the Asian region and helping enterprises craft their communications plans. She also moderates panel discussions and roundtables, and provides media training to help senior executives better manage press interviews. Eileen has worked with corporate clients in markets such as cybersecurity and enterprise software, as well as non-tech sectors including financial services and logistics. She has planned high-level panel and roundtable discussions and has been an invited speaker on online media. On CXOCIETY, she contributes articles across the four CXOCIETY brands -- FutureCIO, FutureCISO, FutureIoT, and FutureCFO -- covering key industry developments impacting the Asia-Pacific region, including cybersecurity, AI, data management, governance, workforce modernisation, and supply chain. Eileen has more than 25 years of industry experience at established media platforms, including ZDNET in Singapore, where she led the tech site’s Asian editorial team and blogger network. Before her stint at ZDNET, she was assistant editor at Computer Times for Singapore Press Holdings and deputy editor of Computerworld Singapore. With her extensive industry experience, Eileen has navigated discussions on key trending topics including cybersecurity, artificial intelligence, quantum computing, edge/cloud computing, and regulatory policies. Eileen trained under the Journalism department at The University of Queensland, Australia, where she earned a Bachelor of Arts (Honours) degree in Journalism, with a thesis titled To Censor or Not: The Great Singapore Dilemma.
