Artificial intelligence (AI) is a double-edged sword. You’ve probably heard that said often enough by now, and it has perhaps never been as clearly demonstrated as in these past couple of weeks.
For me, it also raises the question: do organisations know which side they’ll end up holding?
So, what happened? Long story short, one of the big names in AI, Anthropic, refused to allow the US Department of Defense to tap its Claude platform for mass surveillance and autonomous weapons systems.
The refusal meant risking a US$200 million deal the AI vendor had inked with the Defense Department last July, and it prompted the US government to designate Anthropic a supply chain risk -- a label previously given only to foreign adversaries. Anthropic has responded with a lawsuit, describing the move as “unprecedented and unlawful” and “harming Anthropic irreparably”.
Meanwhile, OpenAI signed an agreement giving the US Defense Department access to its AI models for classified networks, on “any lawful use” terms that cover activities Anthropic had prohibited on its platform.
The fallout pushed an initial wave of 1.5 million users to cancel their ChatGPT subscriptions, a figure that reportedly has since hit 4 million.
Anthropic, on the other hand, saw a surge in demand for Claude and support from the industry, including researchers and employees from OpenAI and Google.
In his statement on the scuffle with the US Defense Department, Anthropic CEO Dario Amodei said: “We do not believe, and have never believed, that it is the role of Anthropic or any private company to be involved in operational decision making -- that is the role of the military. Our only concerns have been our exceptions on fully autonomous weapons and mass domestic surveillance, which relate to high-level usage areas and not operational decision-making.”
Incidentally, Amodei joined OpenAI in 2016, but left over differences regarding the company’s future direction, and founded Anthropic in 2021 alongside other former senior employees of OpenAI.
In a post on X (formerly called Twitter), OpenAI’s Sam Altman admitted the company had rushed to seal the deal and looked “opportunistic and sloppy”.
While the saga continues to play out, it has highlighted a longstanding discussion about AI safety and ethics and where lines should be drawn.
Some five years ago, for instance, a handful of tech giants said they would restrict the sale of facial recognition software to law enforcement agencies.
IBM stopped selling facial recognition tools altogether over concerns of mass surveillance or racial profiling, while Microsoft said its facial recognition technology would not be made available to US police “until strong regulation, grounded in human rights, has been enacted”.
Discussions about AI ethics go beyond national security, and they raise issues that all organisations using AI -- basically everyone -- will eventually, if not already, have to grapple with.
What it means for organisations
I spoke with some tech lawyers, and the obvious consensus is that companies that develop the technology own it and, by extension, control it and get to decide how it’s used and who can use it.
Companies concerned about the ethics and safety of AI vendors, whether for regulatory compliance or other reasons, should look to build and use their own AI models and LLMs (large language models).
At the very least, they should opt for paid enterprise versions of AI software, which typically ensure their data won’t be used to train the vendor’s models.
Some governments already have plans to move away from using AI applications from specific countries. France, for instance, said it would replace US platforms Microsoft Teams and Zoom with its own locally developed video conferencing tools, which will be used by all its government agencies by 2027. The decision is part of France’s aim to stop using foreign software, particularly from the US, and take back control of its critical digital infrastructure.
AI sovereignty has increasingly come up in business conversations as geopolitical tensions show no sign of easing.
And just as a tech vendor’s origins have become a focus in procurement decisions, so too have its principles, as evidenced by the exodus of OpenAI subscribers and the surge in Claude demand following the Anthropic-US Defense saga.
In our chat, the lawyers noted that we should take comfort in knowing that some AI vendors have chosen to make principled decisions, despite the potential loss of revenue.
There is a public face to the issue, and a company’s stance and values do matter, at least to its users. Cross a line, and public sentiment can shift swiftly.
And this doesn’t apply just to AI vendors, but also to any company that uses AI.
No turning back doesn’t mean pushing ahead blind
Remember that AI is a double-edged sword. Use it well and you reap the rewards. Use it unwisely and you risk getting cut.
And that has been my apprehension about where so many organisations may be heading in their rush to grab the AI baton.
According to a Cognizant study, which polled 600 AI decision makers in Singapore, Germany, Australia, and the US, 63% of companies report moderate to large gaps between their AI goals and current capabilities. Among them, 33% cite regulatory and compliance challenges as the biggest barrier to scaling AI, another 31% face issues demonstrating ROI, and 27% struggle with a lack of talent.
Some 52% already are investing at least US$10 million a year in AI initiatives, and 91% expect their AI budgets to expand over the next two years.
But while funds may not be a problem for these organisations, a misstep with AI can be extremely costly.
AI must be tightly integrated across enterprise systems to be truly effective and for AI agents to better power workflows.
However, this also means it will be complex and challenging to decouple when there are oversights or deployment regrets.
And wrong decisions, whether made with good intentions or otherwise, can lead to serious security consequences or, as the Anthropic and OpenAI episodes have shown, dramatic business losses.
Does it mean organisations should hold back on their AI adoption? Of course not. Besides, I believe the ship has long sailed on that. Just like we can no longer turn off the internet, there’s no turning back with AI. But it also doesn’t mean we should head forward blind.
The question then really is, which side of the double-edged sword do organisations want to end up holding?
