Growing cybersecurity breaches, alongside the expanding use of artificial intelligence (AI) and data, have prompted more businesses to reassess their IT service contracts to better safeguard themselves.
Agreements with third-party suppliers, in particular, are more rigorously and regularly reviewed, according to Yuankai Lin, a lawyer and partner at RPC Premier Law, who has handled cases involving, amongst others, data breaches and digital infrastructure-related disputes.
RPC Premier Law is a joint venture between Singapore law firm Premier Law and international law firm RPC, which is headquartered in London.
In some contracts, clients include specific requirements, such as stipulating the type of software their third-party suppliers must install, Lin told FutureCISO.
Others would want their third-party vendors to regularly update their software and implement new security measures whenever these are deemed necessary, he said. The Singapore-based lawyer advises clients in various sectors, including technology and healthcare.
His law firm has been receiving queries about including terms and conditions that go beyond what would be regarded as baseline, or "reasonable", measures in the country.
This places the onus on third parties to implement security measures that exceed what is legally required, if they want to establish a business relationship with the company.
These third parties will then have to decide whether to invest more to meet those requirements and ink the service agreements, Lin said.

The closer scrutiny comes amidst growing cybersecurity incidents and data breaches globally, including in Singapore.
At least 70% of companies in the city-state had experienced negative impacts from cybersecurity breaches within their supply chains over the past year, according to a BlueVoyant report. Globally, this figure clocked in at 81%.
Singaporean organisations reported an average of 3.97 breaches last year, down from 4.42 in 2023, which BlueVoyant suggested was due to enhanced cybersecurity measures, including increased board oversight and more frequent monitoring of third-party vendors.
High-profile breaches, such as SolarWinds, have put a spotlight on supply chain vulnerabilities and the risks they pose to enterprises, including legal implications.
This has pushed more companies to review their service contracts, particularly those involving third-party vendors, Lin noted.
Such agreements typically state the steps these suppliers must take should they suspect a data breach has occurred, including a specific period within which they have to notify the client and provide updates.
He added that some companies also include the right to conduct an audit in the event of a breach, allowing them to send in their own team to run the investigation.
These obligations are in place for most of the contracts Lin has handled.
He noted that if the third party is found to have fallen short of any of the agreed measures or failed to comply with local laws, it is then exposed to potential claims from customers. This is on top of any action local regulators may take, he added.
For instance, there are varying regulatory obligations within the Asian region that companies must fulfil to protect personal data, he said.
Pointing to ransomware attacks, he noted that most threat actors would target a company’s IT infrastructure and extract personal data, threatening to publish it on the dark web.
In Singapore, organisations are required to take “reasonable measures”, depending on the type of data, to safeguard against such risks. Regulators continuously reassess what constitutes reasonable measures, given the increasing sophistication of ransomware attacks.
According to Lin, organisations often are found to fall short of what is required by law. Multi-factor authentication, for instance, is regarded as a standard step companies should take, but often is not enforced, he said.
AI likely to further complicate risks
Organisations that fail to comply with regulatory and contractual requirements will face legal implications. And these are likely to increase as they turn to AI, as most businesses in Singapore and across the globe already are.
As it is, companies are already bypassing security and governance in their adoption of AI, suggests an IBM report released last month. According to the study, 63% of organisations that experienced a security breach either did not have an AI governance policy or were still developing one.
In addition, one in five reported a breach as a result of shadow AI, and just 37% had policies in place to manage AI or detect shadow AI. Organisations with high levels of shadow AI incurred an average of $670,000 in higher breach costs, compared to those with low levels of shadow AI or none.
"The data shows that a gap between AI adoption and oversight already exists, and threat actors are starting to exploit it," said Suja Viswesan, IBM’s vice president for security and runtime products. "The report revealed a lack of basic access controls for AI systems, leaving highly sensitive data exposed, and models vulnerable to manipulation. As AI becomes more deeply embedded across business operations, AI security must be treated as foundational. The cost of inaction isn't just financial, it's the loss of trust, transparency, and control."
This oversight gap also can potentially expose companies to further legal risks, especially as AI powers more workflows across an organisation with the emergence of AI agents.
“If not implemented properly, and if AI governs most of your daily functions, the amount of exposure organisations face due to malfunctioning AI can be quite limitless,” Lin said. “Imagine if you can’t draft a bill or contract properly, for reporting purposes or [regulatory] requirements.”
Highlighting the CrowdStrike debacle, he noted that a single poorly executed update, which crashed millions of Windows machines in July 2024, was enough to cause global mayhem.
Organisations could face more serious risks and implications if an AI system responsible for governing their operations goes awry, he said.
For now, most of the AI-related queries Lin sees involve investment in or development of AI, such as whether the technology is fit for purpose when delivered.
These queries are similar to software implementation or development disputes, where the eventual product deviates from the specifications initially agreed upon, he said.
With more companies rushing to adopt AI, Lin anticipates more AI-related disputes in the future.
He advised organisations to set up protocols and policies to address new risks from the technology, such as deepfakes. For instance, additional rounds of approval should be enforced for fund transfers, such as requiring requests to be made in person -- and not via video calls -- before the finance team is permitted to approve the transaction.
There also should be regular training so employees can better identify deepfakes and phishing emails, he said.
He added that companies should have established guidelines that clearly state the steps to be taken in the event of a breach, including an incident response playbook and a team comprising legal counsel and forensic investigators.
This helps minimise the need to stop and think when teams should already be in fire-fighting mode, he said.