Organisations will need to adjust to new norms and review their data protection practices, as artificial intelligence (AI) technologies advance and the associated risks widen.
Significant changes in both the technology realm and global operating environment have disrupted workplaces and societies, said Josephine Teo, Minister for Digital Development and Information and Minister-in-charge of Cybersecurity and Smart Nation.
“It is inevitable that we must adjust our practices, laws, and even our broader social norms,” Teo said in her opening address at the Personal Data Protection Summit 2025, held this week in Singapore.
She noted that generative AI (GenAI) models are built on massive amounts of data, which is critical throughout the AI development lifecycle, spanning pre-training, finetuning, testing, and validation.
More recently, this momentum has expanded to sector-specific applications built on customised or proprietary datasets, she said. Singapore’s Changi Airport, for instance, built its chatbot on a large language model (LLM) that taps the airport’s data repositories.

However, data challenges remain: biased training data can cause downstream problems with model outputs, and developers are running out of internet data, Teo said.
“Most of the LLMs are already trained on the entire corpus of internet data. What then should model providers do to improve their models? They are turning to more sensitive and private databases to augment their models, which brings its own set of challenges,” she said.
“Increasingly, we need a way to train models, while protecting sensitive information,” she added.
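Teo did not name specific techniques, but one commonly cited approach to training on sensitive data is differential privacy, in which each example’s gradient is clipped and noise is added before updates are aggregated, so no single record dominates what the model learns. The sketch below is a minimal, illustrative take on that idea for a toy linear model; the function name, parameters, and data are all hypothetical and not drawn from anything discussed at the summit.

```python
import numpy as np

def dp_sgd_step(weights, X_batch, y_batch, lr=0.1, clip_norm=1.0, noise_mult=1.0):
    """One differentially private SGD step for linear regression (illustrative only).

    Each per-example gradient is clipped to `clip_norm` to bound any one record's
    influence, then Gaussian noise scaled by `noise_mult * clip_norm` is added to
    the summed gradients before averaging -- the core idea behind DP-SGD.
    """
    grads = []
    for x, y in zip(X_batch, y_batch):
        pred = x @ weights
        grad = (pred - y) * x                       # gradient of squared error for this example
        norm = np.linalg.norm(grad)
        grad = grad / max(1.0, norm / clip_norm)    # clip to limit this record's contribution
        grads.append(grad)

    noisy_sum = np.sum(grads, axis=0) + np.random.normal(
        0.0, noise_mult * clip_norm, size=weights.shape
    )
    return weights - lr * noisy_sum / len(X_batch)

# Toy usage on synthetic (non-sensitive) data
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=256)
w = np.zeros(5)
for _ in range(200):
    idx = rng.choice(len(X), size=32, replace=False)
    w = dp_sgd_step(w, X[idx], y[idx])
```

The trade-off is the usual one: stronger clipping and more noise mean stronger privacy guarantees but noisier, less accurate models.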
AI has accelerated the velocity of change and attacks, but it also serves as a shield against such attacks, helping to reduce triage fatigue and providing real-time visibility, said Clarence Cai, commander of the Defence Cyber Command and defence cyber chief of the Digital and Intelligence Service in the Singapore Armed Forces.
Whether AI tips in favour of offence or defence will depend on how the technology is applied, said Cai during his keynote at the ST Engineering Cybersecurity Summit held in Singapore.
National security used to mean drawing a clear line between soldiers and civilians. The landscape has since changed, with cybercrime now characterised by stealth, he noted.
The old model of defence, in which soldiers take up positions between the threat and the centre of value, is being rewritten, as the modern frontline now sits wherever the vulnerabilities are.
Those threats take the form of deepfakes and malware created by GenAI to bypass conventional cyberdefences, he said.
See attacks the way hackers do
This shift highlights the need for new approaches to cybersecurity.
Cai cited a post by John Lambert, Microsoft’s Security Fellow and corporate vice president of security research: “Biggest problem with network defence is that defenders think in lists. Attackers think in graphs. As long as this is true, attackers win.”
Cai noted that this mindset shapes how AI should be trained: companies can shift the playing field if they build graph-native defences and use AI to see attacks the way hackers do.
This is what his team is working towards at the Cyber Defence Test and Evaluation Centre (CyTEC), where specialists are developing AI workflows for penetration testing (pen testing).
For instance, the centre uses AI to generate attack paths in pen testing exercises, with AI agents researching the internet to identify vulnerabilities and deciding which can be used in a targeted network.
These LLM agents work together to construct an attack graph and establish an intrusion path.
The CyTEC team then refines this graph before executing the pen test, saving weeks of preparation that would have been required without the AI agents, according to Cai.
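Cai did not detail CyTEC’s tooling, so the sketch below only illustrates the attack-graph idea itself: hosts become nodes, exploitable connections become weighted edges, and an “intrusion path” is simply a cheap route from an entry point to a target. The hosts, vulnerabilities, and difficulty scores are entirely made up.

```python
import networkx as nx

# Toy attack graph: nodes are hosts, edges are exploitable connections.
# Edge "cost" is a made-up difficulty score (lower = easier for an attacker).
g = nx.DiGraph()
g.add_edge("internet", "web-server", vuln="unpatched CMS plugin", cost=1)
g.add_edge("web-server", "app-server", vuln="weak service account password", cost=2)
g.add_edge("web-server", "file-share", vuln="open SMB share", cost=1)
g.add_edge("file-share", "domain-controller", vuln="cached admin credentials", cost=3)
g.add_edge("app-server", "database", vuln="SQL injection", cost=2)

# An intrusion path is the lowest-cost route from the entry point to the target.
path = nx.shortest_path(g, "internet", "database", weight="cost")
print(" -> ".join(path))
for src, dst in zip(path, path[1:]):
    print(f"  {src} -> {dst}: {g[src][dst]['vuln']}")
```

In a setup like the one Cai describes, LLM agents would populate and score such a graph from vulnerability research, with human testers refining it before execution; list-based defences, by contrast, would only see each finding in isolation.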

Teo said: “To ensure the reliability of GenAI apps before release, it is important to have a systematic and consistent way to check that the app is functioning as intended, and there is some baseline safety.”
She added that both model developers and app developers must deal with data inadequacies, since AI models often are linked with internal company databases to ensure applications can support the organisation’s specific needs.
She noted that finetuning and retraining AI models to correct erroneous or biased content, after they have “learnt” something, is often costly and imprecise.
This has driven a new field, called “machine unlearning”, which seeks techniques to identify the variables that contribute most to shortcomings in a model’s output and push out targeted corrections at scale.
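Teo did not point to specific unlearning methods. For reference, the simplest and most expensive form of “exact” unlearning is to drop the offending records and retrain from scratch; the research field largely looks for cheaper ways to approximate that result. The sketch below shows only that naive baseline, using scikit-learn on synthetic data, with every dataset and variable name invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Suppose a subset of records is later found to be erroneous, biased, or subject
# to a deletion request. Naive "exact unlearning" drops them and retrains fully.
to_forget = rng.choice(len(X), size=50, replace=False)
keep = np.setdiff1d(np.arange(len(X)), to_forget)

original_model = LogisticRegression().fit(X, y)
unlearned_model = LogisticRegression().fit(X[keep], y[keep])

# The open problem Teo alludes to: achieving the effect of the retrained model
# without paying the full retraining cost every time a correction is needed.
```

Approximate unlearning techniques aim to get close to `unlearned_model` by adjusting the original model directly, which is what would make targeted corrections feasible at the scale of large language models.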
While there are no easy answers, Teo said there is a need for a range of solutions to ensure AI can continue to advance safely, from improvements in business processes to new techniques in risk mitigation.
Singapore tunes up focus on AI data protection
Singapore this week announced that its Data Protection Trustmark would now be a national standard, making it a benchmark for robust data protection.
Tagged SS 714:2025, the Singapore Standard is aligned with global data protection benchmarks and international best practices, according to the Infocomm Media Development Authority (IMDA).
Singapore Standards are published by Enterprise Singapore and recognised locally as documents that list procedures and specifications for various components, including processes, systems, services, and products.
The Data Protection Trustmark provides clearer data protection requirements around areas such as third-party management and cross-border data transfers, IMDA said.
The industry regulator added that the Singapore Accreditation Council will guide certification bodies on assessing applicants for the Data Protection Trustmark standard.
Organisations that believe they have accountable data protection practices can apply to be certified under the new Singapore Standard. Consumers then can look for businesses that have been issued the certificate to ensure they are transacting with companies that adhere to robust data protection practices, IMDA said.
Singapore this week also expanded its Global AI Assurance Sandbox to include new use cases, such as agentic AI, and risks such as prompt injections.
In addition, the sandbox will be open to sector regulators that want to develop and obtain feedback on their AI governance and testing guidelines.
The AI sandbox taps IMDA’s Starter Kit for Safety Testing of LLMs, which aims to offer a standardised and structured pathway for testers.
The sandbox was launched as a pilot earlier this year to provide a platform for developers and adopters of GenAI applications to have their products assessed by specialist technical testers.
The initiative aims to reduce testing-related barriers to GenAI adoption and generate feedback for future technical testing standards for GenAI applications, according to IMDA.
Testing is critical to demonstrate AI applications have addressed key risks, Teo said.
Pointing to the AI Assurance Sandbox, she noted that the initiative provides a learning environment to help AI developers, businesses, and governance teams develop solutions together, including building better guardrails for GenAI applications.
The various sandboxes will also help build consensus on what “good looks like”, whether in relation to AI governance or data protection, she said.
“Much like traditional fields of product safety or pharmaceuticals, we need subject matter experts to agree on the standards to uphold, and testers to assure us that the standards are being met,” Teo said.