Emerging technologies, particularly artificial intelligence (AI), are driving more sophisticated cyber attacks that will be increasingly tough to detect, and businesses that neglect the fundamentals face the biggest risks.
In Singapore, for instance, 13% of phishing emails analysed in 2023 contained AI-generated or AI-assisted content, according to the country’s cybersecurity regulator, the Cyber Security Agency (CSA). Its Singapore Cyber Landscape 2023 report found that these AI-generated email messages were more grammatically sound and showed better sentence structure.
“They also had better flow and reasoning, intended to reduce logic gaps and enhance legitimacy,” CSA said, adding that these AI-generated phishing emails could adapt to various tones. This enabled them to better exploit victims’ emotions, making them more convincing and dangerous, the agency said in the report, which was released last July.
The CSA report highlighted AI as a trend to watch, as improvements in and adoption of the technology continue to scale. Malicious actors were also likely to benefit, leveraging AI to enhance social engineering attacks.
Furthermore, AI models would produce higher-quality output as they continued to be trained on growing volumes of data.
Threat actors also could use generative AI (GenAI) tools to recreate and operationalise research findings, incorporating them into their attacks, CSA noted, pointing to more advanced cyber attacks such as AI-proliferated worms and automated hacking.

“The use of GenAI has brought a new dimension to cyber threats,” said CSA’s chief executive and commissioner of cybersecurity, David Koh. “As AI becomes more accessible and sophisticated, threat actors will also become better at exploiting it.”
Enabling scams to achieve scale and authenticity

As it advances further, AI will enable cybercriminals to scale their attacks, targeting victims en masse through automation, said Thanh Tai Vo, Asia-Pacific director of fraud and identity at LexisNexis Risk Solutions.
AI also can be used to mimic human behaviour in ways that will be increasingly convincing, Thanh said in an interview with FutureCISO.
British engineering company Arup last year fell victim to a deepfake scam when an employee was tricked into transferring HK$200 million to fraudsters. The scammers had used AI to create deepfakes of the company’s senior executives and, via video calls, “instructed” the employee to make the fund transfers.

Arup’s global CIO Rob Greig had noted that the company was subject to frequent attacks, including phishing scams, WhatsApp voice spoofing, and deepfakes. “What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months,” Greig said.
With many people sharing their photos freely online, it is not difficult to create deepfakes, noted Thanh. All it takes are a few photos and a few minutes of audio to generate a basic deepfake of an individual.
Such clones will become increasingly difficult to distinguish from the genuine article as AI continues to advance, further aiding scammers who impersonate a targeted individual to build trust with their victims before eventually executing the scam.
Thanh suggested that companies adopt a multi-layered fraud management strategy to mitigate such risks, focusing on key areas such as digital intelligence, identity authentication, and behavioural analysis. Establishing digital intelligence, for instance, would allow organisations to better assess the risk of certain activities and distinguish genuine users from fraudulent ones.
AI also could be used to enhance capabilities in analysing and detecting anomalies in images and documents, such as identification cards. In addition, behavioural analysis would help determine if a transaction is being made under coercion.
Back to basics for cyber resilience

Ultimately, the fundamentals still matter, said Brendon Laws, COO of cyber incident response vendor Blackpanda.
When things go awry, the problem usually points back to the same issues: organisations still lack an understanding of their infrastructure and fail to maintain it properly, Laws said in a video interview.
“It feels like it all boils down to the same thing,” he said. “It’s back to the basics [and] I haven’t seen that change in 20 years...people [still] aren’t doing what they need to do.”
Two-factor authentication (2FA) solutions, for example, have been on the market for a long time and can effectively blunt a significant chunk of phishing attacks. However, there are still businesses that have yet to deploy 2FA as a basic layer of authentication and access control, Laws said.
They probably also are not applying security patches regularly, he added.
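To illustrate how low the barrier to this particular basic is, below is a minimal sketch of a server-side check of a time-based one-time password (TOTP), the mechanism behind most authenticator-app 2FA, using the open-source pyotp library. This is not Blackpanda’s or any vendor’s implementation; the enrolment flow, user identifier, and secret handling shown here are assumptions for the example.

    # Minimal TOTP second-factor check with pyotp (pip install pyotp).
    # Secret storage and the login flow around it are illustrative assumptions.
    import pyotp

    def enroll_user() -> str:
        # Generate a per-user base32 secret; in practice, store it
        # server-side, encrypted at rest.
        return pyotp.random_base32()

    def verify_second_factor(secret: str, submitted_code: str) -> bool:
        # Compare the user-submitted code against the current time-based value.
        # valid_window=1 tolerates one 30-second step of clock drift.
        return pyotp.TOTP(secret).verify(submitted_code, valid_window=1)

    secret = enroll_user()
    # The provisioning URI is what an authenticator app scans (as a QR code) at enrolment.
    print(pyotp.TOTP(secret).provisioning_uri(name="user@example.com", issuer_name="ExampleCorp"))
    # At login, after the password check succeeds:
    code = pyotp.TOTP(secret).now()  # stand-in for the code the user types in
    print("Second factor accepted:", verify_second_factor(secret, code))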
He underscored the need for organisations, as custodians of people’s data, to strengthen their infrastructure by adopting security best practices, so they can better ride the tide even as the threat landscape evolves.
“There’s always a storm, so it’s about whether your house is built to standard,” he said. It calls for organisations to understand their infrastructure and determine the tools and procedures to put in place, so they can weather the storm in a more viable way and return to a steady state, he added.
Noting that adversaries were not constrained by budget or ingenuity, Laws urged businesses to leverage emerging technologies, including AI, to bolster their cyber resilience.