A large‑scale analysis of domain‑based scam signals by OXIL Research, supported by Google.org, recommends adopting a safeguarding framework—long used in health, education and social care—as a new paradigm for online scam prevention. The study argues that responsibility for protection should shift from individual users to the wider ecosystem of organisations they interact with.
Researchers analysed 28.6 million domain‑name‑based scam and fraud signals drawn from the Global Signal Exchange to map tactics and target selection.
Findings challenge common assumptions about who is targeted and how. The analysis indicates scammers often favour scale over precision, employing high‑volume, generic communications to reach the broadest possible audience.
Working‑age adults, as the most active online group, appear to be the most frequently targeted by these scattergun campaigns; older adults, often assumed to be the primary targets, ranked 11th among groups in the dataset.
The study emphasises situational vulnerabilities—such as stress, distraction or significant life events—over immutable characteristics like age or disability. It also identifies more tailored “sniper” attacks directed at groups with protected characteristics, including neurodivergent people and those with mental‑health conditions; these attacks are lower in volume but highly bespoke, sometimes extending to families and support networks.
OXIL Research frames its proposal around “collective defence”: coordinated action across organisations to identify early warning signs, share relevant signals and intervene before harm occurs.
Under a safeguarding model, businesses, public bodies and law‑enforcement agencies would be expected to act on everyday cues and context that indicate heightened risk, rather than placing the onus solely on potential victims to recognise and reject scams.
“This new study is giving us the data insights we need to call time on an era of consumers feeling alone when they are scammed online — with an implied assumption that they have done something wrong,” said Emily Taylor, chief executive at OXIL Research.
“Consumers are not the weakest link when it comes to online crime — and education and awareness need to be complemented by additional interventions... Here, all people will be seen as potentially vulnerable to attacks, with situational vulnerability to lures being more critical than immutable characteristics,” Taylor added.
Haviva Kohl, senior programme manager at Google.org, described the funder’s role as supporting evidence‑based work that shifts the focus from individual blame to systemic exploitation. “Effective digital safety begins with evidence, not blame,” she said.
The paper calls for increased information sharing, improved detection of early cues and clearer responsibilities for organisations that encounter potential victims. It recommends extending training and intervention capacities across government, law enforcement, private and public sectors so that signals from diverse systems can be acted on in a timely manner to reduce harm.