Startup-led innovations are improving the detection of manipulated content and strengthening security posture, according to GlobalData.
Vaibhav Gundre, project manager of Disruptive Tech at GlobalData, noted the increasing sophistication of AI-generated deepfakes and the risks they pose.
“Cutting-edge detection methods powered by machine learning are helping to identify and flag manipulated content with growing accuracy. From analysing biological signals to leveraging powerful algorithms, these tools are fortifying defenses against the misuse of deepfakes for misinformation, fraud, or exploitation,” said Gundre.
Startup-led innovations
Startups spearheading innovations in deepfake detection, as listed in the Innovation Explorer database of GlobalData's Disruptor Intelligence Center, include Sensity AI, which uses a proprietary API to detect deepfake media such as images, videos, and synthetic identities, and Attestiv, which detects deepfakes through advanced ML and heatmapping technology.
The list also includes DeepMedia.AI's deepfake detection tool, DeepID, which assesses image integrity by checking for modifications, image artifacts, and other signs of manipulation. It also evaluates audio authenticity using pitch, tone, and spectral patterns, as well as video authenticity through frame-by-frame analysis of visual characteristics.
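To give a flavour of what "spectral pattern" analysis means in practice, the minimal Python sketch below computes two basic spectral statistics (centroid and bandwidth) of an audio signal. This is purely illustrative: it is not DeepID's or any vendor's actual method, and a real detector would feed many such features, across many frames, into a trained classifier.

```python
import numpy as np

def spectral_features(audio, sample_rate):
    """Compute simple spectral statistics of the kind a detector
    might feed to a classifier (illustrative only, not a product's API)."""
    spectrum = np.abs(np.fft.rfft(audio))           # magnitude spectrum
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / np.sum(spectrum)   # "centre of mass" of the spectrum
    bandwidth = np.sqrt(
        np.sum(((freqs - centroid) ** 2) * spectrum) / np.sum(spectrum)
    )                                                # spread around the centroid
    return centroid, bandwidth

# Toy check: one second of a pure 440 Hz tone should have its
# spectral centroid very close to 440 Hz.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
centroid, bandwidth = spectral_features(tone, sr)
```

Synthetic speech often shows subtle statistical differences in such features compared with genuine recordings, which is why spectral analysis is one common ingredient in audio deepfake detection.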
“These advancements in deepfake detection are transforming cybersecurity toward ensuring digital content authenticity. However, as this technology evolves, we must critically examine the ethical considerations around privacy, consent, and the unintended consequences of its widespread adoption. Striking the right balance between protection and ethical use will be paramount in shaping a future where synthetic media can be safely leveraged for legitimate applications,” concluded Gundre.