
AI Has Turned Biometric Security Into a Fraud Target, New Data Shows

New data shows nearly 9 in 10 failed identity checks in Southern Africa are linked to AI-driven impersonation.

 The systems designed to verify identity and secure financial transactions are rapidly becoming the weakest link in the fight against fraud, as new data reveals the scale of AI-driven impersonation across Southern Africa.

According to the Smile ID 2026 Digital Identity Fraud Report, nearly 87 percent of rejected biometric verification attempts in the region are now linked to AI-assisted impersonation and spoofing, highlighting a dramatic shift in how fraud is executed and scaled.

For South African businesses and financial institutions, the implication is stark: the very tools relied upon to establish trust – facial recognition, voice verification, and digital identity checks – are now being systematically exploited.

“Biometric verification was designed to confirm that a person is who they say they are. It was never designed to confirm that an interaction is authentic in the age of AI,” says Matthew Renirie, Co-Founder of Certified AI Access. “What we are seeing now is not just an increase in fraud, it’s a fundamental shift: the control layer itself has become the vulnerability.”

Matthew Renirie, Co-Founder of Certified AI Access

The report highlights a broader evolution in fraud patterns across the region.

“Fraud is no longer about breaking into systems; it’s about becoming someone else. We’re seeing AI-generated faces and cloned voices pass biometric checks, synthetic identities reused at scale, and attacks moving beyond onboarding into accounts, transactions, and dispute processes,” he adds. “This is the rise of a synthetic identity economy that is structured, repeatable, and industrialised.”

Renirie points out that fraud has effectively become a business model. “Artificial Intelligence has collapsed the cost of deception,” he adds. “What used to take skill, time, and coordination can now be executed instantly, repeatedly, and at scale.”

While many organisations continue to invest heavily in verification tools and cybersecurity systems, experts warn that the challenge is no longer purely technical.

Professor Clifford Shearing, a leading authority on governance and security, argues that the rise of AI-enabled fraud reflects a deeper structural issue.

“We are seeing the limits of governance models that rely on static controls in a dynamic threat environment,” says Shearing. “Systems designed to verify identity once are inherently vulnerable to manipulation over time.”

“This is not simply about better technology; it requires a shift toward continuous oversight, adaptive systems, and new forms of institutional accountability,” he says. “Traditional fraud systems are built to verify once, apply rules, and respond after the damage is done. AI-driven fraud doesn’t follow those rules. It adapts in real time, behaves like a legitimate user, and moves straight through static controls. That’s the gap, and it’s widening fast.”

Renirie agrees: “Most organisations are still relying on systems designed for a previous generation of threats. They are verifying identity once, and then assuming trust persists. That assumption no longer holds.”

A new category of defence is emerging, he cautions, one that moves beyond one-time verification toward continuous validation of digital interactions.

“Certified AI Access describes this as trust infrastructure: a layer that continuously analyses whether interactions are real, manipulated, or synthetic,” Renirie explains. “The future of fraud prevention is not about stronger gates at entry, it’s about continuously assessing trust across every interaction – voice, video, text, and behaviour – all in real time.”

Global estimates suggest AI-driven fraud could cost businesses up to US$40 billion annually within the next two years. Beyond financial loss, Renirie says the greater risk may be the erosion of trust in digital systems themselves.

“If organisations cannot reliably distinguish between real and synthetic interactions, the entire foundation of digital commerce is at risk,” he adds. “The question is no longer whether fraud will happen but whether institutions are equipped to recognise it when it does.”

For more information, visit certifiedaiaccess.com.

