
Purdue University built a real-world deepfakes benchmark to stress-test academic, government, and commercial tools on a social-media content dataset.
In that head-to-head, Incode delivered the lowest false-acceptance rate (FAR) on images (2.56%) and the best video accuracy among commercial tools (77.27%), with a video FAR of 10.53%: a rare balance of catch rate and precision that minimizes false positives and operational friction.
Purdue also finds that paid/commercial detectors generally outperform free-access models, underscoring the importance of continuously maintained systems.
Combine those results with Deepsight’s behavioral and integrity layers, and you get the world’s most accurate deepfake detection. While the Purdue benchmark focuses on generalized deepfakes, in identity verification specifically the performance gap is even wider.
In internal testing across millions of real IDV sessions, Deepsight demonstrated a 68x lower false-acceptance rate than the next-best commercial solution and was 10x better at identifying deepfakes than expert human reviewers.
Traditional deepfake datasets are built under lab conditions (clean, frontal faces, controlled lighting) and don’t reflect real-world scenarios, which include heavy compression, sub-720p resolution, post-processing, and heterogeneous generation pipelines.
Purdue’s Political Deepfakes Incident Database (PDID) explicitly targets real incidents from platforms like X/Twitter, YouTube, TikTok, and Instagram, bringing those artifacts into the evaluation.
Purdue curated 232 images and 173 videos and evaluated detectors end-to-end using a common methodology: accuracy (ACC), area under the ROC curve (AUC), and false-acceptance rate (FAR). That mix includes low-resolution content and short, social-media-style clips, which are notoriously challenging in production settings.
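For readers less familiar with these metrics, the following is a minimal sketch of how ACC, FAR, and AUC are derived from raw detector scores. The scores, labels, and threshold are invented example values, not Purdue's data; the formulas themselves are standard.

```python
# Illustration only: computing ACC, FAR, and AUC for a binary deepfake
# detector. Labels: 1 = fake, 0 = real. Scores/threshold are made up.

def metrics(scores, labels, threshold=0.5):
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and y == 1 for p, y in zip(preds, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(preds, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(preds, labels))

    acc = (tp + tn) / len(labels)   # fraction of all items classified correctly
    far = fp / (fp + tn)            # fraction of REAL items wrongly flagged as fake

    # AUC via the rank (Mann-Whitney) formulation: the probability that a
    # randomly chosen fake scores higher than a randomly chosen real item.
    fakes = [s for s, y in zip(scores, labels) if y == 1]
    reals = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((f > r) + 0.5 * (f == r) for f in fakes for r in reals)
    auc = wins / (len(fakes) * len(reals))

    return acc, far, auc

acc, far, auc = metrics(
    scores=[0.9, 0.8, 0.3, 0.6, 0.2, 0.55],
    labels=[1, 1, 1, 1, 0, 0],
)
```

Note why FAR matters independently of ACC: a detector can post high accuracy while still flagging many genuine users, and FAR isolates exactly that cost.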
The study compares white-box academic and government models, commercial black-box tools, and LVLMs, with the explicit goal of exposing the limits of detectors when moved from lab datasets to political content circulating “in the wild.”
The attack surface for deepfakes is large. Attackers can inject manipulated content via virtual cameras, rooted/jailbroken devices, or emulators during the verification process. Deepsight was built to holistically secure the entire path from device to decision:
Deepsight’s holistic architecture also fingerprints generator “styles” (UMAP clustering), strengthening attribution signals even when a fake looks visually perfect.
As we can see from the color-coded clusters, each synthetic image generation tool has a unique UMAP profile that Deepsight’s AI can recognize.
In other words, even if FaceStudio (the purple cluster at [5,5]) were suddenly to generate a completely convincing deepfake, its distinctive fingerprint would indicate a significant likelihood that the image originated from that tool, and is very probably a deepfake, regardless of how perfect it looks on the surface.
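The attribution idea above can be sketched in a few lines. This is a simplified stand-in, not Deepsight's implementation: the 2D coordinates are invented for illustration (with "FaceStudio" placed near [5,5] as in the figure), the other tool names are hypothetical, and a nearest-centroid rule stands in for whatever classifier operates on the real UMAP embedding.

```python
# Hedged sketch of generator attribution by fingerprint: each synthetic-image
# tool forms a cluster in a low-dimensional embedding space, and a new image
# is attributed to the tool whose cluster centroid is nearest.
import math

# Invented 2D embeddings per tool. "FaceStudio" sits near [5, 5] as in the
# figure; "GenToolA" and "GenToolB" are hypothetical names for illustration.
clusters = {
    "FaceStudio": [(4.9, 5.1), (5.2, 4.8), (5.0, 5.0)],
    "GenToolA":   [(1.0, 1.2), (0.8, 0.9), (1.1, 1.0)],
    "GenToolB":   [(8.0, 2.1), (7.8, 1.9), (8.2, 2.0)],
}

# Mean of each tool's points = that tool's cluster centroid.
centroids = {
    tool: (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))
    for tool, pts in clusters.items()
}

def attribute(point):
    """Return the tool whose centroid is closest to the given embedding."""
    return min(centroids, key=lambda tool: math.dist(point, centroids[tool]))

# A visually perfect fake still lands near its generator's cluster:
print(attribute((5.1, 4.9)))  # FaceStudio
```

The key property this illustrates is that attribution does not depend on the image looking fake, only on where its embedding lands relative to known generator clusters.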
Purdue validates Incode’s precision on real political media. Deepsight wraps that precision in behavioral and integrity layer defenses that block injection before deepfakes enter the verification flow, adding layers of protection by flagging deepfake-relevant risk signals upstream.
This holistic approach to protecting against deepfakes translates into market-leading real-world performance. It aligns with Incode’s own findings: measured across 1.4M real-world sessions in H2 2025, Deepsight caught 24,360 additional fraudulent sessions, a significant reduction in fraud that no other system or human review could have identified.
1) Lower friction from fewer false alarms
FAR governs escalations, false flags, and abandonment. Incode’s industry-low image FAR (2.56%) and measured video FAR (10.53%) translate to fewer manual reviews and smoother user journeys at scale.
2) Coverage where attacks are moving
PDID captures short, low-res, heavily processed, social-media content, the same conditions that increasingly show up in fraud pipelines. Performing well there is a proxy for real-world readiness.
3) End-to-end mitigation, not just detection
Deepsight stops virtual-camera streams, injected content, and tampered devices upstream, while multi-frame liveness and MMI catch deepfake artifacts downstream. One system, one decision.
4) Independent context for decision-makers
Purdue’s study was designed to “test the corners of the box”, exposing where white-box and black-box detectors struggle in the wild, and it documents the need for careful thresholding and calibration in practice. Your teams get transparent third-party evidence for vendor selection.
5) Built for production complexity
Purdue also shows that commercial detectors generally outperform free-access models and that political video detection remains harder than images, exactly the realities Deepsight was engineered to handle.
If you’re evaluating deepfake and injection defenses for KYC, payments, media content, or platform integrity, Deepsight pairs measured precision with full-stack prevention, designed for executive outcomes: lower fraud losses, lower friction, and higher conversion.
If you want to read the complete Purdue University study, please click here.
Incode was named a Leader in the 2025 Gartner® Magic Quadrant™ for Identity Verification. Download the report.