
There's a question that has quietly haunted the age assurance ecosystem for years, and regulators, platforms, and users are all starting to ask it out loud:
What if compliance isn't enough?
According to recent reports, 88% of users fear biometric misuse and 69% fear insider access. These numbers don't come from bad actors or fringe skeptics. They come from ordinary people being asked to verify their age on platforms they use every day. The policies are in place. The agreements are signed. The audits are completed. And still, the trust gap remains.
We believe the answer isn't a stronger policy, but a different architecture.
There is one key distinction that reframes this complex and growing problem. Today, privacy is enforced through policy: it has become standard practice, backed by agreements and audits. But policy cannot rule out an unethical company accessing the data, or a careless one exposing it in a breach.
But, what if the data was never accessible to begin with?
That's the design principle behind what we've been building. Not a tighter policy, nor an additional layer of compliance. We propose a structural impossibility: an architecture in which Incode itself cannot access the raw biometric data, because the system was never built to allow it.
Age estimation using facial analysis is becoming the industry standard: inclusive, document-free, and low-friction. It works for populations without official ID, and it removes the verification friction that causes drop-off.
But mainstream adoption has a ceiling, and that ceiling is trust. Until that trust problem is solved at the architecture level, the industry will keep building more sophisticated systems on a foundation users aren't confident in. Better policies won't move that needle. A fundamentally different design will.

What we're developing operates across two architectural paths, and both share the same core principles:
1. Raw biometric data is never accessible, by design
The first keeps all processing entirely on the user's device. No biometric data is ever transmitted. The face is analyzed locally, and only the result leaves the device, never the data that produced it (see the first sketch after this list).
2. Control and decryption remain exclusively on the user's device
The second goes further, for environments where server-side intelligence is needed: the image is encrypted on-device before anything is sent. What reaches the server is not a face; it's an encrypted vector. Inference, including liveness detection, deepfake resistance, and anti-spoofing, runs directly on the encrypted signal, and decryption outside the user's device is not possible (see the second sketch after this list).
The result is returned and decrypted on-device. No face, no template, no biometric record is retained at any point in the chain.
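To make the first path concrete, here is a minimal Python sketch. Everything in it is assumed for illustration, not taken from Incode's product: `estimate_age()` stands in for a local age-estimation model, `DEVICE_KEY` for a device-held attestation key, and the payload format is hypothetical. The point it demonstrates is structural: the camera frame is consumed locally and discarded, and only a small, face-free, signed result is ever transmitted.

```python
import hashlib
import hmac
import json
import secrets

DEVICE_KEY = secrets.token_bytes(32)  # hypothetical device-held attestation key


def estimate_age(frame: bytes) -> float:
    """Placeholder for a local age-estimation model. In a real system a neural
    network would run here, on-device; the frame never leaves this process."""
    return 23.4  # dummy estimate for illustration


def verify_age_on_device(frame: bytes, threshold: int = 18) -> dict:
    over_threshold = estimate_age(frame) >= threshold
    del frame  # the raw image is discarded; it is never stored or transmitted
    result = {"over_threshold": over_threshold, "threshold": threshold}
    # Sign the outcome so a server can trust it without receiving any biometrics.
    payload = json.dumps(result, sort_keys=True).encode()
    result["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return result  # only this small, face-free payload is transmitted


print(verify_age_on_device(b"\x00" * 1024))
```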
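The second path resembles the general technique of homomorphic encryption, where arithmetic runs directly on ciphertexts. The sketch below uses the open-source TenSEAL library (CKKS scheme) purely to illustrate that technique; it is not Incode's implementation, and the "model" is a toy linear layer over a stand-in embedding. What it shows is the property the text describes: the server computes on encrypted data, and only the device, which holds the secret key, can decrypt the result.

```python
import tenseal as ts

# --- Client (user's device): generate keys and encrypt the embedding ---
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()  # needed for the rotations inside dot()

# Stand-in for a feature vector extracted on-device; never sent in the clear.
embedding = [0.12, -0.48, 0.33, 0.91]
enc_embedding = ts.ckks_vector(context, embedding)  # this ciphertext is what leaves

# --- Server: computes on the ciphertext; it never sees a face or a secret key ---
weights = [0.5, -0.2, 0.1, 0.7]  # toy linear "age score" layer, for illustration
bias = 0.05
enc_score = enc_embedding.dot(weights) + bias  # arithmetic directly on encrypted data

# --- Client: only the device holds the secret key, so only it can decrypt ---
score = enc_score.decrypt()[0]
print(f"age score, decrypted on-device: {score:.3f}")
```

In a production flow, only a public copy of the encryption context (stripped of the secret key) and the serialized ciphertext would cross the wire; the secret key staying on the device is what makes server-side decryption structurally impossible.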
Privacy-preserving approaches are often assumed to come at the cost of security. We rejected that framing entirely.
A system that protects privacy but can be defeated by a deepfake or an injection attack hasn't protected anyone. The architecture we're developing preserves accuracy, scalability, and resilience against evolving fraud techniques, while maintaining the highest level of structural privacy. These are not competing requirements in this design. They are simultaneous ones.
For platforms and buyers, the value proposition is concrete.
The statement platforms can make to their users stops being "we protect your data." It becomes "your data is structurally unreachable, including to us." That's a different conversation. And one we think the industry is ready for.

This is the beginning of a longer conversation, one about what age assurance looks like when privacy, security, and accuracy are all treated as non-negotiable at the same time.
More is coming, and it's built differently.