In the age of deepfakes, proving what is real matters more than detecting what is fake

By Lakshmi Hanspal, Chief Trust Officer at DigiCert.

Deepfake technology is no longer a curiosity. It is a business risk: measurable, growing, and poorly understood by most organisations.

Generative AI has changed the conditions. Synthetic audio, video, and documents can now be produced with enough fidelity to defeat the instincts we have relied on to judge what is real. An executive's voice can be cloned to authorise a payment. A corporate document can be fabricated — accurate in branding, tone, and format — and no one questions it. Manipulated images and video can circulate across social media faster than any platform or security team can respond.

This is not a misinformation problem. It is fraud at scale. And it is personal. Technology is evolving faster than regulation and human understanding. Most of us experience this daily — scrolling through a feed or watching the news, no longer certain what we are seeing. For individuals, that uncertainty erodes confidence. For organisations, it translates directly into operational and financial exposure.

The response cannot be purely defensive. The future of cybersecurity is not only about protecting systems — it is about protecting the confidence people place in them. Authenticity can no longer rest on appearance. It must be engineered, provable, and not assumed.

The cost and limits of detection

Detection technologies remain important and will continue to improve. However, detection is inherently reactive. It assumes that manipulated content will circulate and that organisations will identify it before meaningful harm occurs.

As AI tools become more accessible and content creation accelerates, that assumption becomes harder to sustain. Fraudulent communications can move rapidly across collaboration platforms, partner networks and public channels. By the time a detection tool flags suspicious activity — or a team manually reviews it — the damage may already have been done.

Detection also carries a cost. Manual review consumes time and skilled resources; automated detection systems require ongoing tuning and oversight. Every investigation represents attention diverted from strategic security priorities.

For CISOs, deepfake-enabled fraud is not limited to reputational management. It can disrupt financial workflows, undermine contractual processes and compromise decision-making. Synthetic invoices, manipulated approval requests, and falsified communications can enter business systems and appear credible long enough to trigger financial loss.

Relying solely on detection means accepting that the organisation will always be responding after the fact and paying the price for doing so. A more resilient approach shifts the focus from detecting what is false to proving what is authentic.

Establishing content trust

Trust has to start where content starts — at creation.

The question organisations should be asking is not "does this look real?" It is "can I verify who made it, and whether it has been touched since?" Those are answerable questions — if you have the right infrastructure in place.

Digital signing answers both questions. When content is cryptographically signed and bound to a verified identity, provenance becomes a fact, not an inference. It does not matter whether the content was created by a human or generated by AI; whether the threat is deepfakes or quantum computing, the same trust practices apply. A video statement, a financial disclosure, an AI-produced report: all of it can carry a verifiable chain of custody. Authenticity becomes something you can demonstrate, not something you have to argue.
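To make that concrete, here is a minimal sketch of signing and verification using Ed25519 keys and the open-source pyca/cryptography library (an assumed toolchain, not a description of any specific product; in production the public key would be bound to a verified identity through a CA-issued certificate, and the private key would live in an HSM or managed signing service):

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # The signing key represents the verified identity of the publisher.
    signing_key = ed25519.Ed25519PrivateKey.generate()
    public_key = signing_key.public_key()

    # Sign the content at creation, whether human-made or AI-generated.
    content = b"Q3 results video transcript ..."
    signature = signing_key.sign(content)

    # Anyone holding the public key can confirm origin and integrity.
    try:
        public_key.verify(signature, content)
        print("Verified: signed by this identity and unaltered.")
    except InvalidSignature:
        print("Unproven: do not treat as authentic.")

The same pattern scales from a single file to an automated pipeline: sign at creation, verify at every point of consumption.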

For organisations, the operational value is immediate: verification can be automated, which lets teams move faster without compromising integrity. And the inverse matters just as much. Content that lacks a trusted signature is content that cannot prove its origin; the absence of verifiable identity is itself a signal, and a clear one. That shift changes how organisations respond to suspicious content: not by trying to detect the fake, but by recognising what proof is missing.

Document trust in regulated environments

The implications are particularly significant in regulated sectors across the UK and Europe, where digital documents underpin core operations. Contracts, compliance filings, invoices and customer communications increasingly exist only in digital form. As AI-generated forgeries become more convincing, traditional review processes, often reliant on human judgement, are no longer sufficient safeguards, and they carry an operational cost.

Document trust extends cryptographic assurance into structured workflows. Digitally signed documents supported by robust public key infrastructure provide proof of origin and integrity. If content is altered, the signature is invalidated automatically.
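A short sketch of that tamper evidence, again assuming the pyca/cryptography library and illustrative document contents: changing even one character of a signed document causes verification to fail.

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    key = ed25519.Ed25519PrivateKey.generate()
    document = b"Invoice 1042: pay 9,800 GBP to account 12-34-56."
    signature = key.sign(document)

    # A fraudster alters the amount and the account after signing.
    tampered = b"Invoice 1042: pay 98,000 GBP to account 65-43-21."
    try:
        key.public_key().verify(signature, tampered)
    except InvalidSignature:
        print("Altered since signing: the signature no longer verifies.")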

This capability supports both fraud mitigation and regulatory compliance. Frameworks such as eIDAS 2.0, alongside broader operational resilience requirements, emphasise integrity, traceability and accountability in digital records. Cryptographic document signing provides concrete evidence of those controls. Deepfakes expose a structural weakness in how authenticity has historically been signalled online. Document trust addresses that weakness directly through technological means.

Machine identity and AI integrity

As AI systems generate content at scale, another dimension emerges. Reports, automated responses, code and internal communications may originate from machine systems rather than individuals. Organisations must therefore verify not only that content has integrity, but that it was produced by an authorised system operating within defined parameters.

Machine identity becomes central. AI models, APIs and automated pipelines must possess cryptographically verifiable identities, just as websites and users do. Content produced by approved systems should be signed and traceable, enabling organisations to distinguish between authorised automation and unverified sources. The principle is constant: authenticity must be anchored in identity, whether human or machine.
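A simplified illustration of that idea, with hypothetical names (MACHINE_REGISTRY, sign_output, is_authorised) standing in for a real machine-identity service: each approved system signs its output with its own key, and verifiers accept only content traceable to a registered identity.

    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # Each approved pipeline holds its own signing key; verifiers hold
    # only the registry of machine identities and public keys.
    report_bot_key = ed25519.Ed25519PrivateKey.generate()
    MACHINE_REGISTRY = {"ai-report-generator-01": report_bot_key.public_key()}

    def sign_output(machine_id, key, content):
        # Bind generated content to the identity of the system that made it.
        return {"machine_id": machine_id, "content": content,
                "signature": key.sign(content)}

    def is_authorised(package):
        # Accept content only if a registered machine identity signed it.
        public_key = MACHINE_REGISTRY.get(package["machine_id"])
        if public_key is None:
            return False  # unknown system: the missing identity is the signal
        try:
            public_key.verify(package["signature"], package["content"])
            return True
        except InvalidSignature:
            return False

    package = sign_output("ai-report-generator-01", report_bot_key,
                          b"Automated weekly risk report.")
    print(is_authorised(package))  # True only for approved, signed output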

Modernising trust infrastructure

Delivering content and document trust at scale requires modern trust infrastructure. Public key infrastructure has long underpinned encrypted communications and website security. However, many legacy PKI environments were designed for a human-centric internet, not for an AI-driven ecosystem characterised by high-volume content creation and machine-to-machine interaction.

Modernised PKI must support automated certificate lifecycle management, large-scale digital signing and seamless integration into content creation and document management workflows. Trust controls need to extend beyond network security and into the systems where content is generated, approved and distributed.

Without scalable, automated trust infrastructure, content authenticity cannot be enforced consistently or economically.

Designing for verifiable authenticity

Detection will improve. Deepfake capabilities will improve faster. Neither side of that race leads to a solution.

The durable answer is structural. Organisations that embed cryptographic identity into their content pipelines — that digitally sign what matters and govern machine identities with discipline — are not just reducing fraud risk. They are building the kind of operational resilience that holds when the threat landscape shifts again.

Visual evidence is no longer reliable proof. We must stop designing as if it is. Authenticity that cannot be verified is authenticity that cannot be trusted. Build it in or accept that you are leaving the question open.

 

