
This Time Is Different
Matthew Moynahan, CEO
26/3/2025
As I reflect on my 27 years in cybersecurity, I am thankful for all this industry has given me — and I hope my contributions have given back in equal measure. Throughout my career, I’ve pursued opportunities that I found interesting, driven by a compelling narrative and surrounded by a talented group of founders and executives working tirelessly to secure enterprises and governments around the world.
While the cybersecurity industry should be very proud of how far we've come, it is equally important, and perhaps more concerning, to recognize how far we have yet to go. In my late 20s, I spoke with CISOs about script kiddies defacing the White House homepage and viruses corrupting networks, threats that were novel at the time.
Today, those same CISOs worry that their authority, name, image, and likeness can be weaponized in attacks powered by generative AI, turning them into unwitting accomplices in crimes against their own organizations.
My, how the world and threat environments have changed.
Another Shift in the Threat Landscape
When Intel purchased McAfee in 2010, the belief was that security would become embedded into silicon. Instead, the exact opposite happened, as attackers started climbing the IT stack, targeting applications and, eventually, end-users themselves.
Is this really that big of a surprise?
SaaS fragmentation, cloud adoption, and remote work created an environment too rich with vulnerabilities for attackers to ignore. Add to that the rise of social media, where everyone's face and voice, the raw material for biometric impersonation, are freely available, and humans have become the ultimate attack surface.
Recent regulatory shifts, such as EO 14179 and EO 14149 rolling back earlier federal guardrails around AI and online disinformation, only compound the issue. And with Gartner predicting an even greater need for disinformation security, this is no longer just another cybersecurity problem; it is an existential one.
What happens to society when we can no longer trust what we read, hear, or see?
The Perfect Storm
We call this the Display Layer Vulnerability: you are tricked into believing that what you are reading, seeing, or hearing on your screen is real. The emergence of this vulnerability shows how the industry as a whole has fallen behind on one of the critical pillars of the Information Security Triad: Data Integrity.
Cybersecurity has long been guided by the triad of Integrity, Confidentiality, and Availability. And while we've made progress in Confidentiality (access controls, encryption, data masking, etc.) and Availability (cloud computing), we have not kept pace on Data Integrity: ensuring that the data we make decisions on, whether it arrives as an image, an audio file, or a video stream that looks like a work colleague, is actually correct and trustworthy.
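To make the Data Integrity pillar concrete, here is a minimal sketch in Python, an illustration of my own rather than a description of GetReal's technology. It checks that a received media file still matches a digest its sender published over a separate, trusted channel. Byte-level integrity like this tells you the file was not altered after it was published, but it cannot tell you whether the content was synthetic to begin with, which is exactly the gap described above.

import hashlib
import hmac
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream the file in 1 MB chunks and return its SHA-256 digest as hex.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def verify_integrity(path: Path, expected_hex: str) -> bool:
    # True only if the file's digest matches the value published out of band.
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(sha256_of(path), expected_hex.lower())

# Usage: the expected digest must arrive over a channel you already trust,
# for example a signed email or an internal registry.
# ok = verify_integrity(Path("board_update.mp4"), "<digest published by the sender>")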
We now face the perfect storm: the convergence of the Display Layer and Data Integrity vulnerabilities with the rise of Gen AI and its ability to create synthetic material that mimics real life. Attackers are no longer just stealing credentials; they're using faces and voices to make those stolen credentials even more lethal, compromising organizations in novel ways. One CISO recently told me he doesn't worry about deepfakes because his organization uses MFA, yet his biggest source of compromise was credential theft. But what if stolen credentials now come with a matching voice or face of their legitimate owner? This is a new class of threat, and it is a very personal one: in this new world of deepfakes, adversaries no longer have to operate in the shadows.
Humans remain the last frontier in security. It’s why notaries still exist – to provide in-person verification and authentication of identity and intent. Evidential proof has always been required, and it hasn’t gone away – the need has simply gone digital.
The Next Frontier: Verifying the “What”
Just as Palo Alto Networks built a next-generation firewall to secure SaaS traffic, the industry must evolve as the market moves toward AI. The threats have moved from packets to pixels, from the network to the content layer.
And as enterprises, governments, and media organizations increasingly rely on images, audio, and video for critical decision-making, they have a vested interest in ensuring the information they act on is accurate. Whether you are pressing a button to wire funds or to fire a missile, important decisions must be made on real information. Our focus must shift from authenticating only the “who” accessing a system to also verifying the “what”: the digital content executives and employees consume when making important decisions must be deemed trustworthy and correct.
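To illustrate that shift, here is a small, hypothetical sketch of the control flow. The names verify_content and approve_wire_transfer are invented for illustration and are not our product's API; the point is that authenticating the requester is necessary but no longer sufficient, and the media that prompted the request has to be verified as well.

from dataclasses import dataclass

@dataclass
class ContentVerdict:
    authentic: bool    # did verification judge the media to be unmanipulated?
    confidence: float  # 0.0 to 1.0
    evidence: str      # human-readable rationale, useful for audit trails

def verify_content(media_path: str) -> ContentVerdict:
    # Hypothetical hook: call whatever content-verification service you use here.
    raise NotImplementedError

def approve_wire_transfer(requester_authenticated: bool, media_path: str,
                          min_confidence: float = 0.9) -> bool:
    # Approve only when both the "who" and the "what" check out.
    if not requester_authenticated:        # the "who": MFA, SSO, device posture
        return False
    verdict = verify_content(media_path)   # the "what": is the instruction real?
    return verdict.authentic and verdict.confidence >= min_confidence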
Our Mission: Restoring Trust in Times of Critical Decision-Making
I wasn’t expecting to go back to a start-up again, but this mission is perhaps the most compelling of my career. What made the difference was the incredible team surrounding this opportunity, from Ted Schlein, who incubated the company with Ballistic Ventures, to Co-Founder Hany Farid, the world’s leading authority in digital forensics, to every GetReal team member dedicated to this cause.
Our mission is to protect organizations from the harm caused by manipulated and malicious digital media.
Our vision is a world of digital integrity.
Our belief is that we can and must reestablish trust in the connections we have in this digital-first world. We want to provide a service to enterprises, governments, and media that unlocks the positive and potentially transformative benefits of AI without the worry of whether what they are hearing or seeing is real. With GetReal, you’ll know.
Today, we announced $17.5M in Series A funding along with a complete portfolio of products and services designed to do just that: prevent deepfake and impersonation attacks. Combining advanced image, audio, and video content verification with a team of world-class experts in digital media forensics, we’re restoring trust to digital communications and critical decision-making by authenticating both the “what” and the “who.”
These are significant milestones as we continue on our journey, and I want to pause and thank our investors Forgepoint Capital, Ballistic Ventures, Evolution Equity, and K2-Access, as well as strategic partners In-Q-Tel, Cisco Investments, and Capital One Ventures, who are supporting us along the way.
We cannot rewind 20 years of digital transformation. But we can rebuild trust in the content flowing through our organizations. Gartner now estimates that 50% of enterprises will have to deploy disinformation security by 2027. It won’t be enough to simply label content as real or fake — boardrooms, courtrooms, and newsrooms will demand evidence-based assessments to understand why something is “fake.” That’s why we are here. It’s time to GetReal.