Zoom needs to prove you're human
We've officially reached the part of the timeline where your video call app needs biometric proof that you're not a robot. Zoom just announced a partnership with World (the company formerly known as Worldcoin, co-founded by Sam Altman) to let meeting hosts verify that every face on the call belongs to an actual human being. The feature is called Deep Face, and it's exactly what it sounds like: real-time identity verification layered into Zoom meetings. It exists because AI-generated video avatars and voice clones have gotten good enough to fool people in live conversations. Not hypothetically good. Actually good. Good enough that a Hong Kong finance worker at engineering firm Arup was tricked into wiring $25 million to fraudsters after a video call where every other "participant," including the CFO, was a deepfake. That was in early 2024. The technology has only improved since.
How it actually works
World's approach to verification is built on what they call "proof of personhood." The core idea is straightforward: you visit a physical device called an Orb, which scans your iris and creates a cryptographic identity, your World ID. That ID lives on your phone. No biometric data is stored on World's servers or shared with third parties. When you join a Zoom meeting with Deep Face enabled, the system does a three-way check. It cross-references the signed image from your original Orb registration, a real-time face scan from your device's camera, and the live video frame that other participants can see. Only when all three match do you get a "Verified Human" badge on your meeting tile. Hosts can require this verification before anyone enters the meeting (a "Deep Face Waiting Room"), or any participant can request that someone verify themselves mid-call. The privacy model is designed for enterprise adoption: Zoom receives only a high-assurance signal that the expected person is present. No personal data changes hands.
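Neither Zoom nor World has published the actual matching logic, but the three-way check described above can be sketched in a few lines. Everything here is illustrative: the names (`orb_embedding`, `device_embedding`, `frame_embedding`), the use of cosine similarity over face embeddings, and the threshold value are all assumptions, not Zoom's or World's real API.

```python
from dataclasses import dataclass

@dataclass
class VerificationInputs:
    orb_embedding: list[float]     # face embedding signed at Orb registration
    device_embedding: list[float]  # embedding from the device camera's live scan
    frame_embedding: list[float]   # embedding from the video frame others see

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def verified_human(inputs: VerificationInputs, threshold: float = 0.9) -> bool:
    """Grant the 'Verified Human' badge only if all three sources
    agree pairwise: registration vs. device, registration vs. frame,
    and device vs. frame."""
    pairs = [
        (inputs.orb_embedding, inputs.device_embedding),
        (inputs.orb_embedding, inputs.frame_embedding),
        (inputs.device_embedding, inputs.frame_embedding),
    ]
    return all(cosine_similarity(a, b) >= threshold for a, b in pairs)
```

The point of requiring all three pairwise matches, rather than just registration vs. frame, is that it closes the obvious loophole: a deepfake piped in as the video frame would match nothing captured live by the device camera.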
The Turing test moved to your Tuesday standup
There's something quietly absurd about this. The Turing test, that famous thought experiment about whether a machine can be indistinguishable from a human, was supposed to play out in philosophy departments and AI research labs. Instead, it's happening in the most mundane setting imaginable: a Tuesday morning standup where someone shares their screen and half the team is on mute. But that's precisely why it matters. Deepfake fraud isn't targeting dramatic scenarios. It's targeting the ordinary interactions where people let their guard down. The Arup case worked because a video call with colleagues felt routine. Nobody expects the CFO on a Wednesday afternoon call to be synthetic. That assumption of normalcy is the attack surface. According to The Next Web, deepfake fraud cost businesses over $200 million in Q1 2025 alone. And as video generation models improve their temporal consistency, producing stable faces without the flicker and warping that once served as forensic evidence, frame-by-frame detection methods are becoming unreliable. Both Zoom and World have acknowledged that analyzing video frames for signs of AI manipulation is no longer a viable long-term strategy.
The privacy trade-off
Here's where it gets uncomfortable. World's solution to the "are you human" problem requires you to scan your eyeballs with a proprietary device. That's a significant ask. Your iris pattern isn't like a password. You can't reset it if something goes wrong. World's Orb system has already drawn regulatory scrutiny. Spain, Germany, the Philippines, and several other countries have taken action over data privacy concerns. The company maintains that iris data is encrypted, sent to your phone, and permanently deleted from the Orb after verification. The cryptographic approach means World itself doesn't hold your biometric data. But "trust us, we delete it" is a hard sell when the thing being deleted is an immutable part of your body. This is the core tension of proof-of-personhood systems: the more reliable the verification, the more intimate the data required. Passwords can be changed. Documents can be reissued. Your iris is forever. And while World's zero-knowledge approach is genuinely privacy-forward compared to traditional KYC (know your customer) verification, it still requires a leap of faith at the hardware level. Are there less invasive alternatives? In theory, yes. Knowledge-based authentication, device-based attestation, social vouching systems, and cryptographic proof without biometrics all exist in various forms. Privado ID and similar projects are exploring proof-of-personhood infrastructure that doesn't depend on body scans. But none of them offer the same level of uniqueness guarantees. If you want to prove that one person equals one identity at scale, biometrics remain the most robust signal available. Everything else can be duplicated, shared, or faked. The question isn't whether biometric verification is the ideal solution. It's whether it's the only solution that actually works at the scale the problem demands.
Every interaction wants a "human or not" layer
Zoom isn't alone in this. The same announcement included partnerships with Tinder (where an estimated 30% of profiles may be AI-enhanced scam accounts), DocuSign, Okta, and Shopify. World also launched AgentKit, a tool that lets websites verify a real human is behind an AI agent's actions. The pattern is clear: every major digital interaction is moving toward requiring some form of human verification. Video calls, document signing, and online dating today. Tomorrow it could be code reviews, customer support chats, email correspondence. Sam Altman said at the launch event that there will soon be "more stuff made by AI than is made by humans" online. If that's even directionally true, the need for a "human or not" layer becomes infrastructure, not a feature. This isn't just about stopping fraud. It's about maintaining the basic social contract of digital communication. When you join a meeting, reply to an email, or review a pull request, there's an implicit assumption that a person is on the other end. As that assumption erodes, every platform will need to decide how it preserves, or replaces, that trust.
Maybe the meeting shouldn't have happened anyway
There's a counterpoint worth sitting with: does it actually matter if someone sends an AI avatar to a meeting? If the AI avatar listens, takes accurate notes, asks reasonable questions, and reports back faithfully, maybe the problem isn't the avatar. Maybe the problem is that we're holding meetings that don't require human presence in the first place. The fact that an AI can attend a meeting on your behalf and nobody notices might say more about the meeting than about the AI. We spent years building tools to make meetings more efficient, then building AI to attend them for us, and now we're building verification systems to make sure we're actually attending them ourselves. There's a circularity here that's hard to ignore. But this framing only works for low-stakes interactions. When the meeting involves authorizing a $25 million wire transfer, hiring decisions, or sensitive negotiations, the identity of the person in the room matters enormously. The challenge is that the same platform hosts both kinds of meetings, and you can't always tell which is which until the stakes reveal themselves.
What this signals
Zoom's partnership with World isn't just a product feature. It's an admission that the visual layer of the internet, live video included, can no longer be trusted at face value. The company that became synonymous with video communication is now saying that video alone isn't enough. This will get more normal before it gets less weird. Expect "Verified Human" badges to spread across platforms the same way blue checkmarks did, with all the same debates about access, equity, and what it means to be verified. Expect friction. Expect people to resist scanning their irises. And expect the technology to keep pushing forward anyway, because the alternative, a digital world where you can never be sure who you're talking to, is worse. We built AI that can perfectly imitate us. Now we need to prove we're not it. The irony writes itself.
References
- Zoom teams up with World to verify humans in meetings (TechCrunch, April 2026)
- Sam Altman's World partners with Zoom, Tinder to prove who's human online (Axios, April 2026)
- Zoom and Tools for Humanity advance trust in the age of AI (Zoom Newsroom, April 2026)
- Tinder and Zoom offer 'proof of humanity' eye-scans to combat AI (BBC, April 2026)
- Zoom adds World ID verification to prove meeting participants are human, not deepfakes (The Next Web, April 2026)
- Gazing Into Sam Altman's Orb Now Proves You're Human on Tinder (WIRED, April 2026)
- British engineering giant Arup revealed as $25 million deepfake scam victim (The Guardian, May 2024)
- Cybercrime: Lessons learned from a $25m deepfake attack (World Economic Forum, February 2025)
- Deepfakes leveled up in 2025: Here's what's coming next (The Conversation, January 2026)
- Proof of personhood: What it is and why it's needed (World Blog)