AI Deepfake Fools Binance's CZ With Realistic Voice
By Hamza Ahmed
4 min read


CZ, the former CEO of Binance, could not tell that the voice in a deepfake video was not his own. Artificial intelligence now replicates faces and voices with dangerous accuracy.

Former Binance CEO Changpeng "CZ" Zhao has recounted an unsettling experience with a video that convincingly imitated his voice using AI technology.

In a post published on Thursday, Zhao said the Mandarin dubbing in the video was so accurate that it was indistinguishable from his real voice.

The video, in which Zhao appears speaking Chinese in a montage of clips and images created with artificial intelligence, raised strong concerns about the unauthorised use of AI to impersonate public figures. CZ described the accuracy of the voice and lip synchronisation as "disturbing".

[Embedded video: the 15-second deepfake clip]

Digital duplication of crypto industry executives via generative AI tools is on the rise, and this case is a clear example of malicious impersonation.

After stepping down as CEO of Binance in 2023, Zhao has remained influential in the cryptocurrency industry and had previously warned the public about the risks of deepfakes.

In October 2024, Zhao issued a specific warning: do not trust any video asking for cryptocurrency transfers, citing the spread of manipulated content portraying him.

Deepfakes pose increasingly severe operational risks for the crypto sector

Zhao's latest experience shows how impersonation methods are evolving from simple text and static-image scams into sophisticated audiovisual simulations.

Patrick Hillmann, former Chief Communications Officer of Binance, said in 2023 that scammers had created a deepfake of him to impersonate him during Zoom meetings with representatives of crypto projects.


The fake had been generated from years of his public interviews and online content, making the fictitious meeting as believable as a real official call from Binance.

Today, advanced voice cloning technologies can imitate a person so well that not even the person being imitated can recognise the fake. This is a danger that goes far beyond identity theft on social networks.

A $25 Million Scam Shows the Concrete Risks of Deepfakes

An incident in February 2024 highlighted those financial risks: employees of the engineering firm Arup in Hong Kong were deceived into transferring some $25 million during a video call held on Microsoft Teams.


According to the South China Morning Post, every other participant on the call was an AI simulation mimicking the voice and appearance of the company's UK-based CFO and other colleagues.

Voice Cloning: An Increasingly Accessible Technology

The technology behind realistic voice cloning is increasingly accessible and requires very little voice data to work.

Services such as ElevenLabs allow users to create realistic voice clones from less than 60 seconds of recorded audio.
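
To make the low barrier concrete, here is a minimal sketch of what a hosted voice-cloning text-to-speech call can look like. It follows the general shape of ElevenLabs' public REST API, but the exact endpoint, fields, and the placeholder key and voice ID should be treated as assumptions rather than a verified recipe:

```python
import requests

# Hedged sketch: shows how little code a hosted voice-cloning TTS call needs.
# Endpoint and fields follow the general shape of ElevenLabs' public REST API;
# treat the exact paths and fields as assumptions, and the IDs as placeholders.
API_KEY = "YOUR_API_KEY"            # hypothetical placeholder
VOICE_ID = "YOUR_CLONED_VOICE_ID"   # a voice cloned from under a minute of audio

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={
        "text": "Please transfer the funds to the usual address.",
        "model_id": "eleven_multilingual_v2",
    },
    timeout=30,
)
response.raise_for_status()

# The response body is synthesized audio in the cloned voice.
with open("cloned_voice.mp3", "wb") as f:
    f.write(response.content)
```

A handful of lines and a short voice sample are enough to produce audio in someone else's voice, which is why reputable providers pair such APIs with consent checks and watermarking.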

According to a survey by a UK financial institution, more than a quarter of UK adults encountered a cloned-voice scam in the past year alone.


Reports from CyFlare indicate that voice cloning APIs can be bought on darknet marketplaces for as little as $5. Commercial models usually include watermarking and consent requirements, while open-source or illicit ones offer no such safeguards.
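
As an illustration of what audio watermarking involves, here is a deliberately naive spread-spectrum sketch in plain NumPy, not any vendor's actual scheme: a keyed pseudorandom sequence is mixed into the signal at low amplitude, and anyone holding the key can later detect it by correlation.

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.01) -> np.ndarray:
    """Mix a low-amplitude pseudorandom chip sequence (derived from `key`) into the audio."""
    rng = np.random.default_rng(key)
    chips = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * chips

def watermark_score(audio: np.ndarray, key: int) -> float:
    """Correlate the audio with the keyed sequence; unmarked audio scores near zero."""
    rng = np.random.default_rng(key)
    chips = rng.choice([-1.0, 1.0], size=audio.shape)
    return float(np.mean(audio * chips))

# Demo on synthetic "audio": ten seconds of a 440 Hz tone at 16 kHz.
t = np.linspace(0.0, 10.0, 160_000, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 440.0 * t)
marked = embed_watermark(clean, key=42)

print(watermark_score(marked, key=42))  # roughly 0.01: watermark present
print(watermark_score(clean, key=42))   # roughly 0.0: no watermark
```

Real schemes are far more robust, since they must survive compression and re-recording, but the principle is the same: only someone holding the key can check whether a clip came from a watermarking synthesiser.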

EU Requires Explicit Labels for Deepfakes, but the Rules Only Take Full Effect in 2026

The EU Artificial Intelligence Act, approved in March 2024, requires publicly disseminated deepfake content to be clearly labelled as artificially generated (Article 50, on transparency obligations for providers and deployers of certain AI systems).
However, these obligations only apply in full from 2026, leaving a significant window of vulnerability.


In the meantime, hardware manufacturers are starting to build detection systems into consumer devices.

At Mobile World Congress 2025 in Barcelona, tools were unveiled that detect audio and video manipulation in real time on the device itself, a possible step towards autonomous authenticity verification, though still far from commercial availability.
