Hong Kong Privacy Commissioner issues guidance note on AI deepfakes
The Office of the Privacy Commissioner for Personal Data (PCPD) has issued its first guidance note on the threat of deepfakes.
The “Abuse of AI Deepfakes: Toolkit for Schools and Parents” (the “Toolkit”)[1], published on 17 December 2025, explains the risks associated with deepfake technology and provides tips on risk mitigation and the protection of personal data privacy.
The Toolkit additionally warns of potential contraventions of the Personal Data (Privacy) Ordinance, Cap. 486 (PDPO) or other offences which may be associated with deepfakes.
Importantly, while the Toolkit is specifically intended for schools and parents, the legal principles and guidance are broadly relevant to other organisations and individuals in Hong Kong.
A. Deepfakes and common abuses
What is deepfake technology?
The term “deepfakes” refers to the use of deep learning – a technique of artificial intelligence – to create seemingly realistic but falsified images or audio or video content relating to people and/or objects.
Deepfakes can convincingly mimic and change a person’s face, voice or actions using personal data contained in images, videos or voice recordings. Common types include:
- Face swapping: Replacing one person’s face with another in a photo or video
- Face re-enactment (puppetry): Copying one person’s facial movements (e.g. expressions and/or lip movements) onto another person in real time or a recorded video
- Face generation: Generating realistic images of people who do not exist
- Lip-syncing: Matching a video of a person’s lip movements to an audio track, often cloned or altered, making it appear that the person said something they never actually said
- Voice cloning: Producing speech that closely resembles a person’s voice, including the accent and intonation
Potential and common abusive uses
Common types of abusive deepfakes include:
- Image-based sexual violence
- Cyberbullying and harassment
- Scams and frauds
- Fake news and disinformation
Abusive or malicious use of deepfakes is often designed to cause reputational damage, severe emotional or psychological harm, humiliation, defamation, sensitive personal data extraction, distortion of truth, or loss of money or other property.
Rising trend in personal data fraud
The PCPD received 1,158 enquiries relating to suspected personal data fraud in 2024 – an increase of 46% compared to 793 similar enquiries in 2023.[2] The PCPD also noted the emergence of scams using AI deepfake technology to swindle money and/or personal data.
Fraudsters’ uses of deepfakes include:
- Manipulating public footage, photos or audio recordings of government officials or celebrities to produce deepfake videos that deceive people into investing in fake investment schemes
- Impersonating victims’ friends, relatives or colleagues through deepfake images or videos created from biometric data, such as facial images or voices harvested from social media, video calls or publicly available footage
- Impersonating other people and feigning interest in developing a relationship with the victims
B. Potential contraventions of PDPO and other offences
Abusive or malicious use of deepfakes may fall foul of the PDPO and other laws.
PDPO
Data Protection Principles 1 and 3
The use of personal data to create and/or share deepfake materials may contravene Data Protection Principle 3, which limits use of personal data to the original purpose of data collection (or a directly related purpose). Any use beyond such purposes is only permitted if the individual concerned has given express and voluntary consent.
The requirements of Data Protection Principle 1 on collection of data may also be contravened if personal data is collected on an unlawful or unfair basis.
Doxxing
Creating or disclosing malicious deepfake materials may constitute a criminal offence.
Specifically, sharing deepfake material containing personal data of an individual without their consent may constitute doxxing under section 64 of the PDPO.
This will be the case if the person shared the material with intent to cause, or was reckless as to whether, any specified harm[3] (including bodily or psychological harm) would be caused, or would likely be caused, to the individual or their family members.
Other criminal offences
Perpetrators using abusive or malicious deepfakes may commit other criminal offences, including:
- Publication or threatened publication of altered intimate images created by deepfake technology without consent – sections 159AA and 159AAE, Crimes Ordinance (Cap. 200)
- Producing child pornography using deepfake technology – section 3, Prevention of Child Pornography Ordinance (Cap. 579)
- Using deepfake technology to commit fraudulent activities – sections 16A and 17, Theft Ordinance (Cap. 210)
C. Risk mitigation and protection of personal data security
Risks faced by organisations
Organisations may fall victim to fraud and impersonation:
Fraudsters may use cloned voices or videos to impersonate individuals, including customers, employees and business partners, to extract sensitive personal data or swindle organisations out of money or other property.
Organisations may also face the risk of facilitating the use of deepfakes:
Organisations involved in the abusive or malicious use of deepfakes may be exposed to legal, compliance and reputational risks, as well as other liabilities.
Practical tips for organisations:
- Strengthen personal data security and enhance identity verification process, particularly for high-risk transactions and communications
- Put in place a response plan with clear procedures for responding to deepfake incidents, with a designated crisis management team for handling these incidents
- Raise awareness within the organisation by providing suitable training on a regular basis to staff and personnel, including fraud detection techniques
For further advice or assistance, please contact our lawyers Sara Or and Michelle Ng.
[1] Abuse of AI Deepfakes – Toolkit for Schools and Parents, Office of the Privacy Commissioner for Personal Data, Hong Kong (17 December 2025)
[2] Fraud Enquiries Soar by Over 40%; Privacy Commissioner’s Office Offers Six Tips to Prevent Fraud (16 January 2025)
[3] Under section 64(6) of the PDPO, “specified harm”, in relation to a person, means (a) harassment, molestation, pestering, threat or intimidation to the person; (b) bodily harm or psychological harm to the person; (c) harm causing the person reasonably to be concerned for the person’s safety or well-being; or (d) damage to the property of the person.