Deepfake Intelligence Glossary

The most comprehensive glossary of synthetic media and identity fraud terms. Your authoritative resource for every deepfake threat, detection tool, and compliance term, built to help fraud, IDV, and compliance teams stay ahead.

#

3D Mask Attack
A spoofing attack using a realistic 3D face mask. Can bypass basic facial recognition systems. Exploits lack of depth and liveness checks. Requires advanced detection and verification methods.

5th AML Directive (AMLD5)
An EU directive strengthening anti-money laundering rules. Expands compliance to new sectors like crypto. Enhances identity verification and transparency. Requires stronger due diligence and monitoring.

A

Account Takeover (ATO)
Unauthorized access to a legitimate user account. Often caused by stolen or reused credentials. Leads to fraud, data theft, and misuse. Prevented with MFA and continuous monitoring.

Active Liveness
A liveness method requiring user interaction. Includes actions like blinking or head movement. Prevents replay and deepfake attacks. Balances security with user experience.

Anti-Money Laundering (AML)
Regulations to prevent illegal money from entering systems. Relies on identity verification and monitoring. Detects suspicious financial activity. Essential for financial compliance and fraud prevention.

Anti-Spoofing
Techniques to detect and prevent spoofing attacks. Protects biometric and identity systems from fraud. Counters deepfakes and synthetic inputs. Uses liveness and multi-layered verification methods.

Authentication
The process of verifying a user’s identity. Uses passwords, devices, or biometrics. Essential for secure access control. Strengthened with multi-factor authentication.

Authorization
Determines what a user is allowed to do. Follows authentication in access control. Limits access based on roles or permissions. Prevents misuse and unauthorized actions.

B

Behavioral Biometrics
Authentication based on user behavior patterns. Includes typing, movement, and interaction signals. Enables continuous and passive verification. Hard for attackers to replicate accurately.

Biometric Authentication
Authentication using biometric traits like face or fingerprint. Provides secure and convenient identity verification. Vulnerable to spoofing without proper safeguards. Requires liveness detection and secure data handling.

Biometric Template
A digital representation of biometric features for matching. Used instead of storing raw biometric data. Improves privacy and system efficiency. Requires secure storage and protection mechanisms.

Biometrics (Biometric Identifier)
Unique physical or behavioral traits used for identification. Includes fingerprints, face, voice, and more. Widely used in authentication systems. Requires strong protection and anti-spoofing measures.

Blockchain Identity
Use of blockchain to manage and verify identities. Enables decentralized and user-controlled identity systems. Provides tamper-resistant and verifiable credentials. Requires strong key management and ecosystem adoption.
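Template matching is usually a similarity comparison between fixed-length embeddings rather than a comparison of raw biometric images. A minimal sketch of that step, assuming templates are float vectors and using an illustrative threshold of 0.8 (real systems tune the threshold against FAR/FRR targets):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length template vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def templates_match(enrolled, probe, threshold=0.8):
    # Illustrative threshold; production systems calibrate it on
    # genuine and impostor score distributions.
    return cosine_similarity(enrolled, probe) >= threshold
```

A near-identical probe scores close to 1.0 and matches; an unrelated one scores near 0 and is rejected.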

C

CCPA (California Consumer Privacy Act)
A California law protecting personal data rights. Gives users control over how data is used. Requires transparency and secure data handling. Applies to identity and biometric data processing.

Challenge-Response Test
A method requiring users to complete real-time tasks. Verifies identity, presence, or human interaction. Prevents replay and deepfake-based attacks. Uses dynamic prompts for stronger security.

Cheapfake (Shallowfake)
Manipulated media created with simple editing techniques. Easier to produce than deepfakes. Used in misinformation and fraud scenarios. Requires verification and contextual analysis.

Continuous Authentication
An approach that verifies identity continuously during a session. Detects changes in behavior or user presence. Reduces risks like session hijacking. Enhances security beyond initial login.

Credential Stuffing
An attack using stolen credentials to access accounts. Relies on password reuse across platforms. Leads to account takeovers and fraud. Prevented with MFA and strong security controls.

Customer Due Diligence (CDD)
A process to assess customer identity and risk. Part of KYC and AML compliance frameworks. Helps detect fraud and illicit activity. Requires ongoing monitoring and verification.
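Challenge-response also has a cryptographic variant used for device and identity verification: the verifier issues a fresh random challenge, and the prover answers with a keyed digest, so captured responses cannot be replayed. A minimal sketch, assuming a pre-shared symmetric key (a simplification; many real deployments use asymmetric keys):

```python
import hashlib
import hmac
import secrets

def issue_challenge() -> str:
    # A fresh random nonce per attempt defeats replay attacks.
    return secrets.token_hex(16)

def respond(shared_key: bytes, challenge: str) -> str:
    # The prover demonstrates possession of the key without sending it.
    return hmac.new(shared_key, challenge.encode(), hashlib.sha256).hexdigest()

def verify(shared_key: bytes, challenge: str, response: str) -> bool:
    expected = respond(shared_key, challenge)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, response)
```

The liveness-style challenge-response described above follows the same pattern, with the "response" being a prompted human action instead of a digest.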

D

Decentralized Identifier (DID)
A user-controlled digital identifier without central authority. Stored and verified using decentralized systems like blockchain. Supports secure and portable identity management. Relies on strong cryptographic control and key security.

Deepfake
AI-generated media that mimics real people’s appearance or voice. Used for impersonation, fraud, and misinformation. Becoming more realistic with generative AI. Requires detection to ensure authenticity and trust.

Deepfake Detection
Methods used to identify AI-generated or manipulated media. Critical for preventing fraud and impersonation. Evolves alongside advancing deepfake technologies. Uses AI, forensics, and verification techniques.

Device Fingerprinting
A method to identify devices using unique configurations. Helps detect suspicious or inconsistent access patterns. Used in fraud detection and risk analysis. Can be spoofed, requiring additional verification signals.

Digital Avatar
A virtual representation of a person in digital environments. Can be static images or AI-driven realistic personas. Used in communication, VR, and online platforms. May enable impersonation if not properly verified.

Digital Identity
A digital representation of a person or entity. Used to access systems and perform transactions. Can be compromised or fabricated for fraud. Requires strong verification and security controls.
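At its simplest, device fingerprinting collapses a set of device attributes into a single stable identifier. An illustrative sketch (real systems weigh many more signals and tolerate partial attribute changes rather than exact-matching a hash):

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    # Canonical JSON makes the hash independent of key ordering.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()
```

The same attributes always yield the same fingerprint; changing any attribute, such as the reported time zone, yields a different one.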

E

eKYC (Electronic Know Your Customer)
A digital process for verifying customer identity remotely. Uses documents, biometrics, and automated checks. Enables fast and scalable onboarding. Requires strong detection to prevent fraud.

Electronic Identification, Authentication and Trust Services (eIDAS)
An EU regulation for digital identity and trust services. Enables cross-border recognition of electronic identities. Defines assurance levels for secure authentication. Supports trusted and legally valid digital transactions.

F

Face Recognition Vendor Test (FRVT)
A NIST program testing face recognition systems. Measures accuracy and fairness. Helps benchmark different algorithms. Guides selection of reliable solutions.

Face Swapping
A technique replacing one face with another in media. Often used in deepfake videos and manipulation. Can enable impersonation during verification. Requires detection of visual inconsistencies.

Facial Recognition
A biometric method using facial features for identity verification. Widely used in authentication and onboarding processes. Vulnerable to deepfakes and spoofing attacks. Requires liveness detection and accuracy improvements.

False Acceptance Rate (FAR)
A metric measuring how often imposters are accepted. Indicates the security level of biometric systems. Higher risk with deepfakes and synthetic inputs. Requires strict thresholds and additional verification.

False Rejection Rate (FRR)
A metric measuring how often real users are rejected. Indicates usability and user experience of systems. Higher security settings can increase rejection rates. Requires balance between security and accessibility.

Federated Identity
A system enabling access across multiple services. Uses one trusted identity provider for authentication. Reduces login friction and centralizes control. Requires strong security to prevent broad compromise.

FIDO2 and WebAuthn
Standards enabling passwordless authentication. Use cryptographic keys stored on devices. Resistant to phishing and credential theft. Improves security and user experience.

Fingerprint Recognition
A biometric method using fingerprint patterns for identity verification. Widely used for secure and convenient authentication. Can be spoofed without proper protections. Requires liveness checks and secure data storage.
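FAR and FRR are two sides of one threshold trade-off: raising the match threshold lowers FAR but raises FRR. A minimal sketch computing both from match-score lists, assuming higher scores mean a better match:

```python
def far(impostor_scores, threshold):
    # Fraction of impostor attempts wrongly accepted (score >= threshold).
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def frr(genuine_scores, threshold):
    # Fraction of genuine attempts wrongly rejected (score < threshold).
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)
```

In practice the threshold is swept across a validation set to find an operating point that meets the system's security and usability targets.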

G

General Data Protection Regulation (GDPR)
An EU regulation protecting personal data and privacy. Gives users control over their data. Requires secure and transparent processing. Applies to many global organizations.

Generative Adversarial Network (GAN)
An AI model where two networks generate and evaluate data. Used to create realistic synthetic content. Powers many deepfake technologies. Also used to improve detection systems.

Generative AI
AI systems that create new content like text, images, or audio. Used in many applications and industries. Also enables deepfakes and synthetic identities. Requires detection and safeguards.

H

HIPAA (Health Insurance Portability and Accountability Act)
A U.S. law protecting health-related data. Requires secure handling of patient information. Includes strict access and authentication rules. Ensures privacy in healthcare systems.

I

Identity Fraud
The misuse of identity data to commit fraud. Includes theft, synthetic identities, and account takeover. Impacts individuals and organizations. Requires strong identity controls.

Identity Proofing
The process of validating identity information during onboarding. Ensures the person is real and legitimate. Includes document and biometric checks. Builds trust at the start.

Identity Provider (IdP)
A system that verifies and manages user identities. Provides authentication to connected services. Central to SSO and access management. Must be strongly secured.

Identity Theft
The use of stolen personal data to impersonate someone. Used for financial or criminal gain. Often starts with data breaches. Causes major financial and personal damage.

Identity Verification
The process of confirming a person’s identity. Uses documents, biometrics, or data checks. Essential for onboarding and access. Prevents fake or stolen identities.

Impersonation Fraud
Fraud where attackers pretend to be someone else. Used to gain access or deceive others. Often involves social engineering or AI. Highly effective and growing.

Iris Recognition
A biometric method using patterns in the iris. Highly accurate and stable over time. Used in secure environments. Requires liveness and privacy protection.

ISO 27001
A standard for managing information security systems. Ensures protection of sensitive data. Includes risk management and controls. Demonstrates strong security practices.

ISO/IEC 30107 (Biometric Presentation Attack Detection)
A standard for evaluating biometric anti-spoofing systems. Defines testing methods for presentation attacks. Ensures system reliability. Used for certification and benchmarking.


K

Know Your Customer (KYC)
A process to verify customer identity and risk profile. Required in financial services. Prevents fraud and money laundering. Includes onboarding and ongoing monitoring.

L

Lip Sync Deepfake
A manipulated video syncing fake audio with real visuals. Makes people appear to say things they didn’t. Used in misinformation and fraud. Requires audio-visual consistency checks.

Liveness Detection
A method to ensure biometric input comes from a real person. Prevents spoofing with photos or videos. Can be active or passive. Essential for secure identity verification.

M

Media Forensics
The analysis of media to detect manipulation or AI generation. Uses algorithms and forensic techniques. Helps identify deepfakes and edits. Supports trust in digital content.

N

NIST Digital Identity Guidelines (SP 800-63)
A framework defining identity assurance and authentication standards. Widely used for secure identity systems. Defines different assurance levels. Guides best practices globally.


P

Palm Vein Recognition
A biometric method using vein patterns in the palm. Highly secure due to internal features. Difficult to replicate or steal. Requires specialized hardware.

Passive Liveness
A liveness method that verifies real human presence in the background. Does not require user interaction. Analyzes subtle biometric signals. Improves user experience but faces advanced spoofing risks.

Phishing
A fraud technique where attackers impersonate trusted entities. Used to steal credentials or sensitive data. Often delivered via email, SMS, or calls. A major entry point for identity fraud.

Presentation Attack
An attempt to fool biometric systems with fake inputs. Includes photos, masks, or recordings. Targets sensor-level vulnerabilities. Requires anti-spoofing detection.

Presentation Attack Detection (PAD)
Techniques used to detect fake biometric inputs. Includes hardware and software methods. Essential for secure biometric systems. Prevents spoofing and fraud.

PSD2 (Revised Payment Services Directive)
An EU regulation improving payment security and competition. Introduces strong authentication requirements. Enables open banking through APIs. Enhances user control over financial data.


R

Retina Scan
A biometric method analyzing blood vessel patterns in the eye. Highly accurate and difficult to fake. Used in high-security environments. Limited by hardware and usability constraints.

Risk-Based Authentication (RBA)
An authentication approach based on real-time risk analysis. Evaluates behavior, device, and context. Adjusts security requirements dynamically. Improves both security and user experience.
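The core of risk-based authentication is a scoring step that combines contextual signals, followed by a decision that allows, steps up, or denies. A minimal sketch with hypothetical signals, weights, and thresholds (production systems typically learn these from fraud data):

```python
def risk_score(signals: dict) -> int:
    # Hypothetical additive weights for illustration only.
    score = 0
    if signals.get("new_device"):
        score += 40
    if signals.get("unusual_location"):
        score += 30
    if signals.get("impossible_travel"):
        score += 50
    if signals.get("vpn_or_proxy"):
        score += 10
    return score

def auth_decision(signals: dict) -> str:
    score = risk_score(signals)
    if score >= 70:
        return "deny"
    if score >= 30:
        return "step_up"  # e.g. require a second factor
    return "allow"
```

The "step_up" branch is exactly the step-up authentication pattern described under S: extra friction is applied only when the measured risk justifies it.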

S

Self-Sovereign Identity (SSI)
A model where individuals control their own identity data. Credentials are stored and shared by the user. Reduces reliance on centralized systems. Improves privacy and data ownership.

Selfie Verification
A biometric method where users take a selfie to verify identity. Compared against ID documents or stored data. Often includes liveness detection. Used in digital onboarding and KYC.

Single Sign-On (SSO)
An authentication method allowing access to multiple systems with one login. Simplifies user experience across platforms. Centralizes authentication and access control. Requires strong protection to avoid single-point failure.

Social Engineering
A manipulation technique used to trick people into sharing sensitive information. Exploits trust, urgency, or fear. Common methods include phishing and impersonation. Targets the human element of security.

Spoofing (Biometric Spoofing)
The use of fake biometric data to deceive systems. Includes photos, videos, or synthetic inputs. Targets identity verification processes. Requires strong anti-spoofing measures.

Step-Up Authentication
An additional authentication step triggered by risk or sensitive actions. Used during transactions or unusual behavior. Improves security without adding friction everywhere. Balances user experience and protection.

Strong Customer Authentication (SCA)
A regulatory requirement for multi-factor authentication in payments. Requires at least two independent identity factors. Helps reduce fraud in digital transactions. Commonly used in banking and financial services.

Synthetic Data
Artificially generated data that mimics real-world data. Used for training AI models and simulations. Can improve privacy by avoiding real data use. Also enables synthetic identity fraud.

Synthetic Identity
A fabricated identity combining real and fake information. Used to create a new persona for financial fraud.

Synthetic Media
AI-generated or manipulated visual, audio, or video content. Can appear highly realistic. Used in both creative and malicious contexts. Requires detection for trust and security.
Read more


V

Verifiable Credential
A digital credential that proves identity claims using cryptographic verification. It can be issued by trusted organizations and verified instantly. Prevents tampering and forgery of identity data. Supports secure and privacy-preserving identity sharing.

Voice Cloning
The use of AI to replicate a person’s voice realistically. Requires only a small sample of original audio. Can be used for impersonation and fraud. Poses risks to voice-based authentication systems.

Voice Recognition (Speaker Recognition)
A biometric method that verifies identity based on voice characteristics. Analyzes tone, pitch, and speaking patterns. Used in call centers and remote authentication. Requires protection against spoofing and voice cloning.
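The tamper-evidence of a verifiable credential comes from binding the claims to an issuer proof: any change to the claims invalidates the proof. A simplified sketch using an HMAC as a stand-in for the issuer signature (real verifiable credentials use asymmetric signatures such as Ed25519, so verifiers never hold a secret key):

```python
import hashlib
import hmac
import json

# Stand-in for the issuer's signing key. In real systems this would be
# a private key, with verification done against the public key.
ISSUER_KEY = b"issuer-secret"

def issue_credential(claims: dict) -> dict:
    # Canonical JSON ensures the proof covers the claims deterministically.
    payload = json.dumps(claims, sort_keys=True)
    proof = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"claims": claims, "proof": proof}

def verify_credential(credential: dict) -> bool:
    payload = json.dumps(credential["claims"], sort_keys=True)
    expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["proof"])
```

Editing any claim after issuance, for example flipping an age attestation, causes verification to fail.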


Z

Zero Trust
A security model that assumes no user or system should be trusted by default. Every access request must be verified continuously. It reduces risks from internal and external threats. It is a core principle in modern cybersecurity.
FAQ

We’ve got the answers to your questions

Are deepfakes illegal?

Deepfakes themselves are not inherently illegal, but their use can be. The legality depends on the context in which a deepfake is created and used. For instance, using deepfakes for defamation, fraud, harassment, or identity theft can result in criminal charges. Laws are evolving globally to address the ethical and legal challenges posed by deepfakes.

How do you use deepfake AI?

Deepfake AI technology is typically used to create realistic digital representations of people. However, at DuckDuckGoose, we focus on detecting these deepfakes to protect individuals and organizations from fraudulent activities. Our DeepDetector service is designed to analyze images and videos to identify whether they have been manipulated using AI.

What crime is associated with deepfake creation or usage?

The crimes associated with deepfakes can vary depending on their use. Potential crimes include identity theft, harassment, defamation, fraud, and non-consensual pornography. Creating or distributing deepfakes that harm individuals' reputations or privacy can lead to legal consequences.

Is there a free deepfake detection tool?

Yes, there are some free tools available online, but their accuracy may vary. At DuckDuckGoose, we offer advanced deepfake detection services through our DeepDetector API, providing reliable and accurate results. While our primary offering is a paid service, we also provide limited free trials so users can assess the technology.

Are deepfakes illegal in the EU?

The legality of deepfakes in the EU depends on their use. While deepfakes are not illegal per se, using them in a manner that violates privacy, defames someone, or leads to financial or reputational harm can result in legal action. The EU has stringent data protection laws that may apply to the misuse of deepfakes.

Can deepfakes be detected?

Yes, deepfakes can be detected, although the sophistication of detection tools varies. DuckDuckGoose’s DeepDetector leverages advanced algorithms to accurately identify deepfake content, helping to protect individuals and organizations from fraud and deception.

Can you sue someone for making a deepfake of you?

Yes, if a deepfake of you has caused harm, you may have grounds to sue for defamation, invasion of privacy, or emotional distress, among other claims. The ability to sue and the likelihood of success will depend on the laws in your jurisdiction and the specific circumstances.

Is it safe to use deepfake apps?

Using deepfake apps comes with risks, particularly regarding privacy and consent. Some apps may collect and misuse personal data, while others may allow users to create harmful or illegal content. It is important to use such technology responsibly and to be aware of the legal and ethical implications.

Stop Deepfake-Driven Attacks Before They Take Over

Deploy sub-second deepfake detection across any app or workflow with us. Stop synthetic attacks others miss, without slowing down your systems.