Feb 3, 2026
Executive Summary
Deepfake-driven identity fraud has rapidly emerged as one of the most critical digital trust threats facing Southeast Asia, with Indonesia and Vietnam at the epicenter of this evolution. Fueled by advances in generative AI, fraudsters are now able to convincingly replicate human faces, voices, and identities at scale, undermining traditional digital onboarding, biometric authentication, and remote verification systems across banking, fintech, telecom, and digital services.
The region has experienced an unprecedented surge in AI-enabled fraud, with deepfake incidents in Asia-Pacific increasing by over 1,500% in a single year. High-profile cases, from executive impersonation via deepfake video conferences to AI-cloned voice scams and large-scale synthetic identity creation, demonstrate that these threats are no longer theoretical. They are operational, financially damaging, and increasingly difficult to detect using legacy controls.
Indonesia and Vietnam are particularly exposed due to rapid digitalization, mass adoption of mobile banking and fintech services, expanding use of biometric eKYC, and the availability of compromised identity data. While national digital ID initiatives and biometric mandates aim to strengthen security, they have simultaneously raised the stakes: facial biometrics themselves have become a primary attack surface for deepfake and injection-based fraud.
Banks, fintechs, crypto platforms, and telecom operators now face heightened regulatory, financial, and reputational risk. Regulators across ASEAN have responded with stricter eKYC, biometric verification mandates, SIM registration reforms, and enhanced AML requirements, signaling a clear shift toward higher-assurance digital identity frameworks. However, regulation alone is insufficient against adversaries leveraging AI at industrial scale.
The article concludes that defending digital trust in Southeast Asia will require an AI-vs-AI strategy, combining advanced liveness detection, injection attack detection, biometric forensics, behavioral analytics, and strong governance frameworks. Technology providers, regulators, and enterprises must collaborate closely to close systemic gaps, align standards, and ensure that digital transformation proceeds without eroding trust.
Ultimately, Southeast Asia stands at a pivotal moment: the same technologies driving inclusion and growth can either enable large-scale fraud or, if deployed responsibly and intelligently, form the foundation of a more secure and resilient digital economy.
Introduction
Southeast Asia is grappling with a surge of high-tech identity fraud fueled by deepfakes: AI-generated synthetic images, video, or audio that impersonate real people. Recent trends show an alarming rise in such fraud across the region. The Asia-Pacific saw a 1,530% increase in deepfake cases between 2022 and 2023, the second-highest surge globally. Notably, Vietnam has experienced the region’s highest jump in deepfake-related fraud incidents (25.3%), underscoring how acutely this threat is hitting certain markets. These deepfake-driven scams undermine digital security by hijacking identities and deceiving both people and verification systems.
From bank accounts opened under false identities to scam calls using AI-cloned voices, the integrity of identity in digital transactions is under attack. This rising menace comes at a time when Southeast Asian nations are rapidly digitizing. Millions of consumers in Indonesia, Vietnam, and neighboring countries are embracing online banking, fintech apps, and digital services, often for the first time. This digital boom brings convenience and financial inclusion, but also creates fertile ground for fraudsters. Cybercriminals are exploiting advanced AI to create “perfect forgeries” of identities, tricking facial recognition logins, impersonating officials on video calls, and spreading misinformation.
The result is a new class of fraud that is harder to detect and potentially far more damaging than traditional identity theft. In this article, we examine real-life cases of deepfake and identity fraud in Southeast Asia, explore why markets like Indonesia and Vietnam are especially vulnerable, and review how regulators and technology providers are responding with solutions to protect businesses and the public.
Real Cases Highlighting the Threat
Real-world incidents across Asia illustrate how deepfakes and identity fraud are no longer theoretical, they are actively being used in scams today. In late 2023, videos of Singapore’s Prime Minister Lee Hsien Loong and Deputy PM Lawrence Wong were circulated promoting a cryptocurrency investment and were later exposed as deepfakes. These AI-doctored clips stole the likeness of public figures to lend credibility to a fraudulent scheme, duping viewers who trusted the source.
Another disturbing example occurred in Hong Kong, where the local branch of a multinational firm was swindled out of HK$200 million (US$25.6 million) after staff were tricked by a deepfake video conference. In that 2024 case, the first of its kind in Hong Kong, scammers created realistic video avatars of the company’s CFO and other executives, then, on a multi-person video conference call, “ordered” an urgent fund transfer. The employees saw what looked and sounded like their CFO instructing them, and complied, only later discovering that every person on the call (except the victims themselves) was an AI-generated impostor.
Such incidents are not isolated. Tech-savvy criminals in Thailand have used deepfake videos to impersonate police officers in live video calls, extorting victims by making it appear as if an official is demanding money. In one scam, fraudsters took publicly available footage of a real police officer (for example, from a press conference) and digitally grafted it onto video calls, so the officer’s face seemingly spoke the scammer’s words. Thai police issued warnings in 2022 about this call-center scam tactic, cautioning that people might be easily deceived if they don’t scrutinize the image closely. Meanwhile in Vietnam, authorities have warned of a rise in deepfake scam calls targeting bank customers and citizens.
There have been reports of fraudsters using AI to mimic the voice and face of relatives or officials over video, attempting to trick victims into urgent money transfers. This trend led Vietnam’s Ministry of Information and Communications to alert the public about sophisticated spoofing attacks using deepfakes, voice cloning, and other AI.
Beyond deepfakes, more conventional identity fraud is also rampant. Identity theft and synthetic identities (fake identities pieced together from real and fictitious data) have plagued Southeast Asian businesses, enabling crimes ranging from credit card fraud to fraudulent loan approvals.
For instance, Indonesia, Hong Kong, and Cambodia each saw identity fraud rates more than double from 2021 to 2023 according to a global fraud study. In one global case highlighting the scale of the threat, investigators found a single deepfake algorithm had generated over 400 fraudulent “customers” on video KYC calls for a bank – essentially an assembly line of fake identities to open accounts. While that case was outside the region, it underlines the kind of industrialized fraud methods now emerging. Taken together, these examples show that Southeast Asia’s fraudsters are innovating rapidly. They are weaponizing AI to impersonate trusted voices and faces, bypassing remote identity checks and social engineering victims, often with devastating consequences.
Why Indonesia and Vietnam Face a Growing Risk
Among Southeast Asian countries, Indonesia and Vietnam stand out as particularly vulnerable to identity and deepfake-driven fraud, due to a confluence of factors. Both nations have large, young populations that are enthusiastically adopting digital services – Vietnam’s “online-native” population and booming digital economy make it an especially appealing target for fraudsters. Millions of Vietnamese and Indonesians have come online in recent years via mobile banking, e-wallets, cryptocurrency trading, and e-commerce. This digital uptake, while positive, expands the attack surface for scammers, many of whom operate transnationally.
At the same time, these markets are undergoing transitions in their identification systems and financial infrastructure that savvy criminals seek to exploit. Vietnam, for example, is pushing an ambitious digital ID program (VNeID) and cashless payments drive, and its banks historically relied on face-to-face verification. The rapid shift to online onboarding during the pandemic opened new doors for fraud. In Indonesia, the government has issued over 200 million biometric national ID cards (e-KTP), and banks/fintechs tap into a central ID database for e-KYC, yet data breaches and leaks of ID numbers (NIK) have occurred, feeding the underground market for identity data. Fraud rings compile stolen personal data and use AI “face swap” or voice synthesis tools to create credible false identities, knowing that many companies’ verification checks may not catch an AI-generated selfie.
Statistics bear out the rising risk. Vietnam now ranks among the highest in the world for deepfake fraud prevalence, alongside technologically advanced Japan. And Indonesia has faced a wave of identity fraud, with incidents more than doubling in recent years. Fraudulent activities range from applying for loans or credit cards online under someone else’s identity, to SIM card registration scams, to mass phishing campaigns augmented with deepfake audio. Notably, AI-driven fraud has become the single largest challenge across industries globally, and in 2023 “AI-powered fraud” (including deepfakes) overtook traditional fake IDs and account takeovers as the top identity threat. Southeast Asia’s dynamic economies unfortunately provide plenty of incentive and opportunity for these new scams: a huge user base new to digital finance, varying levels of security maturity across companies, and patchy awareness among the public. All these factors make Indonesia, Vietnam, and their neighbors fertile ground for fraudsters testing the latest AI deception techniques.
Impact on Banking and Fintech
The banking and fintech sector is at the forefront of this battle. Banks, digital lenders, and payment platforms in Southeast Asia have embraced digital onboarding – allowing customers to open accounts or borrow money simply by submitting selfies and ID photos through an app. This convenience, however, comes with a glaring challenge: How do you know the person on the other side of the screen is real? Deepfake technology is directly attacking the biometric security that banks rely on.
In Vietnam, for instance, the central bank now mandates facial authentication for many online banking and card transactions, reflecting how critical face biometrics have become for security. Yet criminals are responding with spoofing attacks and deepfakes that can fool basic facial recognition systems. There have been cases (and many more attempts) where fraudsters present a video of a face, either an edited version of the victim’s face or a wholly AI-generated one, to trick a bank’s “selfie” verification. If the bank’s system lacks robust liveness detection, it may accept the fake face and create an account for a fraudster. The implications are severe: that account can then be used for money laundering, illicit transfers, or defrauding lenders and customers.
Financial regulators are increasingly alert to these risks. Vietnam’s State Bank has been tightening eKYC rules year by year, requiring stronger verification to shore up trust. Since 2024, Vietnamese banks must implement end-to-end biometric verification for significant online transactions. And starting in January 2026, Vietnam will make biometric identity checks mandatory for opening any new bank account or payment card – banks will need to verify a customer’s face in person or via a trusted biometric database before activating services. These moves came after a series of fraud incidents and demonstrate that regulators see biometric authentication as both necessary and in need of improvement. Banks in Vietnam have already stepped up: for example, Cake Digital Bank became the first digital-only bank in Southeast Asia to pass rigorous iBeta Level 2 tests for face biometric spoofing detection. Cake’s in-house developed “Face Authen” now uses passive liveness detection on millions of users to ensure a real person is present, meeting stringent State Bank of Vietnam security requirements. This kind of investment is vital for thwarting deepfake attempts.
Fintech startups and crypto platforms, which often have less legacy infrastructure, are similarly in the crosshairs. Crypto exchanges have been a prime target of deepfake-enabled identity fraud, accounting for 88% of all deepfake cases detected in 2023. Fraudsters use stolen ID documents and synthesized faces to bypass exchange KYC and then trade illicit funds.
Southeast Asia’s vibrant fintech scene (from e-wallets to P2P lenders) likewise faces constant identity spoofing attempts. Some have started layering additional defenses; for instance, behavioral biometrics (monitoring a user’s unique keystroke or usage patterns) are being explored to catch impostors who might pass face ID but behave differently, as sketched below. As Feedzai noted, Vietnamese banks that rely only on facial recognition should consider behavioral biometrics as a backstop, precisely because deepfakes and stolen credentials can defeat one-time face checks. In sum, banks and fintechs are learning that AI-based fraud is an ever-evolving adversary. The cost of failure is high, from direct monetary losses to reputational damage and regulatory penalties, so this sector has become both a target and a testing ground for anti-fraud innovations in the deepfake era.
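To make the behavioral-biometrics idea concrete, the sketch below scores a session’s keystroke timing against a previously enrolled user profile. The feature choice, profile values, and threshold are illustrative assumptions, not any particular vendor’s method; in practice such signals feed a broader risk engine rather than a hard pass/fail rule.

```python
# Minimal keystroke-dynamics sketch (illustrative only, not a vendor's method).
from statistics import mean

def dwell_and_flight(key_down_ms, key_up_ms):
    """Simple keystroke features: dwell (hold) times and flight (gap) times in ms."""
    dwell = [up - down for down, up in zip(key_down_ms, key_up_ms)]
    flight = [key_down_ms[i + 1] - key_up_ms[i] for i in range(len(key_up_ms) - 1)]
    return dwell, flight

def anomaly_score(values, profile_mean, profile_std):
    """Mean absolute z-score of this session's values against the enrolled profile."""
    if not values or profile_std <= 0:
        return 0.0
    return mean(abs(v - profile_mean) / profile_std for v in values)

# Enrolled dwell-time profile for the genuine user (illustrative numbers).
PROFILE = {"dwell_mean": 95.0, "dwell_std": 12.0}

def screen_session(key_down_ms, key_up_ms, threshold=3.0):
    dwell, _ = dwell_and_flight(key_down_ms, key_up_ms)
    score = anomaly_score(dwell, PROFILE["dwell_mean"], PROFILE["dwell_std"])
    # A high score never proves fraud on its own; it routes the session to step-up checks.
    return "step_up_review" if score > threshold else "pass"
```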
Threats in the Telecom Sector
While finance often grabs headlines, the telecommunications sector in Southeast Asia is another critical arena for identity fraud – one that increasingly intersects with deepfake concerns. The humble SIM card and mobile account have outsized importance: scammers often obtain SIM cards under false identities to carry out fraud (for example, to receive banking OTP codes or make scam calls anonymously). In Indonesia, this problem reached a tipping point with criminals exploiting loopholes in the SIM registration system by using stolen ID numbers, allowing one person to register dozens of prepaid SIMs for shady uses. In response, the Indonesian government is moving to drastically tighten telecom ID checks.
In October 2025, Indonesia began trialing facial recognition for SIM card registration in collaboration with its largest mobile carrier, Telkomsel. Under this pilot program, new mobile subscribers must undergo an on-site face scan with a liveness detection step (meeting ISO 30107 anti-spoofing standards) to prove they are physically present and alive. The system then matches the live face against the national ID database in real time. If the government rolls this out nationwide, it means “one person, one SIM” – greatly curbing the ability of fraud rings to use fake or multiple identities in the mobile ecosystem. Regulators noted this could sharply reduce SIM-related scams (such as SMS phishing and spam calls), improve compliance, and build public trust in telecom security. Indonesia’s effort mirrors a similar requirement in Thailand, which already mandates face scans for activating new SIM cards as a fraud prevention measure.
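As a rough illustration of how such a “one person, one SIM” gate could be wired together, the sketch below chains a liveness check on the live capture, a 1:1 face match against the record held for the applicant’s NIK, and a cap on SIMs per identity. The callables, thresholds, and SIM limit are assumptions for illustration; the actual carrier and Dukcapil integration details are not public.

```python
# Hypothetical "one person, one SIM" registration gate. liveness_check,
# face_match, and sims_registered stand in for real integrations; the
# thresholds and SIM cap are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RegistrationResult:
    approved: bool
    reason: str

def register_sim(selfie: bytes, nik: str,
                 liveness_check, face_match, sims_registered,
                 liveness_threshold=0.90, match_threshold=0.85, max_sims=3):
    # 1. Presentation-attack check on the live capture (ISO 30107-style liveness).
    if liveness_check(selfie) < liveness_threshold:
        return RegistrationResult(False, "liveness_failed")
    # 2. 1:1 match of the live face against the photo held for this NIK.
    if face_match(selfie, nik) < match_threshold:
        return RegistrationResult(False, "face_mismatch")
    # 3. Enforce a cap on active prepaid SIMs per verified identity.
    if sims_registered(nik) >= max_sims:
        return RegistrationResult(False, "sim_limit_reached")
    return RegistrationResult(True, "approved")
```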
Another emerging issue is how deepfakes intersect with telecom-facilitated scams. Many classic frauds are carried out over phone calls or video chats – and here, deepfake audio and video can turbocharge the deception.
As mentioned, Thai police encountered scammers making WhatsApp/LINE video calls with deepfake videos of officers. Similarly, across the region there is concern that “vishing” (voice phishing) scams will employ AI-generated voices to impersonate bank officials or family members in distress. A victim who believes they recognize the caller’s voice is far more likely to follow instructions. Telecom networks thus inadvertently carry these deepfake scam attempts. Governments are also looking at broader solutions: in Indonesia, authorities have considered requiring social media and online accounts to be tied to verified real identities, possibly via biometric checks, to deter anonymous abuse and deepfake misinformation.
This proposal, floated by Indonesia’s Ministry of Communication and Digital Affairs, would use face verification to ensure each social media account corresponds to a real person. While it raises privacy questions, it underscores how seriously governments view the threat of deepfakes spreading via telecom and internet platforms.
Lastly, telecom companies themselves are upping security for customer interactions. Some operators are exploring voice biometric verification for customer service calls to prevent impersonation. And as deepfake detection tools improve, call centers may deploy AI that can flag whether a caller’s voice or video feed is likely synthetic. In summary, telecom in Southeast Asia is both a target of identity fraud (through fraudulent SIMs and accounts) and a conduit for deepfake scams. The sector’s response, from Indonesia’s biometric SIM registration to regional discussions on identity-linked internet use, will play a key role in reducing the reach of these frauds.
Regulatory Responses Across Southeast Asia
Across Southeast Asia, regulators are increasingly confronting the reality that identity fraud has entered a new phase. The rise of deepfakes and AI-generated identities has exposed structural weaknesses in digital onboarding models that were designed for a very different threat landscape. As a result, regulatory responses across the region have begun to shift from focusing primarily on access and inclusion, toward reinforcing trust, assurance, and accountability in digital identity systems.
While the pace and form of regulatory action varies by country, a consistent pattern is emerging. Authorities are tightening electronic KYC requirements, elevating expectations around biometric verification, and reassessing the adequacy of remote identity checks that rely on static documents or basic facial matching.
In Malaysia, this shift is visible in the evolution of Bank Negara Malaysia’s eKYC framework. The regulator has made it clear that remote onboarding must be supported by multiple layers of verification, including document integrity checks, biometric matching, and effective liveness controls. Importantly, responsibility for these frameworks is pushed up to the board level, signaling that digital identity risk is no longer viewed as a purely operational issue, but one with governance and supervisory implications.
Thailand has taken a similarly pragmatic approach. Regulators permit remote onboarding but require additional safeguards when physical presence is absent. Biometric verification is complemented by stronger document validation and enhanced due diligence, particularly for higher-risk customers. These measures reflect a recognition that identity assurance must increase as onboarding becomes more digital and less personal.
In Indonesia, the regulatory response has been shaped by both scale and complexity. With one of the world’s largest populations and a rapidly expanding digital economy, regulators have sought to strengthen identity verification while remaining mindful of privacy and proportionality. Financial institutions are expected to integrate electronic KYC processes with the national population database, while also maintaining transaction monitoring capable of detecting identity misuse and mule activity.
At the same time, Indonesia’s Personal Data Protection Law has introduced clear constraints around the collection and use of biometric data, classifying it as sensitive personal information. This has influenced how regulators approach initiatives such as biometric SIM registration or facial verification for digital services. The regulatory stance is not anti-biometric, but cautious: stronger identity controls are encouraged, provided they are legally grounded, transparent, and subject to appropriate safeguards.
Vietnam stands out for the decisiveness of its recent regulatory actions. Faced with a sharp rise in identity-related fraud, Vietnamese authorities have embedded biometric verification more deeply into the regulatory framework. Amendments to AML and banking laws have reinforced customer due diligence obligations, while the central bank has moved to require biometric authentication for certain digital transactions.
Most notably, Vietnam has announced that biometric identity verification will become mandatory for all bank account and payment card openings from 2026. This marks a clear policy choice: digital banking growth must be anchored to higher-assurance identity verification, often linked to national identity infrastructure. While Vietnam does not yet have legislation specifically targeting deepfakes, regulators have shown a willingness to use existing fraud, cybersecurity, and data-protection laws to address AI-enabled abuse, supported by public awareness initiatives.
Beyond individual jurisdictions, regulators across Southeast Asia are increasingly aware that identity fraud does not respect sectoral or national boundaries. Banking, fintech, telecom, and digital platforms are now part of the same risk ecosystem. Fraud that is blocked in one channel is often displaced into another. This has prompted greater attention to cross-sector coordination and information sharing, particularly where SIM misuse, social engineering, and digital payments intersect.
There is also growing interest in international regulatory developments. Authorities in the region are closely observing measures introduced elsewhere, such as requirements to label AI-generated content, explicit criminalisation of malicious deepfake use, and supervisory guidance focused on protecting digital channels from synthetic identity attacks. In some markets, regulators have already begun issuing targeted advisories to financial institutions, urging heightened scrutiny of biometric authentication failures and stronger escalation processes for suspicious identity events.
Taken together, these developments point to a clear regulatory direction. Digital identity systems are expected to deliver higher assurance, not just greater convenience. Biometric verification is increasingly treated as a baseline control, but one that must be supported by liveness detection, environmental and device checks, and ongoing monitoring. At the same time, regulators remain conscious of privacy, data protection, and proportionality, seeking to ensure that stronger controls do not undermine public trust.
Southeast Asian regulators are actively recalibrating their frameworks to reflect the realities of AI-enabled identity fraud. The emphasis is no longer on whether digital identity can be trusted, but on how that trust is earned, maintained, and enforced as digital economies continue to scale.
Technology Solutions and the Way Forward
As fraudsters arm themselves with AI, businesses and solution providers are responding in kind, deploying advanced technologies to detect fakes and verify identities with greater assurance.
A key defense is liveness detection: techniques to confirm that there is a real, live person in front of the camera during a biometric check, not a deepfake or a recording. Companies like Oz Forensics have pioneered AI-driven liveness and deepfake detection solutions, which can be integrated into banks’ and fintechs’ onboarding workflows. For example, Oz Forensics’ “Oz Liveness” uses sophisticated algorithms to spot signs of spoofing in a video feed, from unnatural blinking patterns to discrepancies in light reflection on skin – all in a matter of seconds during a selfie capture. By early 2025, Oz Forensics even launched its liveness detection as a cloud-based SaaS service in Indonesia, highlighting demand from Indonesian businesses to quickly bolster their defenses against deepfakes and presentation attacks.
With such a service, a fintech app in Jakarta can, for instance, verify that a new user’s selfie is genuinely “live” (not a stolen photo or AI avatar) simply by calling Oz’s API; no complex hardware is needed, since the service leverages standard smartphone cameras and cloud AI.
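As a rough illustration of what such an integration looks like from the app’s side, the snippet below posts a selfie to a cloud liveness endpoint and checks the returned score. The URL, authentication scheme, and response fields are placeholders invented for this sketch, not Oz Forensics’ documented API; the vendor’s SDK or API reference defines the real contract.

```python
# Hedged sketch of calling a cloud liveness service during onboarding.
# The endpoint, auth header, and response fields below are placeholders,
# not any specific vendor's documented API.
import requests

LIVENESS_ENDPOINT = "https://liveness.example.com/v1/analyze"  # placeholder URL

def is_live_selfie(selfie_jpeg: bytes, api_key: str, threshold: float = 0.9) -> bool:
    resp = requests.post(
        LIVENESS_ENDPOINT,
        headers={"Authorization": f"Bearer {api_key}"},
        files={"selfie": ("selfie.jpg", selfie_jpeg, "image/jpeg")},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()
    # Assumed response shape: {"liveness_score": 0.97, "spoof_detected": false}
    return (result.get("liveness_score", 0.0) >= threshold
            and not result.get("spoof_detected", True))
```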
Another breakthrough is in injection attack detection (IAD) – essentially catching when a fraudster tries to “inject” a fake video or image feed to the verification system. Instead of looking purely at biometric traits, IAD techniques monitor the software/hardware environment: is the video coming from a real camera, or from a virtual camera driver? Is the device possibly rooted or running an emulator to push synthetic media? These are tell-tale signs of deepfake or bot-driven attempts.
Independent tests have shown the effectiveness of Oz Forensics’ IAD technology on this front. In 2025, BixeLab (a reputable biometric testing lab) evaluated Oz Forensics’ system using a range of attack simulations, from static photos and masks to pre-recorded videos and AI deepfake videos, and found that Oz’s solution blocked 100% of injection attacks, with a 0% false acceptance rate. In other words, none of the fake feeds, including deepfakes and even clever “face morphing” images, fooled the system. This level of performance, confirmed against emerging ISO standards for biometric anti-spoofing, gives a glimpse of how technology can stay one step ahead. By detecting the method of delivery for a fake (e.g., a virtual camera or an abnormal network feed), IAD acts as a powerful complement to visual analysis. It means that even if an AI-generated face looks incredibly real, the act of injecting it into a verification session can be caught and stopped.
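The sketch below illustrates the kind of environment signals an injection-attack check might weigh, such as a virtual camera driver, an emulator or rooted device, and broken frame timing. The signal names and decision rules are simplified assumptions; commercial IAD products inspect far more device and stream integrity evidence than this.

```python
# Illustrative injection-attack screening signals gathered alongside the
# biometric capture. Signal names and rules are assumptions, not a product spec.
SUSPICIOUS_CAMERA_KEYWORDS = ("virtual", "obs", "droidcam", "manycam")

def injection_risk(signals: dict) -> str:
    """Classify a capture session as 'block', 'review', or 'allow'."""
    camera_name = signals.get("camera_name", "").lower()
    if any(k in camera_name for k in SUSPICIOUS_CAMERA_KEYWORDS):
        return "block"      # feed appears to come from a virtual camera driver
    if signals.get("emulator", False) or signals.get("rooted", False):
        return "block"      # synthetic media is easy to push from such devices
    if not signals.get("frame_timestamps_monotonic", True):
        return "review"     # replayed or spliced video streams often break timing
    return "allow"

# Example: a session whose "camera" is actually a virtual camera gets blocked.
print(injection_risk({"camera_name": "OBS Virtual Camera", "rooted": False}))
```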
Beyond liveness and IAD, a comprehensive anti-fraud arsenal includes document forensics (to spot forged or manipulated ID cards and passports) and database cross-checks. For instance, a user’s selfie can be matched not only to the ID photo they submit but also, where possible, to a trusted government source; many Indonesian services now query Dukcapil (the national population database) to verify that the face and ID number match a real citizen. Oz Forensics and similar vendors offer automated ID document verification that can detect if an ID has been tampered with (such as edited text or a replaced photo).
They also provide face biometric matching at scale (1:1 to confirm one person’s identity, or 1:N to ensure the same face isn’t reused across multiple identities). These technologies help tackle synthetic identity fraud, where elements of real and fake data are combined, a growing issue also flagged in recent fraud reports.
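Putting the layers together, a simplified onboarding decision might look like the sketch below: document forensics first, then a 1:1 check against a trusted registry, then a 1:N search for the same face under other identities. The callables and threshold are placeholders for whatever vendor or government integrations a given deployment actually uses.

```python
# Illustrative composition of layered checks; doc_forensics, registry_match,
# and dedup_search are placeholders for real vendor/government integrations.
def onboard(applicant: dict, doc_forensics, registry_match, dedup_search,
            match_threshold: float = 0.85) -> str:
    # Document forensics: reject IDs with edited text or replaced photos.
    if not doc_forensics(applicant["id_document"]):
        return "reject_document_tampering"
    # 1:1 check: does the selfie match the record held for this ID number?
    if registry_match(applicant["selfie"], applicant["id_number"]) < match_threshold:
        return "reject_identity_mismatch"
    # 1:N check: has this face already been enrolled under other identities?
    if dedup_search(applicant["selfie"]):
        return "manual_review_possible_synthetic_identity"
    return "approve"
```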
Crucially, the human element is not forgotten. Training and awareness are key parts of the solution. Companies are educating their compliance teams to recognize signs of deepfake content (for example, odd facial movements or distortion when video quality shifts) and to perform manual reviews for suspicious cases. Governments and banks in the region have also run public awareness campaigns. Thai police went on record explaining deepfake call scams to warn citizens, and banks in Vietnam regularly send advisories to customers about verifying any strange call purportedly from the bank. The aim is to inoculate the public against social engineering, so that even if a scammer uses a convincing deepfake, the target knows to double-check via official channels.
Looking forward, the fight against identity fraud in Southeast Asia will likely become a high-stakes “AI vs AI” contest. On one side, criminals will use ever-more advanced generative AI to craft fake identities; on the other, financial institutions and tech providers will deploy AI to detect anomalies and verify legitimacy. Collaboration will be vital. Industry players like Oz Forensics, international cybersecurity firms, and local regulators need to share intelligence on the latest attack methods and jointly establish standards. In fact, the need for “robust regulations and policy guidelines” around AI usage is paramount, as experts note.
Such frameworks would ensure AI is used responsibly and that there are legal consequences for malicious deepfake use. Encouragingly, we see more dialogue between businesses and regulators in ASEAN on this issue, aiming to align innovation with compliance.
Southeast Asia’s digital economies stand at a crossroads. The same technologies that promise efficiency and inclusivity – digital IDs, facial biometrics, AI automation – are being twisted by bad actors to perpetrate fraud on an unprecedented scale. However, as we’ve explored, the region is not passive in the face of this challenge. Indonesia, Vietnam, and their neighbors are actively fortifying their defenses through tighter regulations and cutting-edge tech solutions.
By investing in proven tools like anti-deepfake liveness detection, enforcing stricter identity verification laws, and fostering public awareness, they are beginning to turn the tide. The battle is by no means easy; deepfake and identity fraud techniques continue to evolve. Yet, with vigilance and innovation, Southeast Asia can strike a balance where digital trust and security grow in step with digital transformation. The lesson for the world is also clear: as deepfake-driven fraud looms, proactive measures of the kind now being rolled out in SEA’s banking, fintech, and telecom sectors will be essential to protect consumers and the integrity of our connected economy.
References
1. Natnicha Surasit. "Rogue replicants: Criminal exploitation of deepfakes in South East Asia." Global Initiative, 29 Feb 2024. https://globalinitiative.net/analysis/deepfakes-ai-cyber-scam-south-east-asia-organized-crime/
2. Harvey Kong. "‘Everyone looked real’: Multinational firm’s Hong Kong office loses HK$200 million after scammers stage deepfake video meeting." South China Morning Post, 4 Feb 2024. https://www.scmp.com/news/hong-kong/law-and-crime/article/3250851/everyone-looked-real-multinational-firms-hong-kong-office-loses-hk200-million-after-scammers-stage
3. Siriwat Deephor. "Don’t fall for ‘Deepfake’ video calls from scammers: police." The Nation Thailand, 29 Apr 2022. https://www.nationthailand.com/in-focus/40015069
4. Team Feedzai. "Beyond the Face: Why Vietnam’s Banks Need Behavioral Biometrics to Fight the Rising Tide of Fraud." Feedzai Blog, 2 Aug 2024. https://www.feedzai.com/blog/beyond-the-face-why-vietnams-banks-need-behavioral-biometrics-to-fight-the-rising-tide-of-fraud/
5. Sumsub. "Identity Fraud Report 2023 – APAC Deepfake Incidents Surge 1530%." Press release via PR Newswire, 28 Nov 2023. https://www.prnewswire.com/apac/news-releases/apac-deepfake-incidents-surge-1530-in-the-past-year-amidst-evolving-global-fraud-landscape-301999070.html
6. Lu-Hai Liang. "Oz Forensics launches its SaaS liveness detection in Indonesia." Biometric Update, 8 Jan 2025.
7. Chris Burt. "BixeLab test shows Oz Forensics’ biometric IAD protects against deepfake fraud." Biometric Update, 23 Sep 2025.
8. Bureau.id. "Navigating the Complex World of KYC in 2025 – Southeast Asia." Bureau Blog, 2025.
9. Cass Kennedy. "Indonesian Government Weighs Facial Scanning to Verify Social Media Users." ID Tech, 19 Sep 2025.
10. ID Tech Editorial Team. "Indonesia Trials Face Recognition for SIM Registration to Curb Scams." ID Tech, 14 Oct 2025.
11. BiometricUpdate.com. "Vietnam has big digitalization and biometrics ambitions for 2026." Biometric Update, 12 Dec 2025.
Tags:
Biometrics
Liveness
KYC
Digital Authentication
Deepfakes
Spoofing
Onboarding