From AI-generated scams to deepfake identity fraud and blockchain abuse, the digital threat landscape is evolving at breakneck speed. Authors from Alvarez & Marsal share how organisations can strengthen their defences through smarter detection systems, robust investigative protocols and ongoing vigilance.

Highlights

  • AI is transforming the tactics of fraudsters and can enable personalised, large-scale scams that are increasingly difficult to detect
  • Deepfakes and biometric spoofing are now being used to bypass identity verification and to defraud companies of millions of dollars
  • Digital investigations must adapt, incorporating mobile chat data, forensic expertise and AI-driven tools to preserve evidence and respond swiftly

In today’s rapidly evolving digital landscape, we find ourselves in a relentless battle to keep pace with increasingly sophisticated fraudsters. As cybercriminals harness cutting-edge technologies such as artificial intelligence (AI), deepfakes and advanced social engineering, the tactics they employ become more complex and challenging to detect. Staying informed about the latest developments in fraud technologies and understanding effective mitigation strategies are crucial for safeguarding organisations and consumers.

Artificial intelligence

AI refers to the capability of computer systems to perform tasks that typically require human intelligence, such as learning, reasoning, problem-solving, perception and decision-making. AI broadly encompasses machine learning, enabling systems to learn from data; deep learning, providing powerful neural network models for complex tasks; and generative AI (GenAI) that creates new content from vast datasets.

While AI offers tremendous benefits, it also introduces new risks, particularly in the realm of fraud. AI-enabled fraud schemes are growing rapidly, with reports of such scams rising over 450% between May 2024 and April 2025. These scams leverage large language models and AI agents that operate with minimal human oversight to personalise attacks and scale fraud operations across multiple platforms. The use of AI makes these scams harder to detect and more believable, posing significant challenges to individuals and organisations alike. Moreover, fraudsters continuously evolve their tactics using AI, necessitating ongoing innovation and collaboration to stay ahead.

“The use of AI makes these scams harder to detect and more believable, posing significant challenges to individuals and organisations alike.”

Biometrics and deepfakes

Biometrics refers to the automated identification and authentication of individuals based on their unique physical characteristics. Common biometric traits include fingerprints, facial features, iris patterns and voice. Biometric authentication systems capture these unique traits and compare them against stored data to verify a person’s identity. This method is generally more secure than traditional passwords or PINs because biometric traits are difficult to replicate or share.
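The verification step can be illustrated with a minimal sketch: a live capture is reduced to a feature vector and accepted only if it is sufficiently similar to the enrolled template. The vectors, `verify` helper and 0.90 threshold below are purely illustrative; production systems extract hundreds of features and tune thresholds empirically.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify(enrolled_template, live_sample, threshold=0.90):
    """Accept the claimed identity only if the live capture is
    sufficiently close to the stored template."""
    return cosine_similarity(enrolled_template, live_sample) >= threshold

# Illustrative feature vectors (real systems extract many more features)
enrolled = [0.12, 0.80, 0.45, 0.33]
genuine = [0.11, 0.79, 0.47, 0.35]   # same person, slight capture noise
impostor = [0.90, 0.10, 0.05, 0.70]  # different person

print(verify(enrolled, genuine))   # True
print(verify(enrolled, impostor))  # False
```

The threshold is the system's weak point: deepfakes and biometric spoofing work by producing a synthetic sample that lands close enough to the stored template to pass it.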

GenAI tools enable scammers to fool biometric authentication systems by creating highly convincing deepfake videos, voice samples and AI-powered social media bots that impersonate real people. For example, deepfake scams have been used to manipulate individuals into transferring large sums of money by impersonating executives, as seen in a high-profile case in Hong Kong where a company was defrauded of $25 million in this way.

Blockchain technology

Blockchain technology is a decentralised, distributed digital ledger that securely records and stores data, most notably ownership records of digital assets, across a network of computers, or nodes. Each transaction is grouped into a block, cryptographically hashed and linked chronologically to the previous block, forming an immutable chain. This immutability means data on the blockchain cannot be altered or deleted without consensus from the network, ensuring transparency, security and trust, without relying on central authorities like banks or governments.
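This hash-linking is what makes tampering evident: changing any block changes its hash, which breaks every link after it. A simplified sketch (illustrative only; real networks add digital signatures, consensus and proof mechanisms):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions, prev_hash):
    """Group transactions into a block linked to its predecessor."""
    return {"transactions": transactions, "prev_hash": prev_hash}

def chain_is_valid(chain):
    """Each block must reference the hash of the block before it."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

genesis = make_block(["Alice pays Bob 5"], prev_hash="0" * 64)
second = make_block(["Bob pays Carol 2"], prev_hash=block_hash(genesis))
chain = [genesis, second]
print(chain_is_valid(chain))  # True

genesis["transactions"] = ["Alice pays Bob 500"]  # attempted tampering
print(chain_is_valid(chain))  # False
```

Note that the security comes from hashing and network consensus, not from the secrecy of the data: transactions on a public blockchain are typically visible to everyone, which is also why investigators can trace fund flows even when account holders are pseudonymous.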

Despite its strong security features, blockchain technology is increasingly exploited by fraudsters using various sophisticated tactics. The pseudonymous nature of blockchain transactions makes it difficult to identify fraudsters, allowing them to operate with relative impunity to launder stolen funds through decentralised finance (DeFi) platforms, amongst other methods.

Regulatory gaps mean that fraudulent schemes may slip through with less scrutiny compared to traditional financial markets that operate with more oversight. Moreover, many investors have a limited understanding of blockchain technology and cryptocurrency, making them more susceptible to misleading claims and fraudulent schemes. Although knowledge of blockchain and cryptocurrency amongst investors has improved in recent years, this problem is still prevalent.

Blockchain and cryptocurrency fraud: red flags to look out for

  • Promises of unrealistic returns. Offers guaranteeing extremely high or quick profits, which are often too good to be true.
  • Unable to withdraw money. Investors are told they must wait, or must invest additional funds before they can retrieve their money, for example by paying extra fees, taxes or deposits, or they are locked out after initial small withdrawals.
  • Fake or impersonated platforms. Fraudulent websites or apps that closely mimic legitimate crypto exchanges or wallets, but are designed to steal your personal information, login credentials or crypto assets. These sites may lack contact details, with no physical address and no customer service phone line, or only offer unreliable contact methods like chatbots or web forms. Legitimate platforms provide verifiable contact information and responsive support.
  • Fake social media posts and celebrity endorsements. Posts or ads featuring celebrities or influencers endorsing crypto products or platforms, which may be digitally altered or AI-generated (deepfakes) to appear authentic.

Key steps to take when a fraud hits: digital fraud investigations

Once an organisation is hit by a fraud, a structured digital fraud investigation ensures thoroughness, legal compliance and integrity, enabling the organisation to detect, analyse and respond to the fraud effectively.

1. Define the scope of the investigation

  • Identify relevant jurisdictions. Determine the legal territories and regulatory environments applicable to the case to ensure compliance and proper authority for the investigation.
  • Identify persons to be investigated. List all individuals, employees, third parties and entities suspected of involvement in the fraud. As the investigation expands, additional persons are often added, so it is important not to narrow the list of persons of interest prematurely.
  • Identify data sources. Catalogue all potential evidence sources, including paper documents, electronic files, cloud storage, emails, transaction logs and communication records.
  • Identify who will conduct the investigation. Assemble a multidisciplinary team that includes fraud experts, forensic accountants, forensic technology specialists, legal counsel and possibly external consultants or law enforcement.

2. Practical considerations

  • Custodians. Identify and secure the custodians of relevant data: the employees or systems responsible for maintaining or controlling access to critical information. Establishing who has, or had, access to the data before the fraud incident is essential.
  • Data location. Determine where data resides, including on-premises servers, cloud platforms, personal devices or external storage, to plan for secure data acquisition and preservation.
  • Types of data. Understand the nature of data involved, such as structured financial records, unstructured emails, system logs, multimedia files or encrypted data, to select appropriate forensic tools and methodologies.

Key dos and don’ts with digital investigations

  • Do understand the law. Do factor in and take legal advice on data privacy regulations or laws governing cross-border data transfer.
  • Do obtain proper authorisation. Ensure the necessary permissions to access digital evidence have been obtained, particularly for personal devices or accounts. In the absence of a company bring-your-own-device (BYOD) policy governing the use of personal devices for work, best practice is to obtain written consent from each individual whose device requires data preservation. The HR team can assist in obtaining such consent.
  • Do use qualified experts. Asking an organisation’s internal IT team to perform the investigation risks losing important information. For example, copying data with the wrong process can alter its metadata, tainting evidence that is critical to the investigation.
  • Do preserve confidentiality. Particularly with internal investigations, keeping the findings of the investigation in a small, closed-loop team is important to ensure an effective investigation.
  • Do ensure the integrity of the evidence chain. To preserve the chain of custody, never work directly on original evidence. For example, a file containing smoking-gun evidence can be inadvertently altered during collection, breaking the chain of custody and undermining its admissibility in court. While it may seem sensible for HR or IT teams to review evidence at the start of an investigation, doing so before forensic data preservation can destroy the integrity of the evidence.

New forms of data: mobile devices and ephemeral chat data

The way we communicate both at work and in our personal lives has changed drastically in recent years. Organisations are increasingly adopting chat applications for internal communications, driven by the need for faster, smarter and more integrated collaboration tools in remote and hybrid work environments. Chat applications like WhatsApp, WeChat and Microsoft Teams have become a primary form of business communication, often replacing email.

Given that fraudsters exploit communication channels to conduct scams, phishing, social engineering and impersonation attacks, chat messages have become critical evidence sources in digital fraud investigations. The growing reliance on chat apps in organisations correlates with an increased volume of chat message data being scrutinised in fraud investigations, underscoring the importance of monitoring and securing these communication channels.

Challenges of investigating ephemeral chat data

Investigating chat communications presents significant challenges due to the transient and often encrypted nature of these data types.

Automatic deletion

Ephemeral chat messages are designed to disappear after a set time (from seconds to minutes), making it difficult or impossible to preserve relevant communications before they vanish permanently. Audio, video and image files sent via ephemeral chats may also self-destruct, removing crucial multimedia evidence from investigations.

Lack of backups

Many ephemeral platforms like Signal or WhatsApp do not back up messages by default, while end-to-end encryption further complicates data recovery. In such cases, it is important to investigate other backup sources, such as cloud storage or desktop versions of chat apps.

Complex file types

Gone are the days of merely sending basic texts: chat users now also send audio and video messages, emojis and GIFs. To investigate such data, the latest technology and AI tools are used to transcribe audio and video into text for easier review, and even to translate the data into multiple languages. Once the files are transcribed and translated, investigators can search the data and find evidence far more efficiently than by listening to hours of recordings.

Best practices for preventing digital fraud

By combining AI-powered fraud detection with strengthened biometric frameworks and continuous human oversight, organisations can significantly enhance their defences against digital fraud. This multilayered approach not only improves fraud detection accuracy and speed, but also fosters trust and compliance in an increasingly complex fraud landscape.

Enhance fraud detection with machine learning–based systems

AI systems learn from vast data inputs such as inventory records, expenses and transaction histories to establish a baseline of normal business activity. Any deviation or suspicious pattern is flagged automatically, enabling early detection of fraudulent behaviour.
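As a simplified illustration of this baseline-and-deviation idea, the sketch below flags transactions that sit far outside the historical norm using a z-score. Production systems use far richer learned models, and the figures here are invented.

```python
import statistics

def flag_anomalies(history, new_amounts, z_threshold=3.0):
    """Flag transactions that deviate sharply from the historical baseline.

    A deliberately simple stand-in for the learned baselines that
    ML-based detection systems build from transaction histories.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    flagged = []
    for amount in new_amounts:
        z_score = abs(amount - mean) / stdev
        if z_score > z_threshold:
            flagged.append(amount)
    return flagged

# Typical supplier payments cluster around 1,000; a 50,000 payment stands out.
history = [980, 1020, 995, 1010, 1005, 990, 1015, 1000]
print(flag_anomalies(history, [1008, 50000, 997]))  # [50000]
```

The value of machine learning over a fixed rule like this is that the baseline adapts: what counts as "normal" is learned per account, per counterparty and per season, rather than hard-coded.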

Real-time behavioural analysis

Modern AI models analyse behavioural patterns and contextual anomalies in real time, moving beyond static rule-based systems to detect evolving fraud tactics quickly and accurately.

Deploy AI tools to detect AI-generated fraud

With the rise of AI-generated deepfakes and synthetic phishing, AI tools can now analyse voice, text and video to identify synthetic content, helping combat sophisticated fraud schemes.

Strengthen biometric and voice control frameworks

Use multifactor authentication by combining biometrics with additional authentication factors (such as passwords or tokens) to increase security layers.
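One common additional factor is a time-based one-time password (TOTP, RFC 6238), generated from a secret shared at enrolment and the current time. A minimal sketch (the secret value below is illustrative; real deployments provision and store secrets securely):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step=30, digits=6):
    """RFC 6238 time-based one-time password: a possession factor
    that can be layered on top of biometric verification."""
    now = timestamp if timestamp is not None else time.time()
    counter = int(now // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def second_factor_ok(secret, submitted_code, timestamp=None):
    """Constant-time comparison of the user-submitted code."""
    return hmac.compare_digest(totp(secret, timestamp), submitted_code)

secret = b"shared-enrolment-secret"  # illustrative only
code = totp(secret, timestamp=1_700_000_000)
print(second_factor_ok(secret, code, timestamp=1_700_000_000))  # True
```

Because the code changes every 30 seconds and derives from a secret the attacker does not hold, a spoofed face or cloned voice alone is not enough to pass authentication.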

User education

Despite AI’s sophistication, human judgement remains critical. Training employees to recognise red flags indicating fraud and to be aware of what steps to take in these situations is key. For example, a potential fraud victim could verify an unusual transaction by directly contacting the requester via phone or by arranging a face-to-face meeting.

“By combining AI-powered fraud detection with strengthened biometric frameworks and continuous human oversight, organisations can significantly enhance their defences against digital fraud.”

Conclusion

Understanding the different types of digital deception, from AI-generated scams to blockchain-enabled fraud, is crucial. While AI and cryptocurrency hold great promise, their misuse by fraudsters demands vigilant adaptation and robust safeguards to protect individuals and institutions in an increasingly digital world. By leveraging AI-powered fraud detection tools and having a well-thought-out plan in case a fraud is encountered, organisations can build stronger defences and foster trust in an environment where fraud threats are constantly evolving. Ongoing vigilance and constant innovation are essential to outsmart fraudsters and to protect organisations from fraud in 2025 and beyond.

Davin Teo and Henry Chambers, Managing Directors and Co-leaders of Disputes and Investigations Asia 

Alvarez & Marsal
