LANSING – Michigan State University Federal Credit Union (MSUFCU) has deployed AI-powered deepfake detection in its call center to combat rising fraud risks linked to synthetic-voice attacks. The system, developed in partnership with cybersecurity firm Pindrop, launched in August 2024 and has since delivered measurable results. By September 2025, the credit union reported preventing approximately $2.57 million in fraud exposure while also improving operational efficiency and customer satisfaction.

The institution, which serves more than 367,000 members and manages over $8.26 billion in assets, introduced the technology as part of a broader effort to strengthen its fraud prevention capabilities. The move comes at a time when deepfake scams are rising rapidly across the financial sector.

AI Adoption Expands Across Fraud Prevention Systems

Financial institutions are increasingly adopting artificial intelligence to strengthen fraud detection across digital channels. AI is used to monitor suspicious activity, support anti-money-laundering checks, and identify automated or bot-driven interactions in real time. These systems process large volumes of data and flag anomalies that may indicate fraudulent behavior.
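The anomaly-flagging idea described above can be sketched in a few lines. This is an illustrative toy example, not any vendor's actual system: it uses a robust median-based deviation score (rather than the proprietary models banks deploy) to flag a transaction amount that departs sharply from an account's history.

```python
from statistics import median

def flag_anomalies(amounts, threshold=3.5):
    """Flag amounts whose modified z-score (median-based, so a single
    extreme value cannot mask itself) exceeds the threshold."""
    med = median(amounts)
    mad = median(abs(a - med) for a in amounts)  # median absolute deviation
    if mad == 0:
        return []
    return [(i, a) for i, a in enumerate(amounts)
            if 0.6745 * abs(a - med) / mad > threshold]

# A routine spending history with one outsized outlier.
history = [42.0, 55.5, 38.0, 61.0, 47.5, 52.0, 4800.0, 49.0]
print(flag_anomalies(history))  # flags only the 4800.0 transaction
```

A median-based score is used deliberately: with a plain mean-and-standard-deviation z-score, a single large outlier inflates the standard deviation enough to hide itself.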

Beyond banking, the use of AI extends into other digital sectors where trust and security are essential. In e-commerce, Amazon uses AI-powered proactive controls to block suspected counterfeit listings before brands need to report them. Other online marketplaces also employ machine learning to identify fraudulent merchants, detect refund abuse, and combat so-called “friendly fraud,” where customers file illegitimate chargeback claims. Some of the best-rated casino sites in Hong Kong have likewise incorporated AI to detect irregular activity such as bonus abuse, account takeovers, and multi-account fraud.

Industry data from Pindrop shows that deepfake calls rose roughly 1,300% year over year in 2024, with approximately one in every 106 calls identified as machine-generated.

MSUFCU Deploys Pindrop for Real Time Call Risk Scoring

Under the partnership, MSUFCU implemented Pindrop's Passport and Protect solutions within its call center operations. The system evaluates incoming calls and generates a real-time risk score based on multiple indicators, including voice characteristics and call metadata.

In the past, call center agents relied on manual authentication questions to verify customer identity. With the new system, authentication is performed passively before the call is connected to an agent. This allows staff to access risk insights immediately and adjust their response based on the level of threat identified.
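The workflow described above can be sketched as follows. Pindrop's actual models and thresholds are proprietary, so every signal name, weight, and cutoff here is a hypothetical placeholder; the sketch only shows the general pattern of combining per-call indicators into one risk score and routing the call before an agent picks up.

```python
def score_call(signals, weights=None):
    """Combine per-call indicators into a single risk score in [0, 1].
    Signal names and weights are illustrative assumptions."""
    weights = weights or {
        "synthetic_voice_likelihood": 0.5,  # acoustic/liveness analysis
        "metadata_mismatch": 0.3,           # carrier, device, number anomalies
        "behavioral_anomaly": 0.2,          # call timing, dialing patterns
    }
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

def route_call(signals):
    """Decide handling passively, before the call reaches an agent."""
    risk = score_call(signals)
    if risk >= 0.7:
        return "block_or_escalate"
    if risk >= 0.4:
        return "step_up_authentication"
    return "pass_through"  # agent sees a low-risk indicator

print(route_call({"synthetic_voice_likelihood": 0.9,
                  "metadata_mismatch": 0.8,
                  "behavioral_anomaly": 0.2}))  # high-risk caller
```

The point of the tiered routing is the efficiency gain the article describes: low-risk calls skip manual verification questions entirely, while only the small fraction of risky calls triggers extra authentication steps.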

The credit union credited the deployment with reducing fraud without adding friction for members, and with improving call-handling efficiency, enhancing the overall customer experience. Better visibility into fraud exposure also allows the institution to track and measure risks associated with incoming calls more accurately.

Measurable Impact on Fraud Reduction and Efficiency

Since the introduction of AI-based call analysis, the credit union reported preventing US$2.57 million in fraud exposure linked to deepfake attacks between August 2024 and September 2025, an average of roughly $190,000 in avoided losses per month.

Operational performance also improved following the implementation. The average authentication time per call fell from 90 seconds to 45, as the system replaced manual verification steps with automated background checks. As a result, call handling became faster and more efficient within the call center.

Shorter call times and a more streamlined verification process also translated into better customer experience metrics over the same period. The credit union’s Net Promoter Score rose from 55 to 63 shortly after the system went live and has remained stable in the months since.

Rising Deepfake Threats Reshape Financial Security Strategies

The deployment of AI fraud detection tools reflects a broader response to the rapid growth of deepfake-related threats. At MSUFCU, fraud incidents have increased by 38% since 2020. Smaller institutions are increasingly targeted by attackers because their security infrastructure tends to be less advanced.

Experts have noted that human detection alone is insufficient for identifying deepfake audio: in controlled tests, a majority of listeners were unable to distinguish real voice recordings from synthetic ones. With generative AI becoming a favored tool among fraudsters, this limitation has accelerated the adoption of automated detection systems within financial services.

Regulators Warn Deepfakes Are Becoming a Systemic Fraud Risk

Regulatory bodies have increasingly raised concerns over the growing use of deepfakes in fraud schemes, particularly within financial services.

In November 2024, the US Department of the Treasury’s Financial Crimes Enforcement Network (FinCEN) issued an alert warning financial institutions about fraud involving deepfake media created with generative AI tools. The agency said it had seen a rise in suspicious activity reports linked to fake identity documents, identity verification bypass attempts, and other scams targeting banks and their customers.

FinCEN also provided red flag indicators and reminded institutions of their reporting obligations under the Bank Secrecy Act.

UNESCO has also described deepfakes as part of a broader “crisis of knowing”, arguing that synthetic media does not just spread falsehoods but also weakens public trust in evidence itself. Industry leaders now see the issue as an emerging crisis driven by rapid advances in generative AI.