Deepfaked Fiduciary

By Neil Plein

The Very Real AI Threat Nobody’s Talking About

The rapid rise of AI-driven deepfake technology has introduced a dangerous new frontier for 401(k) fiduciaries. These fiduciaries often have the authority to authorize the release of participant funds from a plan, and under the Employee Retirement Income Security Act (ERISA), fiduciaries carry personal liability for participant losses that result from their actions. Few fiduciaries fully realize the depth of their potential vulnerability, especially with the emergence of advanced deepfake attacks.

Should this even be written?

This article straddles a fine line in terms of whether it should even be written. The area of risk discussed has not yet materialized in the 401(k) space, but it is only a matter of time before it does; the intent, therefore, is to equip 401(k) fiduciaries with leading insights that will hopefully help them avoid becoming a “first example” of this threat coming to fruition. On the other hand, it could be argued that breaking ground on this topic offers a sort of roadmap for bad actors. For this reason, while examples of existing technology will be referenced, the decision was made to omit details about specific providers of various deepfake technologies, as well as certain key steps in the process a bad actor might follow, so that the full execution path is not completely illuminated.

The Deepfake Threat: A New Era of Sophisticated Fraud

Deepfake technology leverages artificial intelligence to produce convincingly realistic but entirely fraudulent video, audio, or textual communications. In recent years, this technology has rapidly evolved from an experimental novelty into a powerful tool for criminal enterprises targeting financial transactions.

Consider these troubling examples:

  • In 2019, the CEO of a U.K.-based energy firm authorized a €220,000 transfer based solely on a deepfake voice impersonating his parent company’s executive. The impersonation was so sophisticated that even voice intonations and speech patterns were replicated with astonishing accuracy (certain providers now offer voice replication in minutes, with generation possible in 30+ different languages).
  • In February 2024, criminals used deepfake technology to successfully impersonate senior executives of a multinational corporation in Hong Kong. During a video conference, scammers posing as the CFO and other top executives tricked employees into transferring approximately HK$200 million (around $25 million USD). This sophisticated operation leveraged real-time video interaction, previously thought to be a secure form of communication, showing that current technology allows such impersonation to be carried out live.

The pace of advancement in these cases should raise red flags for retirement plan fiduciaries, given the authority they have to release plan funds based on requests they receive from plan participants.

However, to really understand the magnitude of this new threat landscape, and to implement best practices that mitigate the risk, three basic but often misunderstood aspects of plan operation and fiduciary liability need to be addressed.

1. Many fiduciaries don’t realize they’re fiduciaries.

When a 401(k) plan is established, at least one individual is named in the Plan Document as the Trustee and/or Plan Administrator; this is known as being a “named fiduciary.” However, under ERISA, the definition of fiduciary is a functional one: anyone who exercises control over the administration or assets of the plan is considered a fiduciary of that plan.

Therefore, someone who works for a company and helps manage the company’s 401(k) plan may not be named in the plan document, but if they exercise control over the plan in some way, such as being the person who ultimately decides whether a distribution or loan should be approved, they are considered a fiduciary of the plan with respect to that action.

2. Many fiduciaries are mistaken about the fiduciary role of their service providers.

Given the complexity of operating a 401(k) plan, outside service providers are often hired to help run things (hiring a service provider is a fiduciary action). The three primary service providers are:

  1. Recordkeeper: Maintains participant accounts, hosts websites, facilitates transactions, provides cybersecurity protections, and generally serves as the primary interface for employees.
  2. Third-Party Administrator (TPA): Manages compliance functions, administration, and required regulatory reporting.
  3. Investment Advisor: Provides fiduciary guidance on investment choices.

Frequently, the recordkeeper and third-party administrator roles are combined (this is known as a bundled service provider). Importantly—and often misunderstood by fiduciaries—these providers primarily operate in a ministerial capacity, meaning they perform administrative functions under explicit direction, rather than discretionary decision-making (i.e. fiduciary action).

Here’s the critical distinction:

  • Ministerial Role: Acting purely administratively, essentially following explicit instructions from the applicable plan fiduciary. The provider does not carry fiduciary liability.
  • Fiduciary Role: Exercising discretion over the plan’s assets or administration. The individual(s) or entity acting with discretion has personal liability.

Because most functions provided by service providers (except Investment Advisors) are performed in a ministerial capacity, fiduciaries retain personal liability for decisions, even when they rely heavily on those providers. Most service providers are not especially eager for fiduciaries to realize this, and clarity is made even harder to achieve by things like “fiduciary warranties,” which, when reviewed in detail (such as the following excerpt from a major national recordkeeper), generally confirm that the service provider is not acting in a fiduciary capacity:

“While we are not acting as a fiduciary for the plan in selecting and monitoring the investment options in our offering, we stand behind our comprehensive fiduciary program.”

3. The big cybersecurity gap.

In recent years, retirement plan service providers have significantly bolstered their cybersecurity measures, partly driven by the Department of Labor’s recent AI and cybersecurity guidance. Many recordkeepers now prominently advertise robust cyber-protection guarantees or warranties promising reimbursement if participant accounts are compromised due to a breach. However, where the breach occurs is extremely important; here’s the critical distinction:

  • On-System Breaches: If a cyber breach occurs within the recordkeeper’s controlled environment—such as unauthorized logins or direct account hacks—most recordkeepers’ cyber policies will typically cover resulting losses.
  • Off-System Breaches: But what happens when a security compromise takes place outside the recordkeeper’s digital infrastructure, such as when a fiduciary approves a transaction based on fraudulent information? Once the fiduciary exercises discretion (approving a distribution in response to a deepfake-induced scam), the compromise becomes an ‘off-system’ breach. In these instances, the recordkeeper’s cybersecurity policy likely would not apply, as the breach did not occur within their systems.

This has the potential to create a compounded fiduciary risk scenario:

  • First, the recordkeeper’s function is non-discretionary (ministerial), so fiduciary responsibility/liability traces back to the applicable plan fiduciary.
  • Second, by personally approving distributions (or other actions that cause assets to leave the plan), fiduciaries directly engage their fiduciary responsibility, further solidifying their personal liability to restore any losses caused by their acts or omissions.

Seeing the Complete Picture

Despite significant financial implications, many fiduciaries have not fully grasped the degree of risk exposure that results from these three key areas:

  1. Unaware of Fiduciary Status: Many individuals managing 401(k) plans are not aware that they are fiduciaries, subject to personal liability under ERISA.
  2. Misplaced Reliance on Service Providers: Even fiduciaries aware of their role often wrongly assume that the service providers they’ve hired—experts in their respective areas—are acting in a fiduciary capacity and therefore carry that liability. They rarely do.
  3. False Sense of Security from Provider Cybersecurity Guarantees: Highly advertised cyber guarantees create the illusion of comprehensive protection, which can make it challenging for fiduciaries to identify and address critical gaps, such as possible deepfake exposure.

Jason Roberts, a prominent ERISA attorney recognized by 401(K) Wire as one of the 100 Most Influential in Defined Contribution, named a Rising Star by Super Lawyers Magazine, and selected by Investment News as one of the Top 40 Advisors and Associated Professionals Under 40, said, “There exists a significant liability gap here about which few in the retirement industry seem to be talking. While we may not have seen a fiduciary breach occur as a result of a deepfake yet, we believe it’s only a matter of time. Just look at the pace of AI evolution in the last couple years; deepfakes are getting easier to execute and accuracy is already at a level where it can pass human detection. At Fiduciary Law Center, we are increasingly being asked by our clients to review their agreements with service providers to confirm what’s covered and what isn’t in terms of cybersecurity and, unfortunately, plan sponsors are generally left exposed with respect to this sort of fraud.”

Best Practices for Fiduciaries to Guard Against Deepfake Liability

Given these threats, fiduciaries should consider adopting enhanced vigilance:

Understand and Clarify Fiduciary Roles

  • Ensure an explicit understanding of which functions are ministerial and which are fiduciary.
  • Clearly document who holds discretionary authority; this helps clarify where personal liability sits.

Identify Vulnerability Points + People 

  • Consider your service provider: what options do they allow for completing various participant requests in your plan, and which of those options allow money to exit the plan (e.g., are any distributions or loans processed with paper forms?).
  • Recognize that your plan information is publicly available on the Department of Labor (DOL) website and database. Specifically, if you are a large plan that files a Form 5500 along with an audit report, a great deal of information about your plan is available to the public (who signs your Form 5500 and their contact details, your service providers, average balance, plan provisions such as distribution options and whether loans are available, etc.).
  • Work-related social media contains a wealth of information (employees routinely advertise work history, length of employment, and upcoming retirement); these characteristics may lead bad actors to focus on individuals who have a high probability of holding significant assets in the plan.
  • Exercise elevated care when red flags appear: address changes, high balances, longer tenure, and requests for withdrawal while still employed should all prompt additional investigation to verify legitimacy (a simple screening sketch follows this list).
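As an illustration only, the kind of screen described above can be reduced to a simple rule. The field names and thresholds below are hypothetical assumptions, not values drawn from any recordkeeper’s system:

```python
# Hypothetical red-flag screen for a distribution or loan request.
# Field names and thresholds are illustrative assumptions only.

def needs_extra_verification(request: dict) -> bool:
    red_flags = 0
    if request.get("days_since_address_change", 9999) <= 30:
        red_flags += 1  # recent address change
    if request.get("account_balance", 0) >= 250_000:
        red_flags += 1  # high balance
    if request.get("tenure_years", 0) >= 15:
        red_flags += 1  # long tenure, likely a larger balance
    if request.get("in_service_withdrawal", False):
        red_flags += 1  # withdrawal requested while still employed
    # Any single red flag triggers additional, out-of-band verification.
    return red_flags >= 1

# Example: an in-service withdrawal on a high-balance account is flagged.
print(needs_extra_verification({"account_balance": 400_000,
                                "in_service_withdrawal": True}))  # True
```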

Consider Implementing Multi-layered Authentication + Verification Processes

  • Require multiple verification methods (voice, video, biometric authentication) before approving sensitive transactions or releasing paper forms, and keep good records of each verification.
  • If verification relies on a question-and-answer approach, be aware that personal data such as Social Security number and date of birth are readily available online, as are prior addresses and even previous employers. Further, with LinkedIn, dates of hire or termination may also be available. A better type of question to consider (in addition to verifying some of the details previously mentioned) might be something like, “Who was your first (or last) supervisor?” or even “What was the slogan over the main door when you walked into the office?” Questions that are a little more creative, and not answerable from information available online, have a much higher chance of being known only by the actual person (a minimal verification sketch follows this list).
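To make the multi-layered idea concrete, here is a minimal sketch under the assumption (not a regulatory standard) that at least two independent verification methods must succeed before funds are released; the method names are hypothetical:

```python
# Minimal multi-method verification gate. The two-method policy and the
# method names used below are illustrative assumptions, not a standard.

REQUIRED_METHODS = 2

def approve_release(verifications: dict) -> bool:
    # `verifications` maps a method name, e.g. "callback_to_number_on_file",
    # "live_video_with_challenge_question", "in_person_id_check",
    # to whether that check succeeded.
    passed = [name for name, ok in verifications.items() if ok]
    return len(passed) >= REQUIRED_METHODS

# A convincing deepfake voice call alone should not be enough:
print(approve_release({"callback_to_number_on_file": True}))          # False
print(approve_release({"callback_to_number_on_file": True,
                       "live_video_with_challenge_question": True}))  # True
```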

Comprehensive Education + Regular Training

Train all fiduciaries, plan participants, and other involved parties to recognize signs of deepfake or social engineering attempts (social engineering is an attempt to manipulate someone into revealing sensitive or useful information); these attacks are becoming increasingly difficult to detect. For example, familiarize yourself with how the Cyrillic alphabet is used in scam communications, or how listening for breath can help detect whether a voice is real or AI-generated.
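As one concrete training aid, the Cyrillic trick mentioned above (look-alike characters substituted into otherwise Latin text, such as sender addresses or domains) can be screened for with a few lines of code. This is a minimal sketch, not a complete phishing detector:

```python
import unicodedata

# Flags text that mixes Cyrillic characters into otherwise Latin-looking
# content, a common trick in phishing emails and look-alike domains.
# Minimal sketch only; real screening would cover other confusable scripts too.

def contains_cyrillic(text: str) -> bool:
    return any("CYRILLIC" in unicodedata.name(ch, "") for ch in text)

print(contains_cyrillic("paypal.com"))   # False
print(contains_cyrillic("pаypal.com"))   # True: the first 'а' is Cyrillic U+0430
```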

As Josh Axelrod, Chief Operating Officer of the Aldrich Group of companies, recently shared, “AI has caused the ‘common’ cybersecurity risks to become unbelievably obsolete; we’re not talking about someone calling you to try and get money, we’re talking about someone calling you to keep you on the line just long enough so that you say certain key phrases that allow a complete, real time AI voice modulation to be generated which passes all sorts of screens that would otherwise prevent financial harm from happening to you. So training really needs to get much bigger than it has been in the past – and the urgency for this is very high.”

Conduct Thorough Insurance Audits to Identify Coverage Gaps

You can implement the strongest controls and best practices, and something could still get through. Review fiduciary liability insurance coverage, fidelity bonds, and cybersecurity policies for explicit exclusions related to deepfake-induced fraud. Since claims can be raised under state law, standard ERISA fiduciary liability insurance may not cover deepfake-related cyber events. Other types of coverage, such as directors’ and officers’ coverage, may have exclusions. ERISA bonding coverage does not cover thefts of assets by criminal hackers. If necessary, have an expert review your current coverage and needs.

Vendor Management + Cybersecurity Audits

Regularly audit vendors’ cybersecurity practices, policies, and incident response preparedness. To start, ask your service providers for a written response to the Department of Labor’s cybersecurity guidance, “Tips for Hiring a Service Provider With Strong Cybersecurity Practices.”

Incident Response Planning

Establish and regularly update response plans that explicitly address deepfake scenarios, ensuring rapid action to mitigate losses. Strongly consider including your IT or cybersecurity experts in each step of this overall process for your plan.

Deepfake Risk is Real + Already Here

The threat of deepfake-enabled fraud is no longer theoretical. For retirement plan fiduciaries, the convergence of emerging AI capabilities, publicly available participant data, and legacy plan processes (like paper forms) creates an urgent need to reevaluate both operational safeguards and fiduciary awareness. The unfortunate reality is that many fiduciaries remain unaware of the personal liability they shoulder or the specific vulnerabilities that exist within their plan’s administrative framework.

While technology evolves rapidly, proactive defense is still possible—if fiduciaries are prepared to approach risk differently. This means looking beyond traditional vendor assurances and cybersecurity warranties to understand the true delineation of liability, the mechanics of off-system fraud, and the real-world implications of discretionary decision-making.

The most effective defense starts with clarity: understanding who holds fiduciary responsibility, where the weak points lie, and how to implement processes that prevent fraudulent requests from reaching the point of approval. But clarity isn’t always easy to achieve on your own—particularly when service providers themselves may blur the line between administrative and fiduciary roles.

Working with advisors who specialize in 401(k) fiduciary oversight, such as Aldrich Wealth, can be a strategic advantage. We stay current with evolving best practices, regulatory shifts, and emerging threats—including those powered by AI. More importantly, we help you stay forward-thinking, so you’re not just reacting to today’s risks, but anticipating tomorrow’s.

Because in this new era of digital deception, the biggest liability isn’t just falling for a deepfake—it’s assuming you’re protected when you’re not.

Aldrich Wealth’s Corporate Retirement Plan (CRP) team helps plan sponsors understand and manage fiduciary risk in today’s evolving cybersecurity landscape. We provide targeted education and guidance to help protect your plan and participants from emerging threats like deepfakes.

Disclosure: This article is intended for educational and informational purposes only and does not constitute investment, legal, or fiduciary advice.

Meet the Author
Lead Retirement Plan Consultant

Neil Plein, CPFA, AIF®

Aldrich Wealth LP

Neil is a Certified Plan Fiduciary Advisor (CPFA™) and Accredited Investment Fiduciary (AIF®) who acts as the quarterback of a retirement plan. He guides employers through overall plan management with the knowledge to do a deep dive into any aspect of plan operation. Neil connects the dots between internal staff and external service providers…

Neil's Specialization
  • Corporate retirement plans
  • Recordkeeper selection
  • Strategic planning and consultation
  • One-to-one consulting participant meetings
  • Certified Plan Fiduciary Advisor (CPFA™)
  • Accredited Investment Fiduciary (AIF®)