Quick Hits
- Some scammers are using AI to fake their voice and image during video interviews.
- This trend raises the risk that employers will face poor performance from unqualified workers, cyberattacks, theft of sensitive data, or embezzlement.
- Careful hiring strategies can help employers prevent these schemes.
The last thing employers want is to hire someone with a fake identity, whether that person’s goal is to obtain a job for which he or she is not qualified, to steal data or money, or to install spyware or ransomware on company devices. A rushed or inconsistent hiring process makes it easy for companies to fall victim to this kind of scheme. In January 2025, the Federal Bureau of Investigation (FBI) warned employers about the growing threat from North Korean IT workers infiltrating U.S. companies to steal sensitive data and extort money.
Online job postings have made it easier for employers to reach a wide pool of candidates across the United States, but they have also created an environment in which a single posting can draw thousands of applications, making it harder for hiring managers to sort through them and find the best talent. The rise of remote work since 2020 has further complicated matters, as it can make it more difficult to detect that a new hire faked his or her voice or image during the interview process.
Risk Reduction Strategies
To reduce the risk of hiring someone with a fake identity, employers may wish to consider these strategies:
- relying on in-person interviews whenever possible; otherwise, conducting interviews via live video with cameras on and applying simple, neutral authenticity checks, such as asking the candidate to turn his or her head, wave a hand, or read a randomly selected sentence, which can expose deepfake overlay artifacts;
- conducting multiple interview rounds with role-specific questions designed to elicit concrete details;
- asking interview questions designed to elicit specific details about the applicant’s location and personal background (while, of course, avoiding questions prohibited by employment discrimination laws);
- scrutinizing resumes and applications for typos, unusual terminology, and inconsistencies with public profiles;
- verifying identity, work authorization, education, and employment history through legally compliant methods, and making job offers contingent on successful verification;
- contacting and verifying the applicant’s professional references; and
- training hiring managers to spot red flags in video interviews (e.g., lip-sync mismatches, abnormal lighting, or video lag inconsistent with the audio).
Ironically, AI tools can also help employers spot fake job applicants, though employers may want to use those tools cautiously, with vendor due diligence and human review of any automated results.
Employers may want to ensure that any screening, background checks, and AI-assisted tools are used in compliance with applicable federal, state, and local laws. This includes “ban-the-box” rules on criminal history inquiries and timing; background check disclosures, authorizations, and pre-adverse/adverse action procedures; automated decision-making regulations; and biometric identifier rules. In addition, employers may wish to coordinate recruitment policies and practices with IT security and privacy professionals.
Ogletree Deakins will continue to monitor developments and will provide updates on the Background Checks, Cybersecurity and Privacy, Employee Engagement, and Technology blogs as new information becomes available.
Rebecca J. Bennett is a shareholder in Ogletree Deakins’ Cleveland office.
This article was co-authored by Leah J. Shepherd, who is a writer in Ogletree Deakins’ Washington, D.C., office.