This guest blog post is written by one of our approved companies for cybersecurity training.
AI in the HR and L&D world
Artificial intelligence (AI) and deepfakes are hot topics for employees in all industries, not least for recruitment and training professionals.
As it develops, AI is changing the future of HR, allowing professionals to streamline workflows and support decision-making. At the same time, employee training is benefitting from the AI revolution, with L&D teams harnessing its power to create engaging, personalised training programmes and to analyse the effectiveness of existing training.
But what about the dangers of AI for HR and L&D teams? In particular, how can AI and deepfakes be leveraged by cybercriminals to manipulate HR processes, and how can we defend ourselves against them?
Deepfake job candidates
In July 2022, the FBI Internet Crime Complaint Center (IC3) released an advisory warning of a new social engineering technique in which fraudsters are using fake identities to steal company data.
Seems pretty standard, I hear you say – what’s new about this?
Well, while identity theft has been around for as long as anyone can remember, the difference here is that fraudsters are using deepfake technology to misrepresent themselves in employment interviews. Their aim is to navigate the recruitment process successfully and then commit cyber attacks from the inside.
How it works
According to the FBI, these fraudsters typically target work-from-home positions, enabling them to continue their attacks for as long as possible before being discovered.
The fraudsters use stolen personal information and deepfake videos to attend interviews under a fake identity. Once they are offered the job and given access to a company’s systems, the next phase of their attack begins, whether that involves stealing sensitive company data or deploying ransomware.
Detecting the deepfakers
All those involved in the recruitment process should be aware of the potential for deepfake attacks, especially as technology continues to improve and they become harder to spot.
To help protect against attacks:
- Be alert to signs of deepfakes during remote interviews. These can include lip movements that are out of sync with speech, lighting changes or strange shadows, and unnatural head or facial movements.
- Ensure background checks are rigorous and that inconsistencies between the persona presented at interview and the identity presented in the application are thoroughly investigated.
- Consider involving a face-to-face element in the onboarding process, even for fully remote workers.
- Ensure access for new employees is granted on a need-to-know basis and that the principle of least privilege is strictly followed. Where reasonable and practical, consider restricting access to more sensitive systems or data until later in the onboarding process, or even until the probation period has been successfully completed.
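To make the phased-access idea in the last point concrete, here is a minimal sketch of how a staged, least-privilege policy might be expressed. The tier names, system names and 90-day probation period are all hypothetical examples, not a prescription for any particular HR or identity system:

```python
from datetime import date

# Hypothetical access tiers: new starters begin with baseline tools only,
# and more sensitive systems unlock as onboarding milestones are passed.
ACCESS_TIERS = {
    "day_one": {"email", "intranet"},
    "after_training": {"crm"},
    "after_probation": {"finance_reports", "customer_pii"},
}

def allowed_systems(start: date, today: date,
                    training_done: bool,
                    probation_days: int = 90) -> set:
    """Return the systems a new hire may access on a given day."""
    systems = set(ACCESS_TIERS["day_one"])
    if training_done:
        systems |= ACCESS_TIERS["after_training"]
    if (today - start).days >= probation_days:
        systems |= ACCESS_TIERS["after_probation"]
    return systems

start = date(2024, 1, 8)
# On day one, only baseline access; sensitive data stays locked down.
print(allowed_systems(start, start, training_done=False))
```

The point of a policy like this is that even if a deepfake candidate slips through the interview stage, the most sensitive data remains out of reach during the window when the fraud is most likely to be detected.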
The future is AI
According to cybersecurity experts, the use of AI and deepfakes in cyber attacks will continue to rise as the technology becomes more advanced and more affordable.
That’s why it’s vital we all keep our eyes open and continually re-educate ourselves on the evolving cyber threat landscape.
The author of this article is Lauren Groom, who has been an SME Content Creator for The Security Company (TSC) for over 5 years.
TSC has specialised for over 20 years in boosting data privacy and cyber awareness through targeted training, customised projects and role-based solutions. From tailored subscription services to bespoke eLearning, awareness materials and behavioural assessments, they're committed to helping organisations like yours instil long-term, security-conscious behaviours.
To discuss this blog in more detail or to explore your current cybersecurity learning needs please contact firstname.lastname@example.org.
Source: Federal Bureau of Investigation, Alert Number I-062822-PSA, 28 June 2022