Job search platforms targeted by cybercriminals to attack companies and candidates

Job searching through platforms like LinkedIn or InfoJobs is increasingly widespread among companies, which use them to alert a larger number of users to a new vacancy and to group applications in a simple, orderly way.

Because users share personal data on these services, such as their name, email address, the company they work for, or their profile pictures, cybercriminals have found in these platforms the perfect target for their attacks.

These attacks target both companies seeking to fill their vacancies, through techniques such as the ‘deepfake’, and the workers themselves, who end up falling victim to ‘phishing’ from supposedly legitimate business accounts.

One of the most recent cases is that of the group of cybercriminals known as Lazarus, attributed to North Korea’s main intelligence agency, which uses these social networks to establish first contact with its victims.

Its ‘modus operandi’ when collecting data is consistent with that of other hacker groups that aim to deceive job seekers through job search platforms.

First, these cybercriminals study the target’s profile to find out, among other details, their interests, the environments in which they move, their contacts, and the company they work for.

Next, the attackers make a tailored approach: they personalize the first contact with each victim according to that person’s interests in order to earn their trust.

Once they have succeeded, they take advantage of this contact with job applicants to send ‘malware’, or harmful code, to their victims. These ‘phishing’ attacks may include files or links intended to seize full or partial control of their devices.
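On the defensive side, this kind of message can be pre-screened before the recipient opens anything. The sketch below is a minimal, illustrative filter, not a real product: the trusted-domain list, the risky-extension list, and the function names are all assumptions made for the example.

```python
import re

# Illustrative lists only; a real deployment would use curated threat feeds.
RISKY_EXTENSIONS = {".exe", ".scr", ".js", ".vbs", ".iso", ".lnk"}
TRUSTED_DOMAINS = {"linkedin.com", "infojobs.net"}

def extract_link_hosts(text):
    """Pull the host part out of every http(s) link in a message."""
    return re.findall(r"https?://([^/\s]+)", text)

def flag_message(text, attachment_names):
    """Return a list of warnings for lookalike links and risky attachments."""
    flags = []
    for host in extract_link_hosts(text):
        # Compare only the registrable domain (last two labels) against the
        # allow-list, so "linkedln-jobs.example.com" does not pass as LinkedIn.
        domain = ".".join(host.lower().split(".")[-2:])
        if domain not in TRUSTED_DOMAINS:
            flags.append(f"untrusted link domain: {host}")
    for name in attachment_names:
        if any(name.lower().endswith(ext) for ext in RISKY_EXTENSIONS):
            flags.append(f"risky attachment type: {name}")
    return flags
```

For example, a recruiting message pointing at a lookalike domain with a double-extension attachment (`offer.pdf.exe`) would raise two warnings, while a plain link to `linkedin.com` with a PDF CV would raise none.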

The deployment of this malicious ‘software’ and of remote access tools (RATs) are two of the methods cybercriminals use most to spy on and monitor infected computers.

In this way, they can not only access, steal and share their victims’ data, but also obtain passwords and credentials for other services, such as bank accounts or digital wallets.

DEEPFAKE TECHNOLOGY AT THE SERVICE OF REMOTE WORK

This is not the only strategy cybercriminals use in the workplace: in recent years, ‘deepfake’ technologies, which make it possible to alter a person’s appearance or voice in images and videos, have become popular.

This is possible thanks to technologies based on Artificial Intelligence (AI), with which fraudsters can make it appear that certain individuals have done or said things that never happened, and create videos with images of candidates to pass them off as real people.

The rise of ‘deepfake’ is mainly due to the implementation of new remote work environments, a decision made by most companies worldwide after the start of the pandemic.

In this context, some organizations were forced to adopt a new virtual format for recruiting candidates. They had to conduct job interviews by video call in order to protect the health of their employees.

Imposters thus use videos, images, recordings and stolen identities of legitimate users, posing as other people in order to obtain a remote job.

Once hired and inside a company’s organizational chart, they have access to the company’s passwords and credentials, as well as to personal documents they can use for their own benefit, for example to carry out blackmail or obtain some other form of financial gain.

In this sense, the cybersecurity firm Panda Security distinguishes between two types of attacks with ‘deepfakes’: the ‘deepface’ (carried out through images and videos) and the ‘deepvoice’ (through audio and recorded voices).

Both methods rely on so-called ‘deep learning’, a field of AI inspired by the workings of the human nervous system, and on large public databases containing images, videos and audio.

First, the ‘deepface’ consists of creating images and videos that imitate objects or the faces of real people using generative adversarial networks (GANs). These GANs are a type of AI algorithm capable of generating photographs that look authentic to the human eye.
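The adversarial idea behind GANs can be shown on a toy scale: a generator learns to produce samples, while a discriminator learns to tell them apart from real data, and each improves against the other. The sketch below is a deliberately tiny 1-D illustration with made-up hyperparameters, not the image-scale networks real deepfake tools use.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data for the toy example: samples from a normal distribution N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator G and discriminator D are each a single affine map (illustrative).
G = {"w": rng.normal(size=(1, 1)), "b": np.zeros(1)}
D = {"w": rng.normal(size=(1, 1)), "b": np.zeros(1)}

def generate(z):
    return z @ G["w"] + G["b"]          # noise -> fake sample

def discriminate(x):
    return sigmoid(x @ D["w"] + D["b"])  # sample -> P(real)

lr, batch = 0.03, 64
for step in range(1000):
    # --- Discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    z = rng.normal(size=(batch, 1))
    fake, real = generate(z), real_batch(batch)
    p_real, p_fake = discriminate(real), discriminate(fake)
    g_real = p_real - 1.0                # d(cross-entropy)/d(logit), real side
    g_fake = p_fake                      # d(cross-entropy)/d(logit), fake side
    D["w"] -= lr * (real.T @ g_real + fake.T @ g_fake) / batch
    D["b"] -= lr * (g_real.sum() + g_fake.sum()) / batch

    # --- Generator step: push D(fake) -> 1, i.e. fool the discriminator ---
    z = rng.normal(size=(batch, 1))
    p_fake = discriminate(generate(z))
    g_logit = p_fake - 1.0               # non-saturating generator loss
    g_x = g_logit @ D["w"].T             # backpropagate through D to the sample
    G["w"] -= lr * (z.T @ g_x) / batch
    G["b"] -= lr * g_x.sum() / batch

# After training, generated samples should drift toward the real distribution.
```

Real ‘deepface’ systems replace these affine maps with deep convolutional networks trained on millions of face images, but the two-player loop is the same.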

For their part, cybercriminals use the ‘deepvoice’ to replicate the voice of a real person from fragments of their audio, a format that, according to the cybersecurity firm Panda Security, has already been used to impersonate the CEOs of “big enterprises”.

Moreover, thanks to the ‘deepfake’, cybercriminals can evade defense systems that are robust and widespread today, such as biometric security, which verifies a subject’s identity through recognition of their iris or fingerprint.

HOW TO DETECT A POSSIBLE CASE OF ‘DEEPFAKE’

Although fraudsters use increasingly sophisticated techniques to carry out these attacks on companies and institutions of all kinds, there are certain signs that make it possible to detect a likely ‘deepfake’.

Panda Security recommends that companies and public figures monitor their social networks daily, in order to catch any content that could go viral and harm their reputation.

In job interviews, it is advisable for companies to pay attention to details such as abruptness in the person’s posture or blinking, which can give away a false candidate.

Another aspect to watch during these virtual encounters is the lighting: whether it changes from one frame to the next, and how it is reflected in the person’s skin tone.

The tone of a person’s voice can also be a good indicator that a deepfake is being used, provided the candidate’s voice is already known. Otherwise, it should be verified that the interviewee’s answers are consistent with the discourse they tend to project.
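A hiring team could organize these manual checks as a simple screening checklist. The sketch below is a toy illustration: the field names, the blink-rate threshold, and the scoring are assumptions made for the example, not Panda Security’s criteria.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InterviewObservations:
    blinks_per_minute: float            # humans typically blink ~15-20 times/min
    posture_abrupt: bool                # sudden jumps in posture between frames
    lighting_consistent: bool           # brightness/skin tone stable across frames
    voice_matches_known: Optional[bool] # None if the candidate's voice is unknown
    answers_consistent: bool            # answers align with the expected discourse

def deepfake_risk_score(obs: InterviewObservations) -> int:
    """Return a 0-5 risk score; higher means more deepfake warning signs."""
    score = 0
    if obs.blinks_per_minute < 5:       # unnaturally rare blinking (assumed cutoff)
        score += 1
    if obs.posture_abrupt:
        score += 1
    if not obs.lighting_consistent:
        score += 1
    if obs.voice_matches_known is False:  # only counts when the voice is known
        score += 1
    if not obs.answers_consistent:
        score += 1
    return score
```

A candidate who barely blinks, jumps in posture, shows inconsistent lighting, sounds wrong, and answers off-script would score 5; an unremarkable interview scores 0. In practice such a score would only prompt further verification, never an automatic decision.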

By Editor
