
Deepfake technology uses machine learning and artificial intelligence (AI) to manipulate or create synthetic audio, video, and images that appear authentic. Deepfakes commonly surface in entertainment and politics, where they are used to spread false information and propaganda. For instance, deepfakes have been used to show a celebrity or political leader saying something they never said, creating fake news.

Unfortunately, cybercriminals have found in deepfakes a new tool for cyberattacks, and they are now using the technology in ways that pose a variety of risks to enterprises.

How Cybercriminals Are Using Deepfakes

Deepfake technology is now used to create scams, hoaxes and false claims that undermine and destabilize organizations. For instance, a manipulated video might show a senior executive admitting to a financial crime or spreading misinformation about a company’s products. Such corporate sabotage takes considerable time and money to disprove and can damage a business’s reputation.

Businesses can also be negatively impacted through social engineering attacks such as phishing, which relies on impersonation, as in business email compromise. Deepfake-enabled social engineering extends the same tactic to voice or video impersonations. A notable example was reported in The Wall Street Journal: in March 2019, fraudsters used AI to mimic a chief executive’s voice and directed a payment of $243,000.

Cybercriminals are able to execute such social engineering attacks using information that is readily available online. They can research a business, its employees and its executives. The criminal will even reference an actual event picked from social media, for instance, a financial director who has just returned to work from a holiday, to sound more legitimate.

This emerging security threat is also made possible by the development of video editing software that can swap faces and alter facial expressions. Such advances have enabled deepfakes to fool biometric checks, such as facial recognition, that are used to verify user identities.

The deepfake cybersecurity threat has become such a concern that the Federal Bureau of Investigation (FBI) has issued a Private Industry Notification (PIN) cautioning companies about the possible use of fake content in a newly defined cyberattack vector referred to as Business Identity Compromise (BIC).

How to be Prepared and Protect Against Deepfakes

Deepfake videos and images can often be recognized by checking for telltale artifacts: unnatural body shape or posture, a lack of blinking in videos, unnatural facial expressions, abnormal skin tone, poor lip-syncing, odd lighting, and awkward head and body positioning. However, cybercriminals keep evolving their techniques and creating more convincing deepfakes.
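As an illustration of one of these checks, the "lack of blinking" heuristic can be approximated with the eye aspect ratio (EAR), a standard measure from facial landmark analysis. The sketch below is a minimal, hypothetical example: it assumes per-frame eye landmarks have already been extracted by a face landmark detector (not shown), and the function names and the closed-eye threshold are illustrative, not from any specific product.

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks.

    EAR drops sharply when the eye closes, so a long run of frames
    with no low-EAR dips can indicate the unnatural lack of blinking
    sometimes seen in deepfake videos.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    # Vertical eye openings over the horizontal eye width.
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def blink_count(ear_per_frame, closed_threshold=0.2):
    """Count blinks as open-to-closed transitions in the EAR signal."""
    blinks, was_open = 0, True
    for ear in ear_per_frame:
        is_open = ear > closed_threshold
        if was_open and not is_open:
            blinks += 1
        was_open = is_open
    return blinks
```

In practice the landmarks would come from a detector such as dlib or MediaPipe, and a near-zero blink count over a long clip would be one weak signal, to be combined with the other artifacts listed above rather than used alone.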

Other countermeasures include the development of automated solutions that detect deepfakes, as well as deepfake legislation introduced through the National Defense Authorization Act (NDAA) in December 2019.

Unfortunately, these measures alone have not been enough, and enterprises themselves must work to reduce the impact of these attacks. The following measures can help:

  1. Use anti-fake technologies
    Businesses should explore automated technologies that help identify deepfake attacks. They should also consider watermarking images and videos.
  2. Enforce robust security protocols
    Implement security protocols that help catch deepfakes, such as automatic checks for any procedure involving payments. For instance, put systems in place that require verification through a second channel before funds are released.
  3. Develop new security standards
    As security threats keep evolving, so should security standards within a company. For instance, introduce new security standards involving phone and video calls.
  4. Training and awareness
    Enterprises should provide regular training and raise awareness among employees, management and shareholders about the dangers deepfakes pose to businesses. When all involved parties can identify deepfake social engineering attempts, the chances of falling victim drop.
  5. Keep user data private
    Deepfake attackers rely on information found in public sources such as social media. Although not a failsafe measure, company profiles can be made private. Users should also avoid connecting with strangers and posting too much personal information online.
  6. Disinformation response policy
    Some deepfake incidents, such as fake videos purporting to come from top management, are outside an enterprise’s control. However, establishing a disinformation response plan will help in a reputation crisis. The plan should include monitoring and cataloging all official multimedia output, so that original content can be presented to the public as the authentic record.
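The second-channel payment check described above can be sketched as a simple workflow: a one-time code is generated for each payment request and must be read back over a separately initiated channel (for example, a phone call to a number already on file) before the payment is released. This is a minimal illustration, not a production design; the class and method names are hypothetical.

```python
import hmac
import secrets

class PaymentVerifier:
    """Sketch of an out-of-band payment check: a request received over
    email is only released after a one-time code, delivered over a
    separate channel, is confirmed."""

    def __init__(self):
        self._pending = {}  # payment_id -> (amount, expected code)

    def request_payment(self, payment_id, amount):
        # Generate a one-time code; in practice it would be delivered
        # only over the second channel, never in the original email thread.
        code = secrets.token_hex(4)
        self._pending[payment_id] = (amount, code)
        return code

    def confirm_payment(self, payment_id, code):
        entry = self._pending.get(payment_id)
        if entry is None:
            return False  # unknown or already-confirmed payment
        # Constant-time comparison to avoid leaking the expected code.
        ok = hmac.compare_digest(entry[1], code)
        if ok:
            del self._pending[payment_id]  # a code is single-use
        return ok
```

Because each code is single-use and verified out of band, an attacker who convincingly fakes an executive’s voice or email still cannot complete the transfer without also controlling the second channel.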
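One minimal building block for the monitoring step of a disinformation response plan is a fingerprint registry of official media releases, so a circulating clip can be checked against what the company actually published. The sketch below, with hypothetical names, uses exact SHA-256 hashes, which only match byte-identical copies; detecting re-encoded or edited clips would require perceptual hashing or cryptographic signing, which are beyond this sketch.

```python
import hashlib

def fingerprint(media_bytes):
    """SHA-256 fingerprint of a published media file's raw bytes."""
    return hashlib.sha256(media_bytes).hexdigest()

def is_official_release(candidate_bytes, registry):
    """Check a circulating clip against the registry of official releases."""
    return fingerprint(candidate_bytes) in registry

# At publication time, every official release is fingerprinted and registered.
registry = {fingerprint(b"official statement video bytes")}
```

A clip absent from the registry is not proof of a deepfake, but the registry gives the response team an authoritative record of original content to point the public to during a reputation crisis.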

Conclusion

Deepfakes are an emerging cybersecurity concern, and enterprises must be aware of the potential threats and stay prepared. Although a poorly generated deepfake may still be identifiable with the naked eye, the technology continues to advance, and countermeasures must keep pace.