Cybercrime Evolves: Deep Fake AI Is Used to Steal Money

March 5, 2020
Reading Time: 3 minutes

Cybercrime continues to be a vexing problem for businesses. Criminals are using a variety of techniques, some sophisticated, some very basic, to penetrate networks, steal money and generally cause confusion in the workplace. In particular, losses from social engineering continue to mount.

Social engineering is an attack on a person or an organization involving manipulation or deception via a trusted method of communication to steal data, money or goods. Currently, the most common attack vector is via email. Criminals can hack trusted, authentic email addresses and use them to deceive their victims. Or they don’t need to hack at all; instead, they mock up a fake email to look like an authentic one.
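One common tell of a mocked-up email is a sender domain that closely resembles, but does not exactly match, a domain the business already trusts. The sketch below illustrates that check with Python's standard-library `difflib`; the trusted-domain list and threshold are illustrative assumptions, not part of any particular product.

```python
from difflib import SequenceMatcher

# Domains the organization actually does business with (illustrative list).
TRUSTED_DOMAINS = {"acme-corp.com", "supplierco.com"}

def is_suspicious_sender(address: str, threshold: float = 0.8) -> bool:
    """Flag senders whose domain closely resembles, but does not equal,
    a trusted domain -- a common spoofing pattern (e.g. 'acrne-corp.com')."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match to a known-good domain
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(is_suspicious_sender("billing@acme-corp.com"))   # exact match: not flagged
print(is_suspicious_sender("billing@acrne-corp.com"))  # lookalike: flagged
```

A filter like this catches only the crudest spoofs; it does nothing against a genuinely hacked, authentic mailbox, which is why procedural controls matter as well.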

Businesses can protect themselves from these attacks by being wary of changes to payment instructions or unusual emails and by verifying any changes with a phone call to the person who originated the request. This method of dual verification frequently thwarts simple email attacks.
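The dual-verification control above can be expressed as a simple rule: never honor a payment change until someone has called the requester back on a number already on file. Here is a minimal sketch of that workflow; the contact directory, names, and numbers are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Phone numbers recorded when the business relationship began -- never
# taken from the email requesting the change (illustrative directory).
KNOWN_CONTACTS = {"ap@supplierco.com": "+1-555-0100"}

@dataclass
class PaymentChangeRequest:
    requester_email: str
    new_account: str

def callback_number(request: PaymentChangeRequest) -> Optional[str]:
    """Return the on-file phone number to call before honoring the change,
    or None if the requester is unknown (escalate instead of paying)."""
    return KNOWN_CONTACTS.get(request.requester_email)

req = PaymentChangeRequest("ap@supplierco.com", "NEW-ACCT-9912")
number = callback_number(req)
if number:
    print(f"Hold payment: verify by calling {number} on file.")
else:
    print("Unknown requester: escalate to security before acting.")
```

The key design point is that the callback number comes from an independent channel, so a criminal who controls the email thread cannot also supply the verification contact.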

However, cybercrime is a cat-and-mouse game, and the criminals have been staying busy. A new threat is emerging that jeopardizes businesses and defeats current methods of risk management. Criminals are reportedly using “deep fake” technology to trick businesses. Deep fakes are an artificial-intelligence-assisted method of copying someone’s face or speech. Any executive whose speech is recorded at some length in a public forum is vulnerable to such an attack. Software analyzes the recordings and creates a sophisticated fake of that person’s speech pattern that then can be used to socially engineer victims.

While such scams were once the realm of science fiction, a case in the United Kingdom last year involved deep fake technology. The CEO of a UK-based firm thought he was speaking to his boss, the CEO of the firm's parent company in Germany, who instructed him to wire $243,000 to a Hungarian supplier. The criminals had used artificial-intelligence software to create a deep fake of the German CEO's voice. Believing he was talking to his boss, the UK-based executive promptly wired out the money. Only after the criminals attempted to repeat the fraud did he become suspicious and uncover the scheme. (The Wall Street Journal)

While this technology is in its infancy, it is clearly mature enough to be used to commit cybercrime. This shows the importance of working with a quality cyber insurer, for two main reasons. First, when underwriting your risk, we assess your risk management practices and offer tools to help prevent such a claim from occurring. Good risk management includes a culture of security in which employees are regularly trained to spot suspicious activity and alert others to it.

Second, as threats evolve, even companies with the best security can't anticipate every type of attack. Brand-new exploits called zero-day attacks — previously unknown vulnerabilities that hackers use to access systems — can hit even the most prepared companies. Insurance ultimately provides a financial backstop if all the best risk management techniques fail.

At Devon Park, we offer up to $500,000 in cybercrime limits on our Errors and Omissions, Media and Privacy (EMP) product. Contact your Devon Park Specialty underwriter today for more information or a quote.


As always, thank you for your support and business.

Written by Erik Tifft
Second Vice President, Underwriter | 844-438-6775, ext. 2354