Deepfakes in the financial world: Why banks should act now

2 December 2020 – Agnieszka M. Walorska (@agaw)

Financial firms must deal with deepfakes – realistic video and audio recordings that have been manipulated or produced with the help of artificial intelligence. The danger to banking is very real.

Fake pictures, videos and audio recordings have been around for a long time, but the rapid development of so-called deepfakes is now creating an entirely new risk potential. Deepfakes (the term combines ‘deep learning’ and ‘fake’) are highly complex products of artificial intelligence: in simple terms, two neural networks work against each other in a so-called generative adversarial network (GAN), one generating new data from existing data sets while the other tries to distinguish the forgeries from the real thing. This may be one of the most groundbreaking innovations in the field of artificial intelligence to date. At the same time, the technology is still far from reaching its full potential and continues to develop at a rapid pace.
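To make the adversarial principle concrete: one network, the generator, produces synthetic samples, while a second network, the discriminator, learns to tell them apart from real data, and each improves by competing against the other. The following is a minimal sketch of that training loop in PyTorch; all layer sizes, variable names and the random stand-in data are illustrative assumptions, not a real deepfake system:

```python
# Minimal sketch of a generative adversarial network (GAN), the
# two-network principle behind many deepfakes. All sizes are toy values.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, BATCH = 16, 64, 32  # illustrative assumptions

# Generator: turns random noise into synthetic samples.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 128), nn.ReLU(),
    nn.Linear(128, DATA_DIM), nn.Tanh(),
)

# Discriminator: estimates the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    # Stand-in for a batch of real training data; a real system
    # would load actual face images or voice samples here.
    real = torch.randn(BATCH, DATA_DIM)
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # 1) Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    loss_d = (loss_fn(discriminator(real), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    loss_d.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    loss_g.backward()
    opt_g.step()
```

Real deepfake tools apply this same principle to faces and voices at far larger scale, which is why the forgeries keep improving: every weakness the discriminator learns to spot teaches the generator to produce a more convincing fake.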

With deepfakes, fraudsters can put fabricated statements into the mouths of real people and show them in completely false contexts. For example, the software can find numerous recordings of Angela Merkel on the Internet, analyze them and merge them into a new video with her original voice and freely invented content. With large amounts of data and the right apps, even non-experts can now create video, audio and voice material that looks and sounds deceptively real to untrained eyes and ears.


What does this mean for banks?

So far, there has not been a deepfake-related scandal in the financial world. Rather, these novel forgeries have largely been used to denigrate women, including numerous female celebrities, with pornographic photos and videos.

Why is this topic nevertheless urgent for banks? Because deepfakes have the potential to cause damage at very different levels within the financial industry. Whether through fake instructions that manipulate employees and customers into taking certain actions, or through the distribution of fake news that misleads customers, investors and the public, the consequences could be serious.

It is therefore time to prepare employees, customers and internal structures for this risk. Employees already need to be alerted to so-called social engineering: the targeted manipulation of people by fraudsters in order to gain access to confidential information or financial resources. This is also one of the main dangers posed by deepfakes.

One type of such crime is vishing, a phishing method in which a phone call is used as the instrument of fraud instead of an email (voice + phishing = vishing). The use of deepfakes for voice generation makes this method more powerful than classic phishing, because the deceptive authenticity of the voice makes the fake harder to detect. Vishing carries the risk of enticing customers or employees over the phone to divulge sensitive information or to perform specific actions such as money transfers. Even such classics as the ‘grandchild trick’ (defrauding the elderly) can be made easier and more successful than before with the help of a deceptively real imitation of the supposed relative’s voice.


Last year, the first case of AI-based voice fraud became public, costing the company concerned 220,000 euros. The software imitated a manager’s voice in English so successfully, down to his slight German accent, that his British colleague complied with the caller’s request and transferred the amount demanded. So far, this is the only publicly known case of its kind, but it can be assumed that many companies affected by vishing have not disclosed similar incidents and that further attempts will follow.

In extreme cases, fake messages spread via the media could influence stock prices and trading movements. Imagine, for example, a ‘recording’ of an allegedly secret or sensitive conversation between top managers and politicians, or deepfakes used for blackmail, such as fabricated visits to the red-light district or invented connections with extremist groups. In principle, there is no limit to the criminals’ imagination: damaging the reputation of an institution could be just as easy as blackmailing an individual. In the long term, there is also the concern that identity theft could be used to hijack the online authentication of customers during onboarding. For the time being, however, biometric identification procedures are still considered forgery-proof.


Concrete measures are needed: Training, process optimization, knowledge transfer

Some institutions are already investing in technological measures against digital forgery attempts, often in cooperation with fintechs. This is timely and important, but it is not enough. The priority now is to raise awareness among employees and customers and to design processes that make social engineering more difficult.

Effective methods of knowledge transfer for creating awareness of fakes in the everyday work environment already exist, but they must be put to use before the phenomenon becomes an acute threat. One deepfake countermeasure that banks should already be taking is education and employee experience. In concrete terms, this means offering training that shares current knowledge about deepfakes with employees, and informing customers how to detect fraud.

Fraudsters target people. And people need the right education about digital experiences; this will be the decisive factor in understanding dangers and recognizing them in everyday life.

How far-reaching the effects of deepfakes will be is not yet foreseeable. The most important thing for the financial world now is to recognize that it stands at the beginning of a long development and to prepare accordingly. The age of fake news has already begun, and with the increasing presence of deepfakes in business, politics and society, disinformation is reaching a new level.
