Deepfakes and online identity – A threat to identity management systems?
Nowadays, any video can be a lie. So-called deepfakes look like realistic recordings but are fake. Anyone with the right software can create deceptively real-looking videos in which people's faces are swapped, or in which people say things they never said or do things they never did. As digitization advances, fraudsters keep finding new ways of committing fraud. Deepfakes present a false reality that has crept into televisions and mobile devices through social media.
What does “deepfake” mean?
The term combines the English words “deep learning” and “fake”. Deep learning is an artificial intelligence technique that automatically learns models and patterns from huge data sources. Deepfakes are images and videos of real people generated by artificial intelligence at high computational cost. By faking the identity of a celebrity or opinion leader, deepfakes spread hoaxes and fake news easily and believably. On top of that, companies and organizations worry that such forged video or audio files could be used to falsify identity and impersonate customers and users in order to commit fraud.
Identity theft with deepfakes?
Many organizations and companies fear that these techniques could pose a real risk to eKYC processes. Since deepfakes can already produce convincing videos of public figures to spread fake news and hoaxes, they wonder whether video identification in a typical customer onboarding process could be tampered with in the same way.
Is identity protection possible against deepfake attacks?
Several key elements ensure that the person being recorded during a video identification really is who they claim to be. Biometric processes capable of verifying a person's identity play a major role. Advanced techniques based on exact mathematical models that meet high security criteria determine whether the input is a natural person recorded in real time or a forged, computer-generated video. A simple selfie identification does not meet standard regulations and lacks security in identity verification, whereas video identification guarantees this level of security through timestamps and validity checks performed in real time.
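The article does not describe how the real-time timestamp check works internally. As one illustration of the underlying idea, the sketch below shows a hypothetical replay-protection check: the server issues a fresh nonce for each session, and every captured frame must echo that nonce and carry a timestamp inside a short freshness window. A pre-recorded deepfake cannot contain a nonce that was only issued after recording began. All names and the skew window are assumptions, not part of any specific vendor's system.

```python
import time
from dataclasses import dataclass

MAX_SKEW_SECONDS = 5  # assumed freshness window; real systems tune this value


@dataclass
class Frame:
    server_nonce: str   # nonce the server issued for this session
    captured_at: float  # timestamp embedded when the frame was captured


def is_live_session(frames: list[Frame], session_nonce: str, now: float = None) -> bool:
    """Hypothetical real-time check: every frame must echo the session nonce
    and carry a timestamp inside the freshness window."""
    if now is None:
        now = time.time()
    return bool(frames) and all(
        f.server_nonce == session_nonce
        and abs(now - f.captured_at) <= MAX_SKEW_SECONDS
        for f in frames
    )
```

A replayed recording fails either because its embedded nonce does not match the one issued for the live session, or because its timestamps fall outside the freshness window.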
The original ID document, which must be valid and shown during the recording, is also very important. Its key feature is the hologram, which verifies the originality and integrity of the document. So-called two-factor authentication adds a further security element.
Committing fraud through a deepfake would be impracticable in a real-time video identification process, especially one combined with a two-factor authentication method.
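The article does not say which second factor is meant; a widely used one is a time-based one-time password (TOTP) per RFC 6238, where a code derived from a shared secret and the current time must be entered alongside the video identification. A minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, at: float = None, step: int = 30, digits: int = 6) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the current time step,
    followed by RFC 4226 dynamic truncation."""
    if at is None:
        at = time.time()
    counter = int(at // step)
    msg = struct.pack(">Q", counter)            # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                  # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 test vector: secret "12345678901234567890", T = 59 s, 8 digits
print(totp(b"12345678901234567890", at=59, digits=8))  # → 94287082
```

Because the code changes every 30 seconds, a pre-recorded deepfake cannot supply a valid code for the moment the live session actually takes place.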
Deepfakes therefore do not present a threat to identity management systems, thanks to the high level of security that a real-time video identification process offers. In addition, all systems that comply with standard regulations on digital identity, such as eIDAS and AML5, are protected against this kind of fraud.
Moreover, it is nearly impossible to obtain the picture, audio, or video samples needed for a deepfake of a person who is not exposed to the media.