Automation is instrumental in helping Synapse scale. Rather than hiring a large back-office workforce, we’ve been able to speed up results and reduce costs by investing in technology that streamlines traditionally manual processes.
A perfect example of this is our computer vision stack, which helps us automate Government ID verification. Not only does this reduce our costs, but it improves accuracy and enhances customer experience.
Automated Government ID verification is only part of the puzzle. Across the board we have seen automation lead to improved operational efficiency and end-user experience. Because of this, we sincerely believe that an automated back office is the future of banking.
One such area that is ripe for automation is remotely verifying a customer’s identity when additional verification is needed. For example, simple tasks like opening a deposit account remotely or initiating a high-risk ACH transaction often require additional identity verification. Traditionally this is handled by a bank’s customer service agent, who reaches out to the customer to verify their identity through video calls (like BBVA), selfie uploads (like HSBC), or requests for additional documentation. Although these are viable options, they all require some form of manual intervention, which increases costs and slows down the customer experience.
To remedy this problem we built Video Auth.
Video Auth is a simple and automated way to verify customer identity via a 5-second video recording.
Here’s how it works:
Step 1: User submits their Government ID:
Step 2: User then submits a 5-second video with an authorization message:
And in about 20 seconds, we verify the user’s identity, all without any human intervention.
The verification step consists of two parts:
1. Facial Recognition
Traditionally, facial recognition techniques such as eigenfaces, Fisherfaces, or LBPH would be applied to large training datasets and used to predict the identity of new face images. These techniques work, but they require labeled datasets, time to train, and consistency in environmental conditions such as lighting and camera angle. More recently, deep neural networks have been applied to facial recognition to create models that are invariant to environmental changes. However, these require even larger, and quite diverse, datasets.
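For a concrete sense of what that traditional workflow involves, here is a minimal sketch of the LBPH approach using OpenCV’s contrib module. The image files and labels are purely illustrative; the point is that the model must be trained up front on a labeled face dataset before it can predict anything.

```python
import cv2
import numpy as np

# Hypothetical labeled training set: grayscale face crops plus integer labels.
train_faces = [cv2.imread(path, cv2.IMREAD_GRAYSCALE)
               for path in ["alice_1.png", "alice_2.png", "bob_1.png"]]
train_labels = np.array([0, 0, 1], dtype=np.int32)  # 0 = Alice, 1 = Bob

# LBPH must be trained on labeled data before it can recognize anyone.
recognizer = cv2.face.LBPHFaceRecognizer_create()
recognizer.train(train_faces, train_labels)

# Predicting a new face returns a label and a confidence score; results degrade
# when lighting or camera angle differ from the training images.
label, confidence = recognizer.predict(
    cv2.imread("new_face.png", cv2.IMREAD_GRAYSCALE))
print(label, confidence)
```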
Our challenge from the outset was clear: create a face model that could quickly differentiate faces using only the small amount of information available in our 5-second user authorization videos.
We began by splitting each video into 5–10 frames, saved as .png files. This, in conjunction with our facial detection algorithms, gave us a small, environmentally homogeneous, labeled dataset for each user. Because we don’t have time within an API call to train or even update a neural network with output classes corresponding to every user, we decided to use a deep neural network that encodes the 128 features that best distinguish human faces in general. We apply this model to each face extracted from the user’s authorization video, as well as to the faces extracted from the user’s government ID, and make mathematical comparisons between the resulting encodings. These comparisons are fast, accurate, and invariant to environmental changes.
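To make that concrete, below is a simplified sketch of the embedding-and-compare approach using the open-source face_recognition library, which wraps dlib’s 128-dimensional face encoder. It is a stand-in for our production pipeline, not a copy of it; the file names, frame count, and distance threshold are illustrative.

```python
import cv2
import face_recognition
import numpy as np

def extract_frames(video_path, max_frames=10):
    """Pull up to max_frames evenly spaced RGB frames from the video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    step = max(total // max_frames, 1)
    frames = []
    for i in range(0, total, step):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i)
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    cap.release()
    return frames

# 128-d encoding of the face printed on the government ID.
id_image = face_recognition.load_image_file("govt_id.png")
id_encoding = face_recognition.face_encodings(id_image)[0]

# 128-d encodings for every face found in the authorization video frames.
video_encodings = []
for frame in extract_frames("authorization_video.mp4"):
    video_encodings.extend(face_recognition.face_encodings(frame))

# Compare: a small Euclidean distance between encodings suggests the same person.
distances = face_recognition.face_distance(video_encodings, id_encoding)
is_match = np.mean(distances) < 0.6  # 0.6 is dlib's commonly cited cutoff
print("Face match:", is_match)
```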
2. Voice to Text
After we have verified a user’s face, we need to know which service they wish to authorize. For this, we split the audio out of the user’s authorization video. We then use speech recognition algorithms to identify and transcribe their words. Speech recognition is based on a dictionary of phonemes, the basic sound components that build words. As a simple example, a word such as “Account” can be deconstructed phonetically as A-KOW-NT. Phonemes provide a low-dimensional vocabulary from which all words are built. Therefore, we can use phonemes to predict words just as we use facial features to predict faces. Now that we know our user’s identity and what they are authorizing, we can process their request for financial services.
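As an illustration of this leg of the pipeline, the sketch below strips the audio track out of the video with ffmpeg and transcribes it with CMU Sphinx, a phoneme-based recognizer, via the SpeechRecognition package. The file names, the choice of recognizer, and the keyword check are illustrative rather than a description of our production system.

```python
import subprocess
import speech_recognition as sr

VIDEO = "authorization_video.mp4"
WAV = "authorization_audio.wav"

# Strip the video stream and write 16 kHz mono PCM audio for the recognizer.
subprocess.run(
    ["ffmpeg", "-y", "-i", VIDEO, "-vn", "-ac", "1", "-ar", "16000", WAV],
    check=True,
)

recognizer = sr.Recognizer()
with sr.AudioFile(WAV) as source:
    audio = recognizer.record(source)

# CMU Sphinx (requires the pocketsphinx package) maps audio to phonemes and
# phonemes to words via its pronunciation dictionary.
transcript = recognizer.recognize_sphinx(audio).lower()
print("Transcript:", transcript)

# Simple check that the spoken authorization mentions the expected action.
print("Authorizes ACH:", "authorize" in transcript and "ach" in transcript)
```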
What are good use-cases for this service?
This is yet another tool that we offer to verify identities. We find it particularly helpful for those wishing to open deposit accounts remotely or those involved in high-risk industries. If you believe your platform could benefit from this service, please reach out to us to discuss.
When is this available?
Currently Video Auth is in beta. To use it, you first need to supply a GOVT_ID on the user. Once that is submitted, you will need to supply a VIDEO_AUTHORIZATION as a physical document to trigger Video Auth verification. You can see the full list of physical documents in our API docs.
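For reference, here is a rough sketch of what that flow can look like. The base URL, auth headers, document IDs, and field layout below are simplified placeholders; our API docs are the source of truth for the exact request shape.

```python
import base64
import requests

API_BASE = "https://uat-api.synapsefi.com/v3.1"  # sandbox base URL (placeholder)
USER_ID = "user_id_goes_here"
DOC_ID = "base_document_id_goes_here"
HEADERS = {}  # gateway / user auth headers as described in the API docs

def attach_physical_doc(doc_type, path, mime):
    """PATCH a base64-encoded physical document onto the user's base document."""
    with open(path, "rb") as f:
        value = f"data:{mime};base64," + base64.b64encode(f.read()).decode()
    payload = {"documents": [{"id": DOC_ID,
                              "physical_docs": [{"document_type": doc_type,
                                                 "document_value": value}]}]}
    return requests.patch(f"{API_BASE}/users/{USER_ID}",
                          json=payload, headers=HEADERS)

# 1. Supply the government ID first...
attach_physical_doc("GOVT_ID", "govt_id.png", "image/png")
# 2. ...then supply the authorization video to trigger Video Auth verification.
attach_physical_doc("VIDEO_AUTHORIZATION", "authorization_video.mp4", "video/mp4")
```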
Our next planned update for this service will include a machine-learned lip-reading routine, followed by foreign language support.
We recommend reaching out to us at help@synapsefi.com before enabling this feature.