
Artificial Intelligence and the Economy | 19-yr-old applies facial recognition AI in improving fraud detection

Published: Monday | June 18, 2018 | 12:00 AM | Jordan Micah Bennett / Contributor
Leon Wright, a 19-year-old Jamaican artificial intelligence researcher and programmer from Ctrl-IT Inc.

Artificial Intelligence and the Economy features machine-learning computer models in Jamaica. These models are computer algorithms, or 'smart apps', that seek to give computers the ability to learn, somewhat as children do, to perform a variety of tasks. Here, we highlight how an author's work may solve a particular set of real-world tasks or problems. In doing this, we aim to encourage more local research and development in artificial intelligence.

Today, we will highlight machine learning applied to automatic facial recognition and improved fraud detection. This is work being done by Leon Wright, a 19-year-old Jamaican artificial intelligence researcher and programmer from Ctrl-IT Inc. Intriguingly, Wright is one of the brightest, most resourceful and reliable employees at Ctrl-IT Inc, yet he has not completed his university degree.

Bennett: What is the most significant thing you've used machine learning to do at your company?

Wright: Among a few projects, I worked on building an account-opening application for a local financial institution. It's an application where a person opens an account at the institution, and the staff take out a tablet, ask for the person's identification, then scan the ID. Our algorithm then grabs the TRN from the ID and locates the face on it. If the picture on the identification card is of good enough quality, it is stored in a database of face images, so that we can later compare it to the account holder's actual face.

We then compare the image on the ID with a selfie the person takes; the selfie is used to ask our learning algorithm whether it matches the face on the ID. The algorithm thereby gains the ability to recognise the person the next time he or she comes into the institution, through the camera there. When the person next comes in with his or her identification card, our algorithm takes an image of the person from the institution's camera, or a selfie, and tries to match it with data belonging to users who have already signed up. Thus, we're able to quickly verify whether the identification card the user brings in indeed belongs to that user, rather than to an impersonator. In this way, we've sensibly applied machine learning to build a type of fraud prevention around quickly verifying people's identities.

We're still working to roll out more products offering more ways to prevent fraud. For example, we've already composed a video-based application for call centres. This application is equipped with facial-detection algorithms like the one I discussed above, enabling a similar level of security against fraud: a person who calls in would likely not be able to fake [his/her] identity, given that we would have the correct person's face on file, and our algorithm could quickly return whether the caller was actually a person in our database; really, it would detect whether that person was who he/she claimed to be. This adds a layer of security or verification, where we facilitate video calls so that callers can be seen and verified with our learning algorithms.

What type of learning algorithm did you use? For example, did you use a convolutional neural network, or something else? Also, remind us why we don't need to 'reinvent the wheel' when it comes to applying these machine-learning models.

We essentially used a class of learning algorithms called convolutional neural networks (CNNs). Convolutional neural networks are loosely inspired by actual brains. We used a library called TensorFlow, which already has CNNs packaged as models. These models are flexible, and we adjust the TensorFlow CNN representations to our particular needs. With these models, we don't need to start from scratch: the models, which comprise thousands of lines of computer code, have already been composed by PhDs in the field through Google, and released in the form of TensorFlow libraries we can utilise with a few lines of computer code.

Tell us a little more about the convolutional neural network, such as what it is, what goes on in your application of the CNN, how many layers you used, etc.

CNNs are basically built from a sequence of mathematical operations called convolutions, which form artificial layers of calculations. CNNs enable us to compose learning algorithms that do well on machine-learning tasks involving image processing.
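To make the idea of a convolution concrete, here is a minimal numpy sketch (not Ctrl-IT's production code; the image and filter values are illustrative) of a single convolution sliding a small filter over a tiny image. A filter that contrasts left and right pixels responds strongly where a vertical edge sits, which is the kind of low-level feature early CNN layers learn.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over a 2-D image and sum the element-wise
    products at each position (a 'valid' convolution, no padding)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Tiny image: bright left half, dark right half.
image = np.array([
    [1., 1., 0., 0.],
    [1., 1., 0., 0.],
    [1., 1., 0., 0.],
])
# Vertical-edge filter: responds where left pixels differ from right.
kernel = np.array([
    [1., -1.],
    [1., -1.],
])
print(conv2d(image, kernel))  # strongest response at the boundary column
```

A real CNN stacks many such filters in many layers, and learns the filter values from data rather than fixing them by hand.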

The model is moderately large, with 132 layers of computation. CNNs can be trained so that the model learns to do things like detect faces. We trained the CNN by feeding it labelled images of faces belonging to persons from the financial institution. We employed something called a triplet loss, which enables us to match faces to persons. We 'query' the CNN, asking it whether it thinks it is seeing a particular person's face (say, when somebody walks in and we capture their face on camera, and we want to see if he/she is in the database). When the query happens, the CNN outputs an array, or collection of values, that represents each face.
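The triplet loss Wright mentions can be sketched in a few lines of numpy (a toy illustration, not the institution's model; the 3-D embeddings are made-up values, where real face embeddings are much longer vectors produced by the CNN). It compares an anchor face against a positive example (same person) and a negative example (different person), and penalises the model unless the same-person distance beats the different-person distance by a margin.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet loss: push the anchor-positive distance below the
    anchor-negative distance by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance, same person
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance, other person
    return max(d_pos - d_neg + margin, 0.0)

# Toy embeddings: anchor and positive are close, negative is far,
# so this triplet contributes zero loss (nothing left to learn here).
anchor = np.array([0.1, 0.9, 0.0])
positive = np.array([0.1, 0.8, 0.1])
negative = np.array([0.9, 0.1, 0.2])
print(triplet_loss(anchor, positive, negative))  # 0.0
```

During training, this value is averaged over many triplets and minimised, which is what pulls embeddings of the same face together and pushes different faces apart.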

Each collection of values that represents an object, such as a face, is called an embedding in machine learning. Embeddings of persons' faces are generated by the CNN, and we store them for later use. When a person comes into the financial institution, we take the input picture or selfie and ask the CNN if the person exists in the database. The query happens when the camera image of the person is passed through the CNN's structure of artificial neurons and synapses: a new embedding is made that represents the face of the person who just walked in. We then compare the new embedding to the prior embeddings generated at sign-up time, calculating the distance between a database record belonging to a person and the camera image taken when he/she walks in. Close distances signify that the camera and database image pairs likely belong to the same person, and the final decision is made against a predefined threshold that represents whether the faces match or not.
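The look-up step described above amounts to a nearest-neighbour search with a distance threshold. A minimal numpy sketch, assuming a hypothetical database keyed by TRN (the names, 3-D embeddings and threshold are illustrative, not Ctrl-IT's actual values):

```python
import numpy as np

# Hypothetical enrolled embeddings keyed by customer TRN.
database = {
    "111-111-111": np.array([0.11, 0.80, 0.05]),
    "222-222-222": np.array([0.70, 0.20, 0.60]),
}

def identify(query_embedding, database, threshold=0.5):
    """Return the TRN of the closest enrolled face if its Euclidean
    distance falls under the match threshold, otherwise None."""
    best_trn, best_dist = None, float("inf")
    for trn, stored in database.items():
        dist = np.linalg.norm(query_embedding - stored)
        if dist < best_dist:
            best_trn, best_dist = trn, dist
    return best_trn if best_dist < threshold else None

# A walk-in selfie close to the first record matches it; a face unlike
# anything enrolled falls outside the threshold and is rejected.
print(identify(np.array([0.10, 0.85, 0.00]), database))  # 111-111-111
print(identify(np.array([0.99, 0.99, 0.99]), database))  # None
```

The threshold is the tuning knob: too tight and genuine customers are rejected, too loose and impersonators slip through.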

Any problems with the machine-learning, face-detection model you guys would like to improve?

There are problems with facial recognition. For example, there is an employee here named Varij, who currently wears a big beard. In most of his pictures he has no beard and his face appears skinnier, so his face now looks almost completely different from his pictures. There was quite a high error rate when trying to match his current face to the face pictures of him we had on file. In this type of problem, the two things we're trying to match, although pertaining to a single person, may be so different that errors result. In the scenario above, the distance for Varij was quite high, and that's difficult to solve without more representative images of his face in the database. Still, this type of distance algorithm is good enough most of the time, even in scenarios where data is lacking.

What methods could be used to improve the learning algorithms?

We could work to increase how much data we feed the algorithm. The more data we have, the more examples the algorithm gets to train on.

Tell us briefly about some societal impacts of your application?

Our algorithms can help to reduce a lot of fraud and related crimes.

What types of smart apps or machine learning models do you plan to work on soon?

I plan on continuing my work on facial recognition while improving the accuracy of my current algorithms. I also plan on using natural language processing and sentiment analysis to aid me in building my very own stock and cryptocurrency platform. Also, I plan on using machine learning for route planning in a logistics application I am conceptualising at this time.

I'm looking forward to collaborating with you on machine-learning projects.

Next week, we will highlight more Jamaican persons applying machine learning.

- Jordan Micah Bennett is inventor of the Supersymmetric Artificial Neural Network and author of 'Artificial Neural Networks for Kids'.