With the iPhone X, Apple introduced face recognition to iOS. Face recognition is a form of biometric authentication that can be used to unlock the phone or to safeguard individual operations on it, such as payments.

The face recognition technology on the iPhone X is a major step forward in bringing this capability to consumer products and other mobile designs. Manufacturers that attempted to ship the feature in the past struggled with reliability and security, and it has been a long road to bring it to iOS. With that in mind, we're taking a look at some of the developments that have helped to make this the new standard for mobile security.

Sensors

Apple's Face ID works by building a 3D model of the face. It then compares that model to a stored template of the user's face and produces a score based on the similarity of the two. Depending on the score, the device grants or denies access to the person holding the phone.
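Apple has not published the matching algorithm behind Face ID, but the compare-and-threshold idea can be sketched in a few lines. In this illustration the templates are plain feature vectors, the similarity measure is cosine similarity, and the threshold value is made up; all of the names and numbers here are assumptions, not Apple's implementation.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two face-template vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def unlock_decision(candidate, enrolled, threshold=0.95):
    """Grant access only if the candidate scores above the threshold."""
    return cosine_similarity(candidate, enrolled) >= threshold

enrolled  = [0.12, 0.80, 0.55, 0.31]  # stored template (illustrative values)
same_user = [0.13, 0.78, 0.56, 0.30]  # a fresh scan of the same face
stranger  = [0.90, 0.10, 0.05, 0.70]  # a different face

print(unlock_decision(same_user, enrolled))  # True
print(unlock_decision(stranger, enrolled))   # False
```

The threshold is the security dial: raising it makes false accepts rarer at the cost of more false rejects for the legitimate user.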

Since building a 3D model of the face is one of the keys to face recognition, the first hurdle is giving the phone the ability to accurately determine depth. One method used in the past is to estimate depth from RGB values, but this approach performs poorly in adverse lighting: in bright sunlight the sensor is washed out, and in low light pixel information is lost.

Earlier versions of the iPhone used stereo cameras to determine depth in an image. The phone would compare the two images from the stereo cameras to create a disparity map, and this would allow the device to determine the depth of objects in a photograph.
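The geometry behind a disparity map is simple triangulation: a point's depth is inversely proportional to how far it shifts between the two camera views, Z = f x B / d. The sketch below uses made-up focal length and baseline numbers purely for illustration.

```python
def depth_from_disparity(focal_length_px, baseline_mm, disparity_px):
    """Triangulated depth: Z = f * B / d.

    focal_length_px: camera focal length, in pixels
    baseline_mm:     distance between the two cameras, in millimetres
    disparity_px:    horizontal pixel shift of a point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_mm / disparity_px

# Illustrative numbers: 1000 px focal length, 10 mm camera baseline.
# A point that shifts 20 px between the two images is 500 mm (0.5 m) away.
print(depth_from_disparity(1000, 10, 20))  # 500.0
```

The formula also shows the weakness of stereo on a phone: the baseline B is tiny, so distant points produce disparities of a fraction of a pixel and depth precision falls off quickly.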

With the iPhone X, the phone uses a structured light camera to determine depth. A projector casts thousands of infrared dots onto the face, and the phone reads how the dot pattern deforms across the surface to triangulate a 3D depth map. This method determines depth reliably, and because it supplies its own infrared illumination, it also works well under adverse lighting.
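Structured light is the same triangulation as stereo, with the dot projector standing in for the second camera: each dot's depth follows from how far it lands from its position on a flat reference plane. Apple has not published the TrueDepth internals, so this is a simplified sketch with invented numbers.

```python
def structured_light_depth(dot_shifts_px, focal_px, baseline_mm):
    """Per-dot depth from the displacement of each projected infrared dot
    relative to its position on a calibration reference plane.

    Same Z = f * B / d triangulation as stereo, with the projector
    playing the role of the second camera (simplified model).
    """
    return [focal_px * baseline_mm / s for s in dot_shifts_px]

# Displacements of three dots, in pixels (made-up values).
shifts = [25.0, 20.0, 40.0]
print(structured_light_depth(shifts, 1000, 10))  # [400.0, 500.0, 250.0]
```

Because the infrared pattern is projected by the phone itself, the measurement does not depend on ambient light or surface texture the way passive stereo does.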

Neural Networks

The iPhone X needs to be able to reliably compare face templates for the facial recognition to work. It also needs to be able to do this quickly for it to be a service that people will want to use. In the past, this was a major challenge for face recognition systems. Thanks to advances in neural networks, these problems are being solved.

AlexNet was one such development that helped to demonstrate the capabilities of neural networks for image classification. With this convolutional neural network, researchers won the 2012 ImageNet challenge by a wide margin, achieving a level of accuracy in visual recognition that earlier methods could not match.
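The core operation in a network like AlexNet is the convolution: a small kernel slides over the image and sums the element-wise products, turning raw pixels into feature maps. The toy example below (a hypothetical vertical-edge kernel, not anything from AlexNet itself) shows the mechanic; like most deep learning libraries, it actually computes cross-correlation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution (cross-correlation, as in most CNN libraries):
    slide the kernel over the image and sum the element-wise products."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds where brightness changes left to right,
# one of the low-level features early CNN layers learn on their own.
image = np.array([[0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9]], dtype=float)
edge_kernel = np.array([[-1, 0, 1],
                        [-1, 0, 1],
                        [-1, 0, 1]], dtype=float)
print(conv2d(image, edge_kernel))  # every entry is 27: a strong edge response
```

In a real network these kernels are not hand-written; they are learned from data, and hundreds of them are stacked in layers, which is why the models demand so much compute.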

Hardware

Convolutional neural networks are going to be the key to bringing technologies like face recognition and augmented reality to the next level. Recognizing this, manufacturers are now racing to develop the processors that will power the deep neural networks of the future.

To power the iPhone X, Apple developed the custom A11 Bionic chip, which includes the company's first in-house GPU along with a dedicated "Neural Engine": a pair of processing cores tuned for the kinds of operations neural networks perform, including those behind the Face ID system.

These technologies have helped to make face recognition a reality on a handheld device, but they can be applied to much more. With advanced sensors and deep neural networks now in consumers' pockets, we are going to see developers finding new and inventive ways to integrate these technologies into their apps for secure, seamless experiences.


Written by Serena Garner, Guest contributor.
