FaceApp and the state of machine learning

A good rule of thumb when evaluating free online services: When the service is free, you are the product. How does FaceApp intend to monetise its free app?

Moshe Kranc
Updated: Jul 24, 2019 03:43:08 PM UTC
Image: Shutterstock

Unless you’ve been on a deserted island for the past month, you have been inundated on social media with amusing pictures of what your friends would look like as octogenarians or as members of a different gender, all generated by a viral application called FaceApp. You may even have downloaded the app and tried it yourself. FaceApp provides an excellent springboard for examining the state of machine learning in 2019, at the nexus of technology, virality and privacy.

The technology
In machine learning terminology, a machine that creates images, such as human faces, is a neural network called a generator. The generator takes in a noise vector, i.e., a list of random numbers, and uses it to generate an image. The noise vector ensures variety; otherwise the machine would generate the same face every time.
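To make this concrete, here is a minimal sketch of a generator in PyTorch. The architecture and sizes are illustrative assumptions, not FaceApp’s actual model: it simply maps a 100-dimensional noise vector to a small RGB image, and different random vectors yield different faces.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy generator: turns a random noise vector into a 64x64 RGB image."""
    def __init__(self, noise_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 3 * 64 * 64),
            nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

generator = Generator()
z = torch.randn(8, 100)    # the noise vector: 8 lists of random numbers
fake_faces = generator(z)  # 8 different (if untrained, noisy) 64x64 images
```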

Training a generator to create realistic faces this way would require millions of rounds of feedback: the generator generates an image, a human being critiques the result by indicating which parts of the image are not realistic, and the generator then adjusts its model based on the human feedback and tries again.

In 2014, Ian Goodfellow came up with a seminal idea: instead of requiring a human critic, why not pit two neural networks against each other? We call such a machine a Generative Adversarial Network (GAN). One network, the generator, acts like a forger, attempting to produce realistic images, while the other, the discriminator, acts like an art critic, providing feedback on whether a generated image looks real or flawed. Eventually, the generator produces an image that passes muster with the discriminator; i.e., we have generated a realistic image.
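A compact illustration of that adversarial game, under the same toy assumptions as above (random tensors stand in for a real face dataset, and the network sizes are arbitrary):

```python
import torch
import torch.nn as nn

noise_dim, img_dim = 100, 3 * 64 * 64
generator = nn.Sequential(nn.Linear(noise_dim, 256), nn.ReLU(),
                          nn.Linear(256, img_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))  # real-vs-fake score (logit)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.rand(32, img_dim) * 2 - 1  # placeholder for a batch of real face images
    fake = generator(torch.randn(32, noise_dim))

    # "Art critic" step: learn to score real images as 1 and generated ones as 0.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # "Forger" step: adjust the generator so its images get scored as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```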

A conditional GAN extends this concept by adding a categorical input to the GAN; e.g., generate only elderly faces. This requires training data that is labelled by age, so the GAN learns the characteristics of an elderly face. The generator uses this training data to generate faces of various age categories, and the discriminator uses it to judge whether a generated face matches the requested category.
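One common way to implement the conditioning, again sketched under illustrative assumptions, is to embed the category label and feed it to both networks alongside the noise or the image:

```python
import torch
import torch.nn as nn

noise_dim, num_ages, img_dim = 100, 5, 3 * 64 * 64  # e.g., 5 age brackets
age_embedding = nn.Embedding(num_ages, 16)

generator = nn.Sequential(nn.Linear(noise_dim + 16, 256), nn.ReLU(),
                          nn.Linear(256, img_dim), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(img_dim + 16, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

ages = torch.tensor([4, 4, 4, 4])          # condition: the "elderly" bracket
cond = age_embedding(ages)
z = torch.randn(4, noise_dim)
fake_elderly = generator(torch.cat([z, cond], dim=1))          # elderly faces only
score = discriminator(torch.cat([fake_elderly, cond], dim=1))  # judged given the age label
```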

A conditional GAN can generate realistic faces of a specific category, but FaceApp does something more: it generates categorised faces that look like a specific person. This requires an Identity-Preserving Conditional GAN, in which the generator starts not with a random noise vector, but with a stripped-down version of a specific input face image, unpolluted by category information. This “non-noisy” vector can then be fed to the conditional GAN as above.
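Schematically, and only as a sketch under assumed architectures, the random noise is replaced by the output of an encoder that compresses the uploaded photo into an identity vector:

```python
import torch
import torch.nn as nn

img_dim, id_dim, cond_dim = 3 * 64 * 64, 100, 16

encoder = nn.Sequential(nn.Linear(img_dim, 256), nn.ReLU(),
                        nn.Linear(256, id_dim))   # the "non-noisy" identity vector
generator = nn.Sequential(nn.Linear(id_dim + cond_dim, 256), nn.ReLU(),
                          nn.Linear(256, img_dim), nn.Tanh())

your_photo = torch.rand(1, img_dim) * 2 - 1       # placeholder for an uploaded selfie
identity = encoder(your_photo)                    # who you are, stripped of age cues
elderly_cond = torch.randn(1, cond_dim)           # stand-in for the "elderly" category embedding
aged_you = generator(torch.cat([identity, elderly_cond], dim=1))
```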

Barriers to entry
Everything described so far is well known in the machine learning community and is implemented in open-source algorithms that are available in easy-to-use software libraries. Presumably, FaceApp started with these algorithms and added some “secret sauce”: they seem to do a good job of identifying the central features of the input facial image, and they seem to have gathered a lot of training data categorised by conditions such as age and facial expression.

That’s one of the important takeaways from FaceApp: in 2019, there is no barrier to entry for machine learning. Any Python programmer can create a world-class application that harnesses the latest academic breakthrough algorithms without having to understand the details of those algorithms. This creates tremendous opportunities for innovation, e.g., in how you combine those algorithms, which problems you solve with them and where you find training data.
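As one illustration of how low the barrier is, a few lines of Python can download and run a pretrained face generator. This sketch assumes the progressive GAN (“PGAN”) model that Facebook Research has published on PyTorch Hub, with its documented buildNoiseData/test interface; it is not FaceApp’s code.

```python
import torch

# Download a GAN pretrained on celebrity faces from PyTorch Hub
# (facebookresearch/pytorch_GAN_zoo), as documented on the hub page.
use_gpu = torch.cuda.is_available()
model = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'PGAN',
                       model_name='celebAHQ-512', pretrained=True, useGPU=use_gpu)

# Generate four realistic-looking faces without writing or training any network yourself.
noise, _ = model.buildNoiseData(4)
with torch.no_grad():
    generated_images = model.test(noise)  # one tensor per face
```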

Privacy and virality
A good rule of thumb when evaluating free online services: When the service is free, you are the product. How does FaceApp intend to monetise its free app? Several possibilities have been suggested, from upselling more advanced photo-editing features, to creating a database of images that could be sold as training data to other companies.

Whatever the underlying business model, there is no doubt that you give up some of your privacy when you agree to FaceApp’s terms and conditions. You give FaceApp the right to access your camera and your photos, and you grant them rights in perpetuity to whatever images you upload. This information could fall into the wrong hands and be misused, e.g., to create a “deepfake” image that places you at a crime scene or fools the facial recognition software that is increasingly being used as a form of identification.

Despite these potential risks, millions of users have signed up and derive great pleasure from seeing how they might look in 30 years. This is another indication of where we are in 2019: we have become increasingly comfortable with giving up our privacy in return for free online services. Enterprises may be facing increasing privacy regulation such as GDPR and the California Consumer Privacy Act, but at home, our personal online data remains largely unprotected.

The author is Chief Technology Officer at Ness Digital Engineering.

The thoughts and opinions shared here are those of the author.

