
Let's Discuss - DeepFaceDrawing


Hey, readers! How are you?

Has it been a month already since my last blog? Anyway, I know you are waiting for my next blog and yeah, this time I have a really interesting one.

So, without further ado, let's discuss - DeepFaceDrawing: Deep Generation of Face Images from Sketches.

Do you remember The Magic Pencil from the old TV series, Shaka Laka Boom Boom? (How can anyone not?). In the series, the protagonist finds a Magic Pencil, which brings his drawing to reality. How awesome would it be to get such a pencil?

Well, a team of five researchers has come up with something similar, which converts our not-so-good, novice, free-hand sketches into pretty realistic images of people. Don't believe me? Take a look below.

To explain without going too technical, this DeepFaceDrawing system takes the user's sketch as input (it even accepts an incomplete one) and derives features from it. It then compares those features with the features of the trained sample data, finds the closest matches, applies the minute variations (guidance) you gave in your sketch on top of them, and voilà! You get your output image.

To go a little deeper, a set of individual autoencoders splits the input sketch into separate components (right eye, left eye, nose, mouth and the rest of the face), and each component is compared with the corresponding component samples. Together, those samples define a manifold, onto which the sketch features are projected; the refined features are then passed to the Feature Mapping Module. This module decodes the features and hands them to the Image Synthesis module, which produces the output image we need. A huge dataset of face sketch and image pairs is used to train the network. (The training images are frontal shots, without accessories like masks or glasses.)
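For the code-minded among you, here is a rough NumPy sketch of the core idea behind that projection step: each component's sketch feature gets replaced by a weighted combination of its K nearest neighbours among the trained sample features. This is only my simplified illustration of the idea, not the authors' implementation; the feature sizes, the sample data and the project_component helper are all made up for this blog.

```python
import numpy as np

def project_component(feature, samples, k=10):
    """Pull one component's feature vector toward the manifold spanned by
    its K nearest sample features (a simplified, illustrative take)."""
    dists = np.linalg.norm(samples - feature, axis=1)  # distance to every trained sample
    nearest = np.argsort(dists)[:k]                    # indices of the K closest samples
    weights = 1.0 / (dists[nearest] + 1e-8)            # closer samples get more say
    weights /= weights.sum()
    return weights @ samples[nearest]                  # weighted combination of neighbours

# Toy example: pretend 512-dimensional features for the "left eye" component,
# with 1000 made-up training samples.
rng = np.random.default_rng(0)
eye_samples = rng.normal(size=(1000, 512))
sketch_eye_feature = rng.normal(size=512)

refined_eye = project_component(sketch_eye_feature, eye_samples)
print(refined_eye.shape)  # (512,) - same shape, but nudged toward plausible eyes
```

Do the same for every component, map the refined features back to image space, and you have the gist of the pipeline.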

Also, by varying the relevant parameters, we can make the output image stay very close to our sketch or drift a fair bit away from it.
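You can picture that knob as a simple interpolation weight between your raw sketch feature and the refined (projected) feature from above. Again, this is just my hedged illustration; the variable names and the 0-to-1 weight below are mine, not the paper's exact formulation.

```python
import numpy as np

def blend(raw_feature, refined_feature, wb=0.5):
    """wb = 1 -> trust the sketch fully (most faithful to your drawing);
    wb = 0 -> trust the trained samples fully (most 'realistic')."""
    return wb * raw_feature + (1.0 - wb) * refined_feature

rng = np.random.default_rng(1)
raw, refined = rng.normal(size=512), rng.normal(size=512)
faithful = blend(raw, refined, wb=0.9)   # output sticks close to the sketch
realistic = blend(raw, refined, wb=0.1)  # output leans on the learned samples
```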

Another great thing about this is that, with a few intentional variations (guidance) in the sketch, we can shape facial attributes in whichever way we desire.

Do you want a cute woman winking at you? Here you go!

Do you want a guy with one eye & eyebrow smaller than the other? You got it.

Do you want to draw his not-so-identical twin? At your service.

(I've got to admit, it is 02:00 in the morning and I am exhausted to the core, but I have never had so much fun writing a blog as I am having right now.)

Applications:

  • Criminal investigation - This can be very helpful for sketching a suspect's or offender's face with the witnesses' help.

  • Face morphing - The system could be tweaked to morph an existing image with variations we provide.

  • Image colorization - By feeding in black & white images, we could derive realistic colorized equivalents.

  • Image sharpening - This could take a blurred image, or one with missing pixels, and generate a sharp, clear output.

In all of the above applications, the point to note is this - the larger the data sample, the better and more realistic the generated output.

Also, there are concerns about the potential misuse of this technology, such as creating fake identities or manipulating images for malicious purposes. It is crucial to use this responsibly and ethically to prevent any harm.

If you find this blog interesting, you should definitely check out the link below, where another set of scientists really outdid themselves and did something even better. I guarantee you won't be disappointed.

https://www.fastcompany.com/90355803/watch-ai-turn-bad-sketches-into-photorealistic-drawings-in-seconds

Image credits: https://arxiv.org/pdf/2006.01047.pdf

Till we discuss something new...

Cheers!!!

Dhayanshariff. A

An aspiring Data Scientist
