
Old Photo Restoration using Deep Learning


Imagine having the old, folded, and even torn pictures of your grandmother when she was 18 years old in high definition with zero artifacts.
This is called old photo restoration, and this paper opens a whole new avenue for addressing the problem with deep learning.

Technique overview

Vintage photo restoration is an important field of computer vision. We all want the old pictures of our loved ones to be clear and beautiful.

A damaged picture like this doesn’t honor the person, nor does it bring back as many good memories as a better-looking picture like this one does.

We all have at least one old photo that we care about and that looks worse and worse over time. This is why many are working to solve this issue.

In this paper, researchers from the City University of Hong Kong and Microsoft Research proposed a novel approach to this problem, called “Old Photo Restoration via Deep Latent Space Translation”.

They are using deep learning to restore old photos that suffered from severe degradation, just like this one.

There are many approaches currently available, and even commercial applications built for this task, but this new technique produces better results than all of them! Just compare what the best commercial applications, such as Remini or Meitu, can do on these pictures with the results the researchers got using their technique:

The main problem with the previous restoration techniques was that they could not generalize. They all rely on supervised learning, so they suffer from the domain gap between real old pictures and the ones synthesized for training. As you can see here:

Left: a synthesized old picture. Right: a real old picture.

As you can see in these images, there is a big difference between the synthesized old images and the real ones: the synthesized image is still in high definition even with the fake scratches and color changes, while the real old photo contains far fewer details.
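For intuition, here is a toy sketch of how one might synthesize “old” training photos from clean ones. The function name and the exact degradation recipe (average-pool blur, Gaussian noise, white scratch lines) are assumptions for illustration only, not the paper’s actual data pipeline:

```python
import torch
import torch.nn.functional as F

def synthesize_old_photo(clean, num_scratches=5):
    # clean: (B, 3, H, W) tensor with values in [0, 1]
    b, _, h, w = clean.shape
    # Mild blur via average pooling (a stand-in for a proper blur kernel).
    degraded = F.avg_pool2d(clean, 3, stride=1, padding=1)
    # Film grain / sensor noise.
    degraded = (degraded + 0.05 * torch.randn_like(degraded)).clamp(0, 1)
    # Fake scratches: random vertical white lines.
    for _ in range(num_scratches):
        col = torch.randint(0, w, (1,)).item()
        degraded[:, :, :, col] = 1.0
    return degraded

clean = torch.rand(1, 3, 64, 64)
old = synthesize_old_photo(clean)  # paired (clean, old) sample for training
```

Degradations like these are easy to generate in bulk, but as the comparison above shows, they never quite match how real photos age; that mismatch is exactly the domain gap.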

To bridge this gap, the researchers translate images into compact latent spaces using two variational autoencoders (VAEs): one for real and synthetic old photos, and one for clean photos. The translation between latent spaces is learned through synthetic paired data, but it generalizes well to real photos because this same domain gap is much smaller in such compact latent spaces.

The remaining domain gap between the latents of real and synthetic old photos is closed by jointly training an adversarial discriminator on the VAE’s latent codes.
You can see in this image that the new latent domains, “Zx” and “Zr”, are much closer to each other than the original real old pictures “R” and synthetic old pictures “X”.
The mapping that restores the degraded photos is then learned in this latent space.
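To make the latent alignment concrete, here is a minimal PyTorch sketch of the idea. Everything in it (the SmallVAE and LatentDiscriminator modules, layer sizes, placeholder data) is illustrative and not the authors’ implementation; it only shows how a discriminator on latent codes can pull the real-photo domain Zr toward the synthetic domain Zx:

```python
import torch
import torch.nn as nn

class SmallVAE(nn.Module):
    # Tiny convolutional VAE encoder: maps an image to a compact latent map.
    # Layer sizes are illustrative, not the paper's.
    def __init__(self, latent_ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_ch * 2, 4, stride=2, padding=1),  # -> mu, logvar
        )

    def encode(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return z

class LatentDiscriminator(nn.Module):
    # Classifies latent codes as synthetic (1) or real (0) old photos.
    def __init__(self, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(latent_ch, 64, 3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, z):
        return self.net(z)

vae_old = SmallVAE()           # one shared VAE encodes both R (real) and X (synthetic)
disc = LatentDiscriminator()
bce = nn.BCEWithLogitsLoss()

r = torch.randn(4, 3, 64, 64)  # placeholder batch of real old photos
x = torch.randn(4, 3, 64, 64)  # placeholder batch of synthetic old photos

z_r, z_x = vae_old.encode(r), vae_old.encode(x)

# Discriminator step: learn to tell the two latent domains apart.
pred_r, pred_x = disc(z_r.detach()), disc(z_x.detach())
d_loss = bce(pred_x, torch.ones_like(pred_x)) + bce(pred_r, torch.zeros_like(pred_r))

# Encoder step: fool the discriminator so real-photo latents become
# indistinguishable from synthetic ones, shrinking the Zr/Zx gap.
pred_r_enc = disc(z_r)
enc_adv_loss = bce(pred_r_enc, torch.ones_like(pred_r_enc))
```

Because the restoration mapping is trained on latents rather than pixels, anything that makes Zr and Zx overlap, like this adversarial term, directly improves generalization to real photos.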

Their latent restoration network is divided into branches that each solve a particular problem (a simplified sketch follows below).
There’s a global branch targeting structured defects, such as scratches and dust spots, with what they call a partial non-local block that considers the global context of the image.
A local branch then targets unstructured defects, like noise and blurriness, using several residual blocks.

Finally, the outputs of these branches are fused in the latent space, which improves the network’s ability to restore the old photo from all these defects.
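Here is a rough PyTorch sketch of what a mask-guided (“partial”) non-local block could look like. This is a simplified reading, not the authors’ code: the mask (assumed to be 1 inside scratches, 0 elsewhere) restricts attention so that defect pixels are reconstructed only from valid regions of the feature map:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialNonLocalBlock(nn.Module):
    # Mask-guided non-local attention: pixels inside the defect mask gather
    # features only from pixels outside it. Projections and sizes are illustrative.
    def __init__(self, ch):
        super().__init__()
        self.theta = nn.Conv2d(ch, ch // 2, 1)  # queries
        self.phi = nn.Conv2d(ch, ch // 2, 1)    # keys
        self.g = nn.Conv2d(ch, ch // 2, 1)      # values
        self.out = nn.Conv2d(ch // 2, ch, 1)

    def forward(self, feat, mask):
        # feat: (B, C, H, W); mask: (B, 1, H, W), 1 = defect, 0 = valid
        b, c, h, w = feat.shape
        q = self.theta(feat).flatten(2).transpose(1, 2)  # (B, HW, C/2)
        k = self.phi(feat).flatten(2)                    # (B, C/2, HW)
        v = self.g(feat).flatten(2).transpose(1, 2)      # (B, HW, C/2)
        attn = torch.bmm(q, k)                           # (B, HW, HW)
        # Forbid attending *to* defect pixels: only valid pixels act as sources.
        attn = attn.masked_fill(mask.flatten(2).bool(), float('-inf'))
        attn = F.softmax(attn, dim=-1)
        agg = torch.bmm(attn, v).transpose(1, 2).reshape(b, c // 2, h, w)
        agg = self.out(agg)
        # Fill the masked (scratched) regions with globally aggregated
        # context; keep the original features everywhere else.
        return feat * (1 - mask) + agg * mask

block = PartialNonLocalBlock(64)
feat = torch.randn(2, 64, 32, 32)                # latent feature map
mask = (torch.rand(2, 1, 32, 32) > 0.9).float()  # sparse scratch mask
restored = block(feat, mask)                     # same shape as feat
```

The intuition behind the split is that scratches need long-range context (a scratch crossing a face is best filled from the rest of the face), while noise and blur are local problems that plain residual blocks handle well.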

There’s one last step in order to produce even better results.
Since the old photos we want to restore are most likely pictures from our loved ones, they will always have a face in them.

They added a face refinement network that recovers fine details of faces in the old photos from the latent representation, called “z”, feeding the degraded face into multiple stages of the network.
This greatly enhances the perceptual quality of the faces, as you can see below.
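As a sketch of the general idea only (the paper’s actual face network is a far more elaborate generator; the FaceRefineNet module below, its layer sizes, and the crop sizes are all assumptions), one could condition a small residual network on both the restored face crop and its latent features:

```python
import torch
import torch.nn as nn

class FaceRefineNet(nn.Module):
    # Takes a face crop from the restored photo plus the latent features for
    # that region, and predicts a residual that adds back fine facial detail.
    def __init__(self, latent_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + latent_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, face_crop, z_crop):
        x = torch.cat([face_crop, z_crop], dim=1)
        return face_crop + self.net(x)  # residual refinement

# Usage: detect the face, crop the same region from the image and the latent
# map, refine it, then paste the result back into the full restored photo.
refiner = FaceRefineNet()
face = torch.randn(1, 3, 128, 128)  # restored but still soft face crop
z = torch.randn(1, 64, 128, 128)    # latent features upsampled to crop size
sharp_face = refiner(face, z)
```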

More results

Now that we’ve seen how it works, let’s look at some more results from this amazing new network…

Conclusion

Of course, this was a simple overview of this new paper.
I strongly recommend reading the paper linked below for more information.
