Learning Neural Style Transfer with PyTorch
What is Neural Style Transfer?
Neural Style Transfer is a technique for transferring the style of one image onto another. The idea is to create a new image that matches the content of a content image but adopts the style of a style image. The content image is typically an ordinary photograph, while the style image is an image with a distinctive visual style, such as a painting.
Neural Style Transfer involves three images: a style image, a content image, and a generated image. We take the style of the style image, apply it to the content of the content image, and produce a generated image that has the content of the content image but the style of the style image.
By content we mean the objects in the image and their arrangement.
By style we mean the colors, textures, and visual patterns.
Link to paper: https://arxiv.org/pdf/1508.06576.pdf
How?
To produce the generated image, we start with an image full of noise. We pass this noisy image through the network and record the activation maps after each convolution layer. The content image and the style image are passed through the network as well, and their activation maps are recorded too. From those activation maps we compute a content error and a style error. The total error is a linear combination of the two. We then adjust the pixels of the generated image so that the total error decreases.
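As a rough sketch of a single update step in PyTorch (the names here are placeholders for things defined later in this tutorial; alpha and beta are the weighting hyperparameters of the linear combination):

```python
def optimization_step(optimizer, content_error, style_error, alpha, beta):
    # Combine the two errors into the total error.
    total_error = alpha * content_error + beta * style_error
    optimizer.zero_grad()   # clear gradients from the previous step
    total_error.backward()  # backpropagate to the pixels of the generated image
    optimizer.step()        # update the pixels so the total error decreases
    return total_error
```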
Which model?
We will use a pre-trained VGG19 network to extract content from the content image and style from the style image. These pre-trained networks are trained on very large datasets over many output labels, and the literature has shown that they do an excellent job at detecting textures, styles, and other visual features. We will freeze the parameters of the trained network so they are not updated during optimization.
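Loading and freezing the network with torchvision looks like this (the pretrained=True argument is the classic API; newer torchvision versions use weights=models.VGG19_Weights.DEFAULT instead):

```python
import torchvision.models as models

# Load VGG19 pre-trained on ImageNet.
vgg = models.vgg19(pretrained=True)

# Freeze every parameter: only the pixels of the generated
# image will be optimized, never the network weights.
for param in vgg.parameters():
    param.requires_grad_(False)
```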
The trained network produces an activation map after each convolution layer. Lower-level activation maps stay close to a pixel-to-pixel representation, while higher-level activation maps capture high-level content: objects and their arrangement. So to extract the content, we look at the output of the higher-level activation maps.
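A common choice for the content error (the paper uses half the sum of squared differences, which the mean squared error matches up to a constant factor) is the MSE between the generated image's activation map and the content image's activation map at one chosen higher-level layer:

```python
import torch.nn.functional as F

def content_error(generated_map, content_map):
    # Both maps come from the same higher-level convolution layer,
    # one from the generated image and one from the content image.
    return F.mse_loss(generated_map, content_map)
```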
Challenge
The main challenge is capturing the style of the style image. For that we make a bold assumption: the activation maps produced after each convolution layer characterize the style of the style image. The idea is not to use the activation maps directly, but to build a Gram matrix from them.
Say we have an activation volume of depth 6 and spatial resolution 8x8, i.e., a 6x8x8 tensor holding six 8x8 activation maps. To build the Gram matrix we take pairs of activation maps: 0 and 1, 0 and 2, ..., 0 and 5, then 1 and 2, 1 and 3, ..., 1 and 5, and so on up to 4 and 5, plus each map paired with itself for the diagonal entries. For each pair we multiply the two maps element-wise and sum all the products. The sum goes into the Gram matrix at the position given by the two filter indices: the result for the pair 0 and 1 fills entry (0, 1). The matrix is symmetric, so entries (0, 1) and (1, 0) have the same value. Why the Gram matrix does such a fantastic job of capturing style is not really known, and the paper does not explain how the authors arrived at the idea. One intuition is that an entry of the Gram matrix is large where the responses of the two filters overlap, so a large entry means that pair of filters tends to be activated by the same inputs.
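A minimal sketch of this computation in PyTorch: flattening each activation map to a row and multiplying by the transpose computes all the element-wise-multiply-and-sum pairs at once. The normalization by the number of elements is a common practical choice, not something fixed by the paper.

```python
import torch

def gram_matrix(activation):
    # activation: a (C, H, W) tensor, e.g. (6, 8, 8) in the example above.
    c, h, w = activation.size()
    features = activation.view(c, h * w)   # one row per activation map
    gram = features @ features.t()         # entry (i, j) = sum of map_i * map_j
    return gram / (c * h * w)              # normalize by the number of elements
```

The style error can then be taken as, for example, the MSE between the Gram matrices of the generated image and the style image, summed over the chosen layers.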
As mentioned earlier, we would usually start with a white-noise image, but for this tutorial we will instead clone the content image and use that as the starting point for the generated image.
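Assuming the content image has already been loaded and preprocessed into a tensor called content_img, the starting point is just a trainable copy:

```python
import torch

# Clone the content image and make its pixels trainable; the optimizer
# will update this tensor directly.
generated = content_img.clone().requires_grad_(True)

# Optimize the image pixels, not the network weights. Adam is a common
# choice here; the learning rate is a tunable assumption.
optimizer = torch.optim.Adam([generated], lr=0.01)
```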
The torchvision VGG network consists of two parts, features and classifier. For this task we only need the features part.
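Continuing from the model loaded above:

```python
# Keep only the convolutional part of VGG19 and put it in eval mode.
features = vgg.features.eval()
```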
As discussed earlier, we obtain content and style from the activation maps of the CNN. For that we need to know the architecture of the network we are going to use. Since we are using a pre-trained VGG network, let's look at the architecture of VGG19.
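Printing the features module shows every layer along with its index:

```python
# Print the layer-by-layer architecture of VGG19's convolutional part.
print(features)
```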
VGG19 consists of five convolution blocks. Let conv1_1 denote the first convolution layer the image passes through, and conv5_1 the first convolution layer of the fifth block. Counting the max-pool and ReLU layers, these five layers sit at positions 0, 5, 10, 19, and 28 in the module list printed above. We will take the content from conv4_2 (position 21), which is high enough in the network to capture high-level content while staying invariant to exact pixel positions.
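Here is a sketch of collecting all the needed activation maps in a single forward pass. The layer indices follow the positions listed above; the function name is illustrative.

```python
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 ... conv5_1
CONTENT_LAYER = 21                  # conv4_2

def get_activations(model, image):
    # Run the image through the network layer by layer, saving the
    # activation maps at the style and content layers.
    style_maps, content_map = [], None
    x = image
    for idx, layer in enumerate(model):
        x = layer(x)
        if idx in STYLE_LAYERS:
            style_maps.append(x)
        if idx == CONTENT_LAYER:
            content_map = x
    return style_maps, content_map
```

Calling get_activations(features, generated), and likewise for the content and style images, gives everything needed to compute the two errors and run the optimization loop sketched earlier.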