lecture 04
Train a GAN with PyTorch 🔥
+
Python basics 03
SeAts APp SEAtS ApP SEaTS APP
after today's lecture, you will:
-- have trained a cool GAN from scratch
-- feel more confident about what's going on behind jumbo AI training code
-- be ready to train your own GAN!
our GAN for today is here, everything is prepared :)
last week we met Caffe, an elder deep learning library/framework
this lecture: PyTorch 🔥
this is the library/framework for today
first run:
run the cells one by one just to see the nice results
(no need to understand the code)
run the tensorboard cell before running the training cell
fun stuff: go to the Images tab in TensorBoard and smash that refresh button
your turn!
-- 1. save the notebook to your Drive or open it in playground mode
-- 2. under "Runtime" -> "Change runtime type", make sure "GPU" is selected
-- 3. run the cells one by one, EXCEPT:
-- 4. run the tensorboard cell before running the training cell
the code in this notebook would take me days to write,
today we are not aiming to understand every line of it,
instead we'll be working on conquering the fear of jumbo AI training code
one new piece of python:
class
takeaway msg:
1. __init__() is just like the initialiser in a Swift class
2. methods in a class always have "self" as the first argument
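a tiny made-up example (a hypothetical Student class, not from the notebook) showing both takeaways:

class Student:
    def __init__(self, name):           # runs when the object is created, just like Swift's init()
        self.name = name                 # store data on the object via self

    def greet(self):                     # self comes first, python passes it in automatically
        return "hi, " + self.name + "!"

s = Student("mick")                      # this line calls __init__ behind the scenes
print(s.greet())                         # prints: hi, mick!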
one recap on GANs
it is a Tom and Jerry game between the Generator and the Discriminator
Despite the names, G and D are (usually) nothing more special than smart combinations of layers we have already seen in MLPs and CNNs
the Generator gets its name because its layers are set to output a 2D matrix given a 1D vector;
the notion of "generation" partly comes from this dimension-expansion process
the Discriminator gets its name because its layers are set to output one single number (a probability)
given an input 2D matrix;
the notion of "discrimination" comes from its objective of making correct guesses about each image's source (real or generated)
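a shapes-only sketch of that idea (made-up layer sizes, not the notebook's exact models):

import torch
import torch.nn as nn

latent_dim = 100                          # hypothetical latent size

G = nn.Sequential(nn.Linear(latent_dim, 28 * 28), nn.Tanh())   # toy generator
D = nn.Sequential(nn.Linear(28 * 28, 1), nn.Sigmoid())         # toy discriminator

z = torch.randn(1, latent_dim)            # a 1D noise vector (batch of 1)
img = G(z).reshape(1, 28, 28)             # "generation": 1D vector -> 2D matrix
prob = D(img.reshape(1, -1))              # "discrimination": 2D matrix -> 1 probability
print(img.shape, prob.shape)              # torch.Size([1, 28, 28]) torch.Size([1, 1])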
some gentle introduction here
back to this...
we'll be working on conquering the fear of jumbo AI training code
part 1 : what are the PyTorch specificities and where are the models defined?
look at the import cell
all the imported PyTorch stuff is NOT for memorising,
we get to know more only when necessary and only after we start using it
second run at the notebook:
there are four classes defined:
- Which class corresponds to Generator?
- Which class corresponds to Discriminator?
- Which class assembles the Generator and Discriminator together?
Recap from MLOne: the training process big picture
1. Build the model: start from an initially guessed model with scaffold and muscle (random and imperfect)
->
2. Prepare the data: feed data into the model (an epoch means the model has seen every training data point once)
->
3. Forward pass: input the data to the model, let the model do its computations, and get the output
->
4. Loss calculation: measure how wrong this output is compared to the correct answer
->
5. Backward pass: use the loss, under some computation rules, to update the model's muscle (update rules specified by the optimiser)
->
back to step 2 and repeat until some satisfaction criterion is met
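here is what that loop looks like as a generic PyTorch sketch (a toy model, not the GAN notebook's exact code):

import torch
import torch.nn as nn

model = nn.Linear(2, 1)                                   # 1. build: random and imperfect
data = [(torch.randn(8, 2), torch.randn(8, 1))]           # 2. prepare: one toy batch
loss_fn = nn.MSELoss()
optimiser = torch.optim.SGD(model.parameters(), lr=0.1)   # the update rules live here

for epoch in range(5):                                    # repeat until satisfied
    for x, y in data:                                     # one full pass over data = one epoch
        out = model(x)                                    # 3. forward pass
        loss = loss_fn(out, y)                            # 4. loss: how wrong is the output?
        optimiser.zero_grad()                             # clear old gradients
        loss.backward()                                   # 5. backward pass: compute the updates
        optimiser.step()                                  #    apply them to the model's muscle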
conquering the fear of jumbo AI training code
- part 2 : what does each class do?
third run at the notebook:
there are four classes defined:
- MNISTDataModule
- Generator
- Discriminator
- GAN
Without looking into each class's detailed code, try to assign each class a role in terms of the steps in the big picture
(add a comment above each class definition, me demo)
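my own guess at those comments (the demo answer might differ):

# MNISTDataModule -> step 2: prepare the data (loads MNIST, serves batches)
# Generator       -> step 1: build the model (the 1D-noise -> 2D-image network)
# Discriminator   -> step 1: build the model (the 2D-image -> 1-probability network)
# GAN             -> steps 3-5: forward pass, loss calculation, backward pass + optimiser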
conquering the fear of jumbo AI training code
- part 3 : diving deeper: where is the model scaffold (aka the layers) defined?
there are lots of PyTorch-specific classes/functions, don't worry about having to know all of them
here are three important ones
- nn.Linear : fully connected layer (MLP)
- nn.LeakyReLU : activation function similar to ReLU
- nn.Sequential() : to stack layers together
fourth run at the notebook:
Just by looking at the Discriminator class
- self.model = nn.Sequential(...) is where the model scaffold is defined
can we draw out the NN diagram? (me demo)
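to make the diagram concrete, here is a plausible discriminator scaffold (the layer sizes are my guesses, check the notebook for the real ones):

import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.model = nn.Sequential(    # the scaffold lives here
            nn.Linear(28 * 28, 256),   # 784 pixels in -> 256 units
            nn.LeakyReLU(0.2),         # activation between layers
            nn.Linear(256, 1),         # 256 units -> 1 single number
            nn.Sigmoid(),              # squashed into a probability between 0 and 1
        )

    def forward(self, img):
        flat = img.view(img.size(0), -1)   # flatten the 2D image into a 1D vector
        return self.model(flat)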
bonus questions:
-- 1. what is the number of epochs in this notebook?
-- 2. what is the input size of the generator in this notebook?
-- 3. what is the input size of the discriminator?
new series episode 3
dadabots 🤖, for your inner metalhead
summary today
- class in Python
- recap on GANs and the training process
- the GAN training notebook
- model definition in PyTorch (woohoo, a connection from MLOne) 🥰
- next: we are going to WRITE a neural network from scratch
play time!
play around with the notebook, improve or degrade the generated image quality
here are some possibilities you could try (mainly in G and D), with one example sketched after this list
- change layer params in the discriminator
- add one or more layers to the discriminator
- change the block parameters in the generator
- try more epochs
- change (maybe lower) the latent dimension (the input of the generator)
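for instance, the second idea could look like this (a sketch with made-up sizes, to swap into the Discriminator's __init__):

import torch.nn as nn

deeper_d = nn.Sequential(
    nn.Linear(28 * 28, 512),
    nn.LeakyReLU(0.2),
    nn.Linear(512, 256),       # the extra layer: a stronger D (maybe too strong for G!)
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
    nn.Sigmoid(),
)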
shout out when things are broken! that will be fantastic because i like fixing things