Week 43: Deep Learning: Recurrent Neural Networks and other Deep Learning Methods. Principal Component Analysis
Contents
Plans for week 43
Reading Recommendations
Summary of Deep Learning Methods
CNNs in brief
Recurrent neural networks: Overarching view
Setup of an RNN
A simple example
An extrapolation example
Formatting the Data
Predicting New Points With A Trained Recurrent Neural Network
Other Things to Try
Other Types of Recurrent Neural Networks
Generative Models
Generative Adversarial Networks
Discriminator
Learning Process
More about the Learning Process
Additional References
Writing Our First Generative Adversarial Network
MNIST and GANs
Other Models
Training Step
Checkpoints
Exploring the Latent Space
Getting Results
Interpolating Between MNIST Digits
Basic ideas of the Principal Component Analysis (PCA)
Introducing the Covariance and Correlation functions
More on the covariance
Reminding ourselves about Linear Regression
Simple Example
The Correlation Matrix
Numpy Functionality
Correlation Matrix again
Using Pandas
And then the Franke Function
Links with the Design Matrix
Computing the Expectation Values
Towards the PCA theorem
More on the PCA Theorem
The Algorithm before the Theorem
Writing our own PCA code
Implementing it
First Step
Scaling
Centered Data
Exploring
Diagonalize the sample covariance matrix to obtain the principal components
Collecting all Steps
Classical PCA Theorem
The PCA Theorem
Geometric Interpretation and link with Singular Value Decomposition
PCA and scikit-learn
Back to the Cancer Data
Incremental PCA
Randomized PCA
Kernel PCA
Other techniques
Reading Recommendations
Goodfellow et al., chapter 10 on recurrent neural networks; chapters 11 and 12 on various practicalities around deep learning are also recommended.
Aurélien Géron, chapter 14 on RNNs.