
Sunday, April 24, 2022

2. Outline and Perspective

What we've done so far

- In-depth look at convolution

- CNN architecture

- Use structure of images

- Training + Inference


What we'll do now

- Now that we know the basic pattern, what are some novel architectures that do not follow it? (ResNet; see the sketch after this list)

1) We'll also look at VGG and Inception
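The defining departure in ResNet is the skip connection: each block learns a residual F(x) and outputs F(x) + x. Here is a minimal sketch of one such block in tf.keras; the layer sizes are arbitrary, and it assumes the input already has `filters` channels so the add is shape-compatible:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    # Assumes x already has `filters` channels so the add is shape-compatible.
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.BatchNormalization()(y)
    y = layers.Activation("relu")(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([y, shortcut])  # skip connection: output = F(x) + x
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(32, 32, 64))
outputs = residual_block(inputs)
model = tf.keras.Model(inputs, outputs)
```

Because the block only has to learn a residual correction on top of the identity, very deep stacks (50+ layers) remain trainable.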


Transfer Learning

- We've seen that even 3-5 layer nets can take a very long time to train

1) But now we'll be looking at 50-layer nets!

- Researchers today (in machine learning) are committed to openness, and by sharing their research they make it easy for you to reproduce state-of-the-art results at home

Random neural network => TRAINING (ImageNet) => Neural network "pre-trained" on ImageNet => FINE-TUNING (your data) => Trained neural network

1) No other field can achieve this: biology, medicine, physics, ... Try building a particle accelerator in your bedroom!

- We can make use of pre-trained weights via transfer learning, which significantly reduces training time since we now only need to do fine-tuning (see the sketch after this list)

1) It works even for completely new/unseen problems, e.g. take Inception trained on ImageNet and use it on totally new data
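As a concrete illustration of the pipeline above, here is a minimal transfer-learning sketch in tf.keras. It assumes TensorFlow 2.x and a hypothetical 10-class problem; the dataset variables are placeholders for your own data:

```python
import tensorflow as tf

# Load VGG16 pre-trained on ImageNet, without its classification head.
base = tf.keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze pre-trained weights; train only the new head

# Attach a new head for a hypothetical 10-class problem.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # your data here
```

Only the new head is trained at first; you can later unfreeze some of the base layers and continue fine-tuning at a low learning rate.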


Systems of CNNs

- We touched on this in Deep Learning pt 8: GANs & Variational Autoencoders

1) A system of 2 CNNs (a generator and a discriminator) can produce images of people who are not real (sketched below)
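A minimal sketch of such a two-CNN system in tf.keras, assuming 28x28 grayscale images and a 64-dimensional noise vector (the training loop is omitted; this only wires the two networks together):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Generator: noise vector -> 28x28 "image" (two transposed convolutions).
generator = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    layers.Dense(7 * 7 * 8, activation="relu"),
    layers.Reshape((7, 7, 8)),
    layers.Conv2DTranspose(16, 3, strides=2, padding="same", activation="relu"),
    layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid"),
])

# Discriminator: image -> probability that the image is real.
discriminator = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28, 1)),
    layers.Conv2D(16, 3, strides=2, activation="relu"),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])

# The "system": the discriminator scores the generator's output.
fake = generator(tf.random.normal((1, 64)))
print(discriminator(fake))  # untrained score for a fake image
```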


Object Detection

- SSD (Single Shot MultiBox Detector)

- Both faster and more accurate than previous state-of-the-art

- A prerequisite for any self-driving vehicle
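tf.keras ships no built-in SSD, so this sketch of running a pre-trained detector uses torchvision instead (assumes torchvision >= 0.13; the input image is a placeholder tensor):

```python
import torch
import torchvision

# Load an SSD300 detector pre-trained on COCO.
model = torchvision.models.detection.ssd300_vgg16(weights="DEFAULT")
model.eval()

image = torch.rand(3, 300, 300)  # placeholder RGB image, values in [0, 1]
with torch.no_grad():
    detections = model([image])[0]  # dict with "boxes", "labels", "scores"

print(detections["boxes"].shape, detections["scores"][:5])
```

Unlike a plain classifier, the detector returns a variable number of bounding boxes per image, each with a class label and a confidence score.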


