Using pre-trained models / their trained weights as a starting point to train a model for a different data set.
Use a pre-trained net and initialize the last layers with random weights.
Options: train only the new layers of the network, or even more layers.
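A minimal sketch of this in PyTorch (the small `nn.Sequential` net here is a toy stand-in for a real pre-trained model, not any particular one): keep the early layers and swap the last layer for a freshly, randomly initialized one sized for the new task.

```python
import torch
import torch.nn as nn

# Toy stand-in for a net pre-trained on task A (5 output classes).
pretrained = nn.Sequential(
    nn.Linear(10, 32), nn.ReLU(),  # early layers: learned low-level features
    nn.Linear(32, 5),              # last layer: task-A-specific output
)

# Transfer: keep the early layers, replace the last layer with a
# randomly initialized one for task B's 3 classes.
pretrained[-1] = nn.Linear(32, 3)

x = torch.randn(4, 10)
out = pretrained(x)  # shape: (4, 3)
```

From here one can train only the new layer, or also unfreeze and train some of the earlier layers.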
Prereqs:

* Task A and B have the same input x
* Much data exists for the existing model/task A, but only little data for the new task B
* Low-level features learned for A could be helpful for task B
* The pre-trained model needs to generalize

Another trick:

* Precompute the output of the frozen layers for all samples (saves computation time later)

For a larger set of samples:

* Only freeze the first layers, train the last few layers (and replace the softmax output layer)

For a large set of samples:

* Use the pre-trained weights as initialization, then train the whole network

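With plenty of task-B data, nothing needs to be frozen: the pre-trained weights only serve as the starting point and every layer is fine-tuned. A sketch (here `pretrained_state` is a toy stand-in for weights actually saved from task-A training):

```python
import torch
import torch.nn as nn

# Stand-in for weights saved after training on task A.
base = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
pretrained_state = base.state_dict()

# New model for task B: same architecture, initialized from the
# pre-trained weights; all layers remain trainable (no freezing).
model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
model.load_state_dict(pretrained_state)
opt = torch.optim.SGD(model.parameters(), lr=0.001)  # small lr for fine-tuning

# One fine-tuning step on task-B data.
x, y = torch.randn(8, 10), torch.randint(0, 3, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```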
===== Image-based NNs =====