data_mining:neural_network:autoencoder

====== Autoencoder ======
  * Unsupervised learning: feature extraction, generative models, compression, data reduction
  * Loss as evaluation metric
  * Difference to RBM: deterministic approach (not stochastic)

The encoder compresses the input to a few dimensions; the decoder maps the code back to the full dimensionality.
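As a minimal sketch (not from the original page) of this encode/compress, decode/reconstruct idea, the following trains a linear autoencoder with a 2-unit bottleneck by plain gradient descent on reconstruction error; the toy data, sizes, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions that actually lie on a
# 2-dimensional subspace, so a 2-unit bottleneck can capture them.
Z = rng.normal(size=(200, 2))
X = Z @ rng.normal(size=(2, 8))

# Linear autoencoder: encoder W_e (8 -> 2), decoder W_d (2 -> 8).
W_e = rng.normal(scale=0.1, size=(8, 2))
W_d = rng.normal(scale=0.1, size=(2, 8))

def mse(X, W_e, W_d):
    """Reconstruction loss: encode, decode, compare to the input."""
    return np.mean((X - X @ W_e @ W_d) ** 2)

lr = 0.05
initial_loss = mse(X, W_e, W_d)
for _ in range(1000):
    H = X @ W_e                       # codes (bottleneck activations)
    G = 2 * (H @ W_d - X) / X.size    # d(MSE)/d(reconstruction)
    grad_W_d = H.T @ G
    grad_W_e = X.T @ (G @ W_d.T)
    W_e -= lr * grad_W_e
    W_d -= lr * grad_W_d
final_loss = mse(X, W_e, W_d)
```

Because the loss is the reconstruction error itself, it doubles as the evaluation metric mentioned above: training succeeded when `final_loss` is well below `initial_loss`.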
===== Comparison with PCA =====
  
PCA:
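For reference in this comparison, a hedged sketch (not from the original page) of PCA reconstruction via SVD: projecting onto the top-k principal components gives the optimal linear rank-k reconstruction, which a k-unit linear autoencoder can at best match. Data and `k` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
Xc = X - X.mean(axis=0)              # PCA works on centered data

# Top-k principal components via SVD of the centered data matrix.
k = 2
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
components = Vt[:k]                  # rows are the top-k PCs

# Project onto the top-k PCs and map back: the best linear rank-k
# reconstruction in the mean-squared-error sense.
X_rec = Xc @ components.T @ components
pca_mse = np.mean((Xc - X_rec) ** 2)
total_var = np.mean(Xc ** 2)
```

An autoencoder with nonlinear activations can go beyond this, since it is not restricted to a linear projection of the data.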
===== Conclusion about pre-training =====
  
For data sets without a huge number of labeled cases: pre-training helps subsequent discriminative learning, especially if unlabeled extra data is available.
  
For very large, labeled data sets: not necessary, but if nets get much larger, pre-training becomes necessary again.
  • data_mining/neural_network/autoencoder.txt
  • Last modified: 2017/07/30 18:02
  • by phreazer