====== Autoencoder ======

  * Unsupervised learning: feature extraction, generative models, compression, data reduction
  * Reconstruction loss as evaluation metric
  * Difference to RBM: deterministic approach (not stochastic)
  * Encoder compresses the input to a few dimensions; decoder maps back to full dimensionality (see the sketch below)
  * Building block for deep belief networks
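
A minimal sketch of this encoder/decoder structure, assuming PyTorch; the layer sizes, data, and training loop are illustrative:

<code python>
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_input=784, n_code=32):
        super().__init__()
        # Encoder compresses the input to a few dimensions ...
        self.encoder = nn.Sequential(nn.Linear(n_input, n_code), nn.ReLU())
        # ... and the decoder maps the code back to full dimensionality.
        self.decoder = nn.Linear(n_code, n_input)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()        # reconstruction loss as evaluation metric

x = torch.rand(64, 784)       # unlabeled batch: no targets needed
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), x)   # the target is the input itself
    loss.backward()
    optimizer.step()
</code>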
===== Comparison with PCA =====

PCA:
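
With linear units and squared-error loss, an autoencoder with an M-dimensional code learns the same subspace as PCA with M components; its code axes may be rotated or scaled versions of the principal components. A minimal NumPy sketch of this relationship (the toy data and hyperparameters are assumptions):

<code python>
import numpy as np

rng = np.random.default_rng(0)
scales = np.array([5.0, 4.0, 3.0, 2.0, 1.0, 0.5, 0.5, 0.5, 0.5, 0.5])
X = rng.normal(size=(500, 10)) * scales   # anisotropic toy data
X -= X.mean(axis=0)                       # center, as PCA does
M = 3                                     # code size = number of components

# PCA reconstruction: project onto the top-M principal directions.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
X_pca = X @ Vt[:M].T @ Vt[:M]

# Linear autoencoder x -> W_enc x -> W_dec W_enc x, squared-error training.
W_enc = rng.normal(scale=0.1, size=(10, M))
W_dec = rng.normal(scale=0.1, size=(M, 10))
lr = 0.001
for _ in range(5000):
    code = X @ W_enc
    recon = code @ W_dec
    grad = 2.0 * (recon - X) / len(X)     # d(MSE)/d(recon)
    g_enc = X.T @ (grad @ W_dec.T)
    g_dec = code.T @ grad
    W_enc -= lr * g_enc
    W_dec -= lr * g_dec

# Errors should be close: same subspace, possibly rotated code axes.
print(np.mean((X - X_pca) ** 2), np.mean((X - X @ W_enc @ W_dec) ** 2))
</code>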
===== Shallow autoencoders for pre-training =====

Shallow autoencoders have just one layer; RBMs can be seen as shallow autoencoders.

Train the RBM with one-step contrastive divergence: this makes the reconstruction look like the data.
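
A minimal NumPy sketch of one-step contrastive divergence (CD-1) for a binary RBM; the sizes, data, and learning rate are illustrative assumptions:

<code python>
import numpy as np

rng = np.random.default_rng(0)
n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)                 # visible biases
b_h = np.zeros(n_hidden)                  # hidden biases
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

v0 = (rng.random(n_visible) < 0.5).astype(float)   # toy binary data vector
for _ in range(200):
    # Positive phase: hidden activations driven by the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(n_hidden) < p_h0).astype(float)
    # One Gibbs step back: reconstruct visibles, then hiddens again.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # CD-1 update: push reconstruction statistics toward data statistics,
    # i.e. make the reconstruction look like the data.
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b_v += lr * (v0 - p_v1)
    b_h += lr * (p_h0 - p_h1)
</code>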
===== Conclusion about pre-training =====

For data sets without a huge number of labeled cases, pre-training helps subsequent discriminative learning, especially if extra unlabeled data is available.

For very large labeled datasets it is not necessary, but if nets get much larger, pre-training becomes necessary again.
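
A minimal sketch of this workflow, assuming PyTorch: unsupervised pre-training on (possibly extra) unlabeled data, then supervised fine-tuning on the labeled cases. The architecture and data are placeholders:

<code python>
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU())
decoder = nn.Linear(128, 784)

# Phase 1: pre-train encoder + decoder as an autoencoder on unlabeled data.
x_unlabeled = torch.rand(256, 784)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()))
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(encoder(x_unlabeled)), x_unlabeled)
    loss.backward()
    opt.step()

# Phase 2: discard the decoder, add a classifier head, fine-tune on labels.
# Only the pre-trained encoder weights carry over to the discriminative net.
classifier = nn.Sequential(encoder, nn.Linear(128, 10))
x_labeled = torch.rand(64, 784)
y = torch.randint(0, 10, (64,))
opt = torch.optim.Adam(classifier.parameters())
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(classifier(x_labeled), y)
    loss.backward()
    opt.step()
</code>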