
data_mining:neural_network:autoencoder (last modified 2017/07/30 18:02 by phreazer)
====== Autoencoder ======

  * Unsupervised learning: feature extraction, generative models, compression, data reduction
  * Loss as evaluation metric
  * Difference from RBM: deterministic approach (not stochastic)
  * Encoder compresses to few dimensions; decoder maps back to full dimensionality
  * Building block for deep belief networks
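The encoder/decoder structure can be sketched with a tiny linear autoencoder in plain numpy; the data, dimensions, and learning rate below are illustrative assumptions, not from this page:

```python
import numpy as np

# Toy data: 200 samples lying near a 2-D subspace of R^8 (an assumption).
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 8))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 8))

# Encoder compresses 8 -> 2 dims; decoder maps 2 -> 8 back to full dimensionality.
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

lr = 0.01
for _ in range(2000):
    Z = X @ W_enc            # code (compressed representation)
    X_hat = Z @ W_dec        # reconstruction
    err = X_hat - X          # reconstruction error
    # Squared reconstruction error is the loss; update both maps by gradient descent.
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss = np.mean((X - (X @ W_enc) @ W_dec) ** 2)
```

After training, the 2-D code retains almost all of the variance of the 8-D input, so the reconstruction loss drops close to the noise floor.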
===== Comparison with PCA =====

  
PCA:
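The connection can be illustrated with a small sketch (the synthetic data and dimensions are assumptions): a linear autoencoder with squared-error loss learns the same optimal subspace as the top-k principal components, though its code axes need not be orthogonal or ordered.

```python
import numpy as np

# PCA reconstruction via SVD on centered data.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6)) @ rng.normal(size=(6, 6))
Xc = X - X.mean(axis=0)            # PCA requires centered data

U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
components = Vt[:k]                # top-k principal directions
code = Xc @ components.T           # "encoder": project onto the components
X_hat = code @ components          # "decoder": map back to data space

pca_err = np.mean((Xc - X_hat) ** 2)
# The reconstruction error equals the variance in the discarded directions.
discarded = np.sum(S[k:] ** 2) / Xc.size
```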
Reconstructing 32x32 color images from 256-bit codes.
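The compression implied by that example is easy to quantify: a 32x32 RGB image at 8 bits per channel takes 24,576 bits, so a 256-bit code is a 96x reduction.

```python
# A 32x32 color image stored at 8 bits per channel vs. a 256-bit code.
image_bits = 32 * 32 * 3 * 8    # 24576 bits
code_bits = 256
ratio = image_bits / code_bits  # 96x compression
```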
  
===== Shallow autoencoders for pre-training =====

Shallow autoencoders have just one hidden layer. RBMs can be seen as shallow autoencoders.

Train the RBM with one-step contrastive divergence, which makes the reconstruction look like the data.
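A minimal sketch of the one-step contrastive divergence (CD-1) update for a binary RBM, assuming illustrative sizes and random data, and omitting bias terms for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(2)
n_vis, n_hid, lr = 6, 3, 0.1                             # illustrative sizes
W = rng.normal(scale=0.1, size=(n_vis, n_hid))
v0 = (rng.random(size=(10, n_vis)) > 0.5).astype(float)  # a batch of binary "data"

# Positive phase: hidden probabilities given the data, then a binary sample.
h0_prob = sigmoid(v0 @ W)
h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)

# Negative phase: one reconstruction step (making it look like the data).
v1_prob = sigmoid(h0 @ W.T)
h1_prob = sigmoid(v1_prob @ W)

# CD-1 weight update: data correlations minus reconstruction correlations.
W += lr * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)
```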

===== Conclusion about pre-training =====

For data sets without a huge number of labeled cases, pre-training helps subsequent discriminative learning, especially if extra unlabeled data is available.
  
For very large labeled datasets it is not necessary, but if nets get much larger, pre-training will be necessary again.