data_mining:neural_network:tuning

  * Batch norm reduces the amount by which the hidden units' values shift around, making the inputs to later layers more stable
  * Slight regularization effect: it adds some noise, because each activation is normalized with the mean and variance of its own mini-batch
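The noise in the second bullet can be made concrete with a small sketch (NumPy; names and values are illustrative, not from these notes): the same activation value gets a different normalized value depending on which mini-batch it lands in.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(batch, eps=1e-5):
    # mini-batch normalization: the statistics depend on the whole batch
    return (batch - batch.mean()) / np.sqrt(batch.var() + eps)

x = 1.0  # the same activation value ...
batch_a = np.append(rng.normal(size=7), x)
batch_b = np.append(rng.normal(size=7), x)

# ... is normalized differently depending on its mini-batch peers,
# which acts like a small amount of injected noise (regularization)
print(normalize(batch_a)[-1], normalize(batch_b)[-1])
```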

==== Batch norm at test time ====

At test time there is no mini-batch: predictions are made one sample at a time, so the mini-batch mean and variance are unavailable. Instead, use estimates of μ and σ², typically exponentially weighted averages computed across mini-batches during training.
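A minimal NumPy sketch of this idea (function names are illustrative, not from these notes): during training, an exponentially weighted average of each mini-batch's μ and σ² is maintained; at test time a single sample is normalized with those running estimates.

```python
import numpy as np

def batchnorm_train_step(x, gamma, beta, running_mean, running_var,
                         momentum=0.9, eps=1e-5):
    """Normalize a mini-batch x (shape [batch, features]) with its own
    statistics, and update the running averages kept for test time."""
    mu, var = x.mean(axis=0), x.var(axis=0)
    out = gamma * (x - mu) / np.sqrt(var + eps) + beta
    # exponentially weighted averages across mini-batches
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return out, running_mean, running_var

def batchnorm_test(x, gamma, beta, running_mean, running_var, eps=1e-5):
    """Normalize a single sample with the running estimates instead of
    mini-batch statistics (which don't exist for one sample)."""
    return gamma * (x - running_mean) / np.sqrt(running_var + eps) + beta

# training: accumulate running statistics over many mini-batches
rng = np.random.default_rng(0)
gamma, beta = np.ones(3), np.zeros(3)
r_mean, r_var = np.zeros(3), np.ones(3)
for _ in range(200):
    batch = rng.normal(loc=5.0, scale=2.0, size=(32, 3))
    _, r_mean, r_var = batchnorm_train_step(batch, gamma, beta, r_mean, r_var)

# test time: one sample, normalized with the running averages
sample = rng.normal(loc=5.0, scale=2.0, size=3)
y = batchnorm_test(sample, gamma, beta, r_mean, r_var)
```

With enough training steps the running estimates converge towards the true feature statistics (here μ≈5, σ²≈4), so test-time normalization behaves consistently regardless of how samples would have been batched.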
 +
  • data_mining/neural_network/tuning.txt
  • Last modified: 2018/05/20 15:21
  • by phreazer