data_mining:neural_network:neurons

===== Rectified Linear Neurons =====
$z=b+\sum_{i} x_{i} w_{i}$

$y = \begin{cases} z, & \text{if } z > 0 \\ 0, & \text{otherwise}\end{cases} = \max(0,z)$

Above 0 it is linear; at and below 0 it is 0.

Faster computation, since the slope does not get very small or large.

Leaky ReLU:

$y = \max(0.01 z, z)$
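
A minimal NumPy sketch of the formulas above; the helper names (linear_input, relu, leaky_relu) and the sample inputs are illustrative, not from the original notes:

<code python>
import numpy as np

def linear_input(x, w, b):
    # z = b + sum_i x_i * w_i
    return b + np.dot(x, w)

def relu(z):
    # y = z if z > 0, else 0  (equivalently max(0, z))
    return np.maximum(0.0, z)

def leaky_relu(z, slope=0.01):
    # y = max(slope * z, z): small slope for z < 0 instead of a hard 0
    return np.maximum(slope * z, z)

z = linear_input(np.array([1.0, -2.0, 0.5]), np.array([0.3, 0.1, -0.4]), b=0.2)
print(relu(z), leaky_relu(z))
</code>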
  
  
  
Switching from Sigmoid to ReLU led to performance improvements (the slope of the Sigmoid gradually shrinks to zero).
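
A small sketch of that vanishing-slope argument, assuming NumPy; the helper names are illustrative:

<code python>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    # derivative sigma(z) * (1 - sigma(z)) shrinks towards 0 for large |z|
    s = sigmoid(z)
    return s * (1.0 - s)

def relu_grad(z):
    # derivative is 1 for z > 0 and 0 otherwise; no saturation for large positive z
    return (z > 0).astype(float)

z = np.array([-10.0, -1.0, 0.5, 10.0])
print(sigmoid_grad(z))  # roughly [4.5e-05, 0.197, 0.235, 4.5e-05]
print(relu_grad(z))     # [0. 0. 1. 1.]
</code>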

===== tanh =====
Works better than the Sigmoid function.

$y = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$

Outputs are centered around 0, which makes learning in the next layer easier.

Exception: the output layer, since the output should be in $[0,1]$.
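
A quick numerical check of the formula against NumPy's built-in np.tanh; the name tanh_manual is illustrative:

<code python>
import numpy as np

def tanh_manual(z):
    # y = (e^z - e^-z) / (e^z + e^-z)
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

z = np.linspace(-3, 3, 7)
print(np.allclose(tanh_manual(z), np.tanh(z)))  # True: matches the built-in
print(tanh_manual(z))  # values lie in (-1, 1), centered around 0
</code>
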
===== Softmax group =====
  