data_mining:neural_network:neurons

===== Rectified Linear Neurons =====

$z=b+\sum_{i} x_{i} w_{i}$

$y = \begin{cases} z, & \text{if } z > 0 \\ 0, & \text{otherwise}\end{cases} = \max(0,z)$

Above 0 it is linear; at or below 0 it is 0.

Faster computation, since the slope doesn't get very small/large.
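As a quick illustration (a minimal sketch with made-up inputs and weights, not from the original notes), the same computation in NumPy:

<code python>
import numpy as np

def relu_neuron(x, w, b):
    """Rectified linear neuron: z = b + sum_i x_i * w_i, y = max(0, z)."""
    z = b + np.dot(x, w)       # weighted input plus bias
    return np.maximum(0.0, z)  # linear above 0, exactly 0 otherwise

# Illustrative values only
x = np.array([0.5, -1.0, 2.0])
w = np.array([0.1, 0.4, -0.3])
print(relu_neuron(x, w, b=0.2))  # prints 0.0, since z = -0.75 < 0
</code>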
  
  
  
Switching from Sigmoid to ReLU led to a performance improvement (the slope of the Sigmoid gradually shrinks to zero).
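One way to see the slope issue: the Sigmoid derivative is

$\sigma'(z) = \sigma(z)\,(1-\sigma(z)) \leq 0.25$

and it goes to zero for large $|z|$, so gradients passed backwards through many Sigmoid units shrink, whereas the ReLU derivative is exactly $1$ for every $z > 0$.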
===== tanh =====

Works better than the Sigmoid function:

$y = \tanh(z) = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$

Outputs are centered around 0 (range $(-1,1)$), which keeps the data passed to the next layer roughly zero-centered.

Exception: the output layer, where the output should be in $[0,1]$.
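For reference, tanh is a rescaled Sigmoid $\sigma(z) = \frac{1}{1+e^{-z}}$:

$\tanh(z) = 2\,\sigma(2z) - 1$

It has the same S-shape but maps to $(-1,1)$ instead of $(0,1)$, which is why its outputs are zero-centered.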
===== Softmax group =====
  