data_mining:neural_network:neurons

===== Rectified Linear Neurons =====
$z=b+\sum_{i} x_{i} w_{i}$
  
$y = \begin{cases} z, & \text{if } z > 0 \\ 0, & \text{otherwise}\end{cases} = \max(0,z)$
  
Above 0 it is linear; at and below 0 it is 0.

Faster computation, since the slope doesn't get very small/large.

Leaky ReLU:

$y = \max(0.01z, z)$
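
A minimal NumPy sketch of both activations (an illustration with made-up example weights; the function names are mine, not from these notes):

<code python>
import numpy as np

def relu(z):
    # max(0, z): linear above 0, exactly 0 at and below 0
    return np.maximum(0.0, z)

def leaky_relu(z, leak=0.01):
    # max(0.01*z, z): keeps a small slope for z < 0
    return np.maximum(leak * z, z)

# Pre-activation of one neuron: z = b + sum_i x_i * w_i  (illustrative values)
x = np.array([1.0, -2.0, 0.5])
w = np.array([0.4, 0.3, -0.8])
b = 0.1
z = b + np.dot(x, w)   # = -0.5

print(relu(z))         # 0.0
print(leaky_relu(z))   # -0.005
</code>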
  
  
  
Switching from Sigmoid to ReLU led to performance improvements (the slope of the Sigmoid gradually shrinks to zero).
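
To see this shrinking slope numerically, here is a small check (an illustration, not from the original page) of the Sigmoid derivative $\sigma'(z) = \sigma(z)(1-\sigma(z))$ at growing $z$:

<code python>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for z in [0.0, 2.0, 5.0, 10.0]:
    s = sigmoid(z)
    # Sigmoid slope tends to 0 for large |z|; ReLU slope stays 1 for z > 0
    print(z, s * (1.0 - s))
</code>

The slopes come out as 0.25, ~0.10, ~0.0066 and ~0.000045, while the ReLU slope is a constant 1 for every $z > 0$.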

===== tanh =====

Works better than the Sigmoid function.

$y = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$

Centers the data around 0.

Exception: the output layer, since the output should be in [0,1].
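
A short sketch (illustrative; the explicit formula matches NumPy's built-in) showing the zero-centered output:

<code python>
import numpy as np

def tanh(z):
    # (e^z - e^-z) / (e^z + e^-z)
    return (np.exp(z) - np.exp(-z)) / (np.exp(z) + np.exp(-z))

z = np.linspace(-3.0, 3.0, 7)
print(tanh(z))                            # values in (-1, 1), symmetric around 0
print(np.allclose(tanh(z), np.tanh(z)))   # True: matches the built-in
</code>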

===== Softmax group =====

The logistic function's output is used for classification between two target classes (0/1). The softmax function is a generalized form of the logistic function that can output a **multiclass** categorical **probability distribution**.
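
A minimal softmax sketch (an assumed implementation; the max-subtraction is the usual trick for numerical stability and is not part of the notes above):

<code python>
import numpy as np

def softmax(z):
    # Subtracting max(z) leaves the result unchanged but avoids overflow
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

z = np.array([2.0, 1.0, 0.1])  # logits for three classes
p = softmax(z)
print(p)        # approx. [0.659 0.242 0.099] -- a probability distribution
print(p.sum())  # 1.0
</code>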
  
Derivatives: