data_mining:neural_network:neurons

  
===== Rectified Linear Neurons =====

Aka ReLU (Rectified Linear Unit).
  
$z = b + \sum_{i} x_{i} w_{i}$

$y = \begin{cases} z, & \text{if } z > 0 \\ 0, & \text{otherwise}\end{cases} = \max(0, z)$
  
Above 0 it is linear; at and below 0 the output is 0.

Faster computation, since the slope does not become very small or very large.

Leaky ReLU:

$y = \max(0.01 z, z)$
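
A minimal sketch (assuming NumPy; the input values are made up for illustration) of the plain and leaky variants defined above:

<code python>
import numpy as np

def relu(z):
    # max(0, z): linear above 0, exactly 0 otherwise
    return np.maximum(0.0, z)

def leaky_relu(z, slope=0.01):
    # max(0.01*z, z): small slope instead of a hard 0 for negative z
    return np.maximum(slope * z, z)

# Pre-activation z = b + sum_i x_i * w_i (example values)
x = np.array([1.0, -2.0, 0.5])
w = np.array([0.3, 0.1, -0.4])
b = 0.2
z = b + np.dot(x, w)

print(relu(z), leaky_relu(z))
</code>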
  
  
$\lim_{z \to \infty} \frac{1}{1+e^{-z}} = 1$
  
Switching from the sigmoid to ReLU led to a performance improvement (the slope of the sigmoid gradually shrinks to zero).
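
A small sketch (assuming NumPy; the sample points are arbitrary) of why this matters: the sigmoid derivative $\sigma(z)(1-\sigma(z))$ shrinks towards zero as $z$ grows, while the ReLU derivative stays at 1 for every positive input.

<code python>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)          # shrinks towards 0 for large |z|

def relu_grad(z):
    return (z > 0).astype(float)  # constant 1 for all z > 0

z = np.array([0.0, 2.0, 5.0, 10.0])
print(sigmoid_grad(z))  # roughly 0.25, 0.10, 0.0066, 0.000045
print(relu_grad(z))     # 0, 1, 1, 1
</code>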

===== tanh =====
Works better than the sigmoid function.

$y = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$

Centers the data around 0.

Exception: the output layer, since the output should be in [0,1].
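
A short sketch (assuming NumPy; the input range is arbitrary) of the zero-centering point: over a symmetric input range, tanh activations average to roughly 0, while sigmoid activations average to roughly 0.5.

<code python>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-3.0, 3.0, 7)  # symmetric sample points
print(np.tanh(z).mean())       # ~0.0: outputs centred around 0
print(sigmoid(z).mean())       # ~0.5: outputs not zero-centred
</code>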
===== Softmax group =====

The logistic function's output is used for classification between two target classes (0/1). The softmax function is a generalization of the logistic function that outputs a **multiclass** categorical **probability distribution**.
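
A minimal sketch (assuming NumPy; the pre-activations are example values) of a softmax group. Subtracting the maximum before exponentiating is a common numerical-stability trick and does not change the result.

<code python>
import numpy as np

def softmax(z):
    # y_i = e^{z_i} / sum_j e^{z_j}, shifted by max(z) for numerical stability
    e = np.exp(z - np.max(z))
    return e / np.sum(e)

z = np.array([2.0, 1.0, 0.1])  # pre-activations of the group
y = softmax(z)
print(y, y.sum())              # a categorical distribution, sums to 1
</code>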
  
Derivatives:
  
Also possible for rectified linear units: the output is treated as the Poisson rate for spikes.
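
A sketch (assuming NumPy; the pre-activations are example values) of that interpretation: the rectified output is used as the rate of a Poisson distribution from which spike counts are sampled.

<code python>
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

z = np.array([-1.0, 0.5, 2.0])  # example pre-activations
rate = relu(z)                  # non-negative output used as the Poisson rate
spikes = rng.poisson(rate)      # sampled spike counts per neuron
print(rate, spikes)
</code>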