===== Rectified Linear Neurons =====

Aka ReLU (Rectified Linear Unit).

$z=b+\sum_{i} x_{i} w_{i}$

$y = \begin{cases} z, & \text{if } z > 0 \\ 0, & \text{otherwise}\end{cases}$

Above 0 it is linear; at and below 0 the output is 0.

Faster computation.

Leaky ReLU:

$y = \max(0.01 z, z)$
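
A minimal NumPy sketch of the two activations above; the function names, the example inputs and the slope parameter are illustrative assumptions, only the formulas come from this page:

<code python>
import numpy as np

def relu(z):
    # y = z if z > 0, else 0
    return np.maximum(0.0, z)

def leaky_relu(z, slope=0.01):
    # y = max(slope * z, z): keeps a small, non-zero slope for z < 0
    return np.maximum(slope * z, z)

z = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(z))        # [0.  0.  0.  1.5]
print(leaky_relu(z))  # [-0.02  -0.005  0.  1.5]
</code>
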
$\lim_{z \to \pm\infty} \frac{dy}{dz} = 0$
Switching from Sigmoid to ReLU leads to a performance improvement (the slope of the Sigmoid gradually shrinks to zero).
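
A small sketch (the helper names and sample values are assumptions, not from these notes) comparing the Sigmoid slope with the ReLU slope; the Sigmoid gradient shrinks towards zero for large z, while the ReLU gradient stays at 1 for z > 0:

<code python>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_slope(z):
    s = sigmoid(z)
    return s * (1.0 - s)          # dy/dz of the Sigmoid

def relu_slope(z):
    return 1.0 if z > 0 else 0.0  # dy/dz of the ReLU (0 used at z = 0)

for z in [0.0, 2.0, 5.0, 10.0]:
    print(z, round(sigmoid_slope(z), 6), relu_slope(z))
# Sigmoid slope: 0.25, 0.105, 0.0066, 0.000045 -> vanishes; ReLU slope stays 1 for z > 0
</code>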

===== tanh =====

Works better than the Sigmoid function.

$y = \frac{e^{z}-e^{-z}}{e^{z}+e^{-z}}$

Centers the data around 0.

Exception: the output layer, since the output should be in $[0,1]$.
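
A quick sketch (illustrative values) showing that tanh outputs are centered around 0, while Sigmoid outputs are centered around 0.5:

<code python>
import numpy as np

z = np.linspace(-3.0, 3.0, 7)

tanh_out = np.tanh(z)                  # equals (e^z - e^-z) / (e^z + e^-z)
sigmoid_out = 1.0 / (1.0 + np.exp(-z))

print(np.round(tanh_out, 3))     # values in (-1, 1), mean close to 0
print(np.round(sigmoid_out, 3))  # values in (0, 1), mean close to 0.5
</code>
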
===== Softmax group =====

The logistic function's output is used for classification between two target classes (0/1). The Softmax function is a generalized form of the logistic function that can output a **multiclass** categorical **probability distribution**.
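
A minimal NumPy sketch of a Softmax over hypothetical scores for three classes (subtracting the max is only for numerical stability and does not change the result):

<code python>
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # subtract max for numerical stability
    return e / e.sum()

z = np.array([2.0, 1.0, 0.1])   # hypothetical scores for 3 classes
p = softmax(z)
print(np.round(p, 3))           # roughly [0.659 0.242 0.099]
print(round(p.sum(), 6))        # sums to 1 -> a categorical distribution
</code>
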
Derivatives:
Output 0 or 1.

Also possible for rectified linear units: output is treated ...