Revisions: created 2017/01/15 18:15 by phreazer; current revision 2019/10/26 12:09 by phreazer (links adapted because of a move operation).
====== Types of NNs ======
===== Perceptron =====

See [[data_mining:
===== Feed-forward NN =====

The first layer is the input, the last layer the output, with hidden layers in between.

The activities of the neurons in each layer are a non-linear function of the activities in the layer below.
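A minimal sketch of this forward pass in plain Python; the network shape and all weight values below are made-up illustrative assumptions, not from the source:

```python
import math

def sigmoid(x):
    # One common choice of non-linearity; the source does not fix a specific one.
    return 1.0 / (1.0 + math.exp(-x))

def layer(activities, weights, biases):
    """Each neuron's activity is a non-linear function of the
    weighted activities in the layer below."""
    return [sigmoid(sum(w * a for w, a in zip(ws, activities)) + b)
            for ws, b in zip(weights, biases)]

def feed_forward(x, network):
    """network: list of (weights, biases) per layer;
    first layer input, last layer output."""
    for weights, biases in network:
        x = layer(x, weights, biases)
    return x

# Tiny 2-2-1 net with arbitrary illustrative weights.
net = [
    ([[0.5, -0.6], [0.1, 0.8]], [0.0, 0.1]),  # hidden layer
    ([[1.0, -1.0]], [0.0]),                   # output layer
]
out = feed_forward([1.0, 0.0], net)
```

With a sigmoid non-linearity every activity stays in (0, 1), so the output is a bounded, smooth function of the input.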
===== Recurrent NN =====

  * Directed cycles (you can get back to the neuron where you started).
  * Harder to train

Natural for modeling sequential data:
  * Equivalent to very deep nets with one hidden layer per time slice, except that they use the same weights at every time slice and get input at every time slice.
  * Can remember info in the hidden state for a long time.

See [[data_mining:
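The unrolled view described above can be sketched in plain Python; the weight matrices `w_hh` and `w_xh` are illustrative assumptions, the point is only that the same weights are reused at every time slice while the hidden state carries information forward:

```python
import math

def rnn_step(h, x, w_hh, w_xh, b):
    """One time slice: new hidden state from old hidden state and current input."""
    return [math.tanh(sum(w * hj for w, hj in zip(w_hh[i], h))
                      + sum(w * xj for w, xj in zip(w_xh[i], x))
                      + b[i])
            for i in range(len(h))]

def run_rnn(xs, w_hh, w_xh, b):
    h = [0.0] * len(w_hh)  # hidden state, remembered across time slices
    for x in xs:           # one "layer" per time slice, input at every slice,
        h = rnn_step(h, x, w_hh, w_xh, b)  # same weights reused each time
    return h

# Arbitrary illustrative weights for a 2-unit hidden state, 1-d input.
w_hh = [[0.5, -0.3], [0.2, 0.4]]
w_xh = [[1.0], [-1.0]]
b = [0.0, 0.1]
h_final = run_rnn([[1.0], [0.0], [1.0]], w_hh, w_xh, b)
```

Unrolling the loop over `xs` gives exactly the "very deep net with one hidden layer per time slice" picture, with tied weights.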
===== Symmetrically connected NN =====

Like an RNN, but the connections between units are symmetrical (same weights in both directions).

  * Easier to analyze
  * Restricted: cannot model cycles
  * "
===== Convolutional NN =====

See [[data_mining: