Types of NNs
Perceptron
See Perceptron
Feed-forward NN
First layer is input, last layer is output, hidden layers in between. Deep network: >1 hidden layer.
Compute a series of transformations that change the similarities between cases (e.g. the same words spoken in different voices become more similar): the activities of the neurons in each layer are a non-linear function of the activities in the layer below.
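A minimal NumPy sketch of such a forward pass (the layer sizes, the `tanh` non-linearity, and all names are illustrative assumptions, not from the notes): each layer's activity is a non-linear function of the activities in the layer below, and with two hidden layers the net counts as deep.

```python
import numpy as np

def forward(x, weights, biases):
    """x: input vector; weights/biases: one pair per layer above the input."""
    activity = x
    for W, b in zip(weights, biases):
        activity = np.tanh(W @ activity + b)  # non-linear function of the layer below
    return activity

rng = np.random.default_rng(0)
# Input layer of 4 units, two hidden layers (deep: >1 hidden layer), output of 2.
sizes = [4, 8, 8, 2]
weights = [rng.normal(0, 0.1, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(forward(rng.normal(size=4), weights, biases))
```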
Recurrent NN
- Directed cycles (following the connections, you can get back to the neuron where you started).
- Harder to train
Natural for modeling sequential data:
- Equivalent to very deep nets with one hidden layer per time slice, except that they use the same weights at every time slice and get input at every time slice.
- Can remember info in the hidden state for a long time.
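A minimal NumPy sketch of an RNN forward pass under that framing (names and the `tanh` non-linearity are illustrative assumptions): the same weight matrices are reused at every time slice, and the hidden state `h` is what can carry information through time.

```python
import numpy as np

def rnn_forward(xs, W_xh, W_hh, b):
    h = np.zeros(W_hh.shape[0])               # hidden state, persists across time
    for x in xs:                              # one "time slice" per input
        h = np.tanh(W_xh @ x + W_hh @ h + b)  # same weights at every slice
    return h

rng = np.random.default_rng(1)
n_in, n_hid = 3, 5
W_xh = rng.normal(0, 0.1, (n_hid, n_in))      # input-to-hidden weights
W_hh = rng.normal(0, 0.1, (n_hid, n_hid))     # hidden-to-hidden (the cycle)
xs = rng.normal(size=(10, n_in))              # a sequence of 10 time slices
print(rnn_forward(xs, W_xh, W_hh, np.zeros(n_hid)))
```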
Symmetrically connected NN
Like an RNN, but the connections between units are symmetrical (same weight in both directions).
- Easier to analyze
- More restricted in what they can do (e.g. cannot model cycles)
- “Hopfield nets” if they have no hidden units.
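A minimal NumPy sketch of a Hopfield net (the Hebbian storage rule and asynchronous updates are standard textbook choices; the names here are illustrative): the weight matrix is symmetric by construction and there are no hidden units.

```python
import numpy as np

def train(patterns):
    """Store +/-1 patterns with the Hebbian outer-product rule."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n   # symmetric: same weight in both directions
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def recall(W, state, steps=20):
    """Asynchronous updates: flip one unit at a time toward lower energy."""
    state = state.copy()
    for _ in range(steps):
        i = np.random.randint(len(state))
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

patterns = np.array([[1, -1, 1, -1, 1, -1],
                     [1, 1, 1, -1, -1, -1]])
W = train(patterns)
noisy = np.array([1, -1, 1, -1, 1, 1])   # corrupted copy of pattern 0
print(recall(W, noisy))
```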