Types of NNs

Feed-forward NNs: first layer is the input, last layer the output, hidden layers in between. Deep network: more than one hidden layer.

They compute a series of transformations that change the similarity between input cases (e.g. different voices, same words): the activities of the neurons in each layer are a non-linear function of the activities in the layer below.
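
A rough sketch of this (not part of the original notes): each layer's activity is a non-linearity applied to a weighted sum of the layer below. The layer sizes, the ReLU non-linearity and the random weights are assumptions for the example.

<code python>
import numpy as np

def relu(x):
    # element-wise non-linearity
    return np.maximum(0.0, x)

def feed_forward(x, weights, biases):
    # each layer's activity is a non-linear function of the activities in the layer below
    activity = x
    for W, b in zip(weights, biases):
        activity = relu(W @ activity + b)
    return activity

# assumed example: 3 inputs -> two hidden layers of 4 units -> 2 outputs (deep: >1 hidden layer)
rng = np.random.default_rng(0)
sizes = [3, 4, 4, 2]
weights = [0.1 * rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]
print(feed_forward(rng.standard_normal(3), weights, biases))
</code>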

Recurrent NNs (RNNs):
- Directed cycles in the connection graph (you can get back to the neuron where you started).
- Harder to train.

Natural for modelling sequential data:
- Equivalent to very deep nets with one hidden layer per time slice, except that they use the same weights at every time slice and get input at every time slice.
- Can remember information in the hidden state for a long time.
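
A minimal sketch of this unrolled view, assuming a simple tanh RNN with made-up sizes and weights: the same matrices W_xh and W_hh are reused at every time slice, and the hidden state h carries information across slices.

<code python>
import numpy as np

def rnn_forward(inputs, W_xh, W_hh, b_h):
    # same weights W_xh, W_hh reused at every time slice; h is the hidden state
    h = np.zeros(W_hh.shape[0])
    states = []
    for x in inputs:                               # one "layer" per time slice
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)     # hidden state carries past information
        states.append(h)
    return states

# assumed sizes and random weights, just to run the sketch
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
W_xh = 0.1 * rng.standard_normal((n_hidden, n_in))
W_hh = 0.1 * rng.standard_normal((n_hidden, n_hidden))
sequence = [rng.standard_normal(n_in) for _ in range(4)]
print(rnn_forward(sequence, W_xh, W_hh, np.zeros(n_hidden))[-1])
</code>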

Symmetrically connected networks: like RNNs, but the connections between units are symmetrical (same weight in both directions).

- Easier to analyze than RNNs.
- More restricted: cannot model cycles.
- Called “Hopfield nets” if they have no hidden layer.
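
A minimal sketch of a Hopfield net, i.e. a symmetrically connected net with no hidden units; the Hebbian storage rule and the asynchronous sign updates below are standard choices, assumed here rather than taken from the notes.

<code python>
import numpy as np

def train_hopfield(patterns):
    # Hebbian storage: sum of outer products of +/-1 patterns; symmetric weights, no self-connections
    n = patterns[0].size
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, sweeps=5):
    # asynchronous updates: each unit takes the sign of its total input
    state = state.copy()
    for _ in range(sweeps):
        for i in range(state.size):
            state[i] = 1.0 if W[i] @ state >= 0 else -1.0
    return state

pattern = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0])
W = train_hopfield([pattern])
noisy = pattern.copy()
noisy[0] = -1.0                      # corrupt one unit
print(recall(W, noisy))              # settles back to the stored pattern
</code>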
