


Types of NNs

Feed-forward neural networks

The first layer is the input, the last layer is the output, and the hidden layers lie in between. A deep network has more than one hidden layer.

They compute transformations that change the similarity between cases (e.g. different voices saying the same words): the activities of the neurons in each layer are a non-linear function of the activities in the layer below.
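
To make this concrete, here is a minimal sketch (in Python/NumPy, not from the original notes) of a forward pass through a small deep feed-forward net; the layer sizes and the tanh non-linearity are illustrative assumptions.

<code python>
# Sketch: each layer's activity is a non-linear function of the layer below.
# Layer sizes and the tanh non-linearity are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

layer_sizes = [4, 8, 8, 3]          # input, two hidden layers (deep), output
weights = [rng.normal(0, 0.1, (n_in, n_out))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

def forward(x):
    """Propagate input x upward; each layer applies a non-linear
    function (tanh) to a weighted sum of the activities below."""
    activity = x
    for W, b in zip(weights, biases):
        activity = np.tanh(activity @ W + b)
    return activity

x = rng.normal(size=4)              # one input case
print(forward(x))                   # activity of the output layer
</code>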

Recurrent neural networks

  • Have directed cycles in their connection graph (you can get back to the neuron where you started).
  • Harder to train than feed-forward networks.

Natural for modeling sequential data:

  • Equivalent to very deep nets with one hidden layer per time slice, except that they use the same weights at every time slice and get input at every time slice (see the sketch below).
  • Can remember information in their hidden state for a long time.

See https://wiki.movedesign.de/doku.php?id=data_mining:neural_network:sequence_learning#recurrent_neural_networks
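
As a sketch of the weight sharing described above (in Python/NumPy; the sizes and names such as W_rec are illustrative assumptions, not from the linked page), a recurrent net can be unrolled over time like this:

<code python>
# Sketch: the SAME weight matrices are reused at every time slice, and the
# hidden state h carries information forward from slice to slice.
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden, n_out = 3, 5, 2
W_in  = rng.normal(0, 0.1, (n_in, n_hidden))      # input -> hidden
W_rec = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden -> hidden (the cycle)
W_out = rng.normal(0, 0.1, (n_hidden, n_out))     # hidden -> output

def run_rnn(inputs):
    """inputs: array of shape (T, n_in); one input vector per time slice."""
    h = np.zeros(n_hidden)                        # hidden state (the "memory")
    outputs = []
    for x_t in inputs:                            # one step per time slice,
        h = np.tanh(x_t @ W_in + h @ W_rec)       # same weights every slice
        outputs.append(h @ W_out)
    return np.array(outputs)

sequence = rng.normal(size=(6, n_in))             # a length-6 input sequence
print(run_rnn(sequence).shape)                    # (6, 2): one output per slice
</code>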

Symmetrically connected networks

Like recurrent networks, but the connections between units are symmetrical (the same weight in both directions).

  • Easier to analyze
  • More restricted: they cannot model cycles.
  • Called “Hopfield nets” if they have no hidden layer.
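
For illustration, a minimal Hopfield-net sketch in Python/NumPy (the stored patterns and the asynchronous update schedule are assumptions; only the symmetric weights and the absence of a hidden layer come from the notes above):

<code python>
# Sketch of a Hopfield net: symmetric weights, no hidden layer.
# Binary (+1/-1) patterns are stored with the Hebbian outer-product rule
# and recalled by repeatedly updating one unit at a time.
import numpy as np

patterns = np.array([[ 1, -1,  1, -1,  1, -1],
                     [ 1,  1,  1, -1, -1, -1]])   # illustrative stored patterns
n = patterns.shape[1]

# Hebbian storage: W is symmetric (same weight in both directions), no self-loops.
W = (patterns.T @ patterns).astype(float)
np.fill_diagonal(W, 0.0)

def energy(s):
    """Hopfield energy; each unit update keeps it equal or lowers it."""
    return -0.5 * s @ W @ s

def recall(s, steps=20, seed=2):
    s = s.copy()
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        i = rng.integers(n)                       # pick one unit (asynchronous update)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

noisy = np.array([ 1, -1, -1, -1,  1, -1])        # corrupted version of pattern 0
restored = recall(noisy)
print(restored, energy(restored))
</code>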