data_mining:neural_network:loss_functions

  
More classes, higher entropy.
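
For example, a uniform distribution over $C$ classes has entropy $H = -\sum_{c=1}^C \frac{1}{C} \log \frac{1}{C} = \log C$, which grows with the number of classes.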

Recap cross-entropy:

$H_p(q) = - \sum_{c=1}^C q(y_c) \log(p(y_c))$, where $q$ is the true distribution and $p$ is the other (e.g. predicted) distribution.

When $p = q$, then $H_p(q) = H(q)$; in general $H_p(q) \geq H(q)$ (Gibbs' inequality), i.e. cross-entropy $\geq$ entropy.
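
A minimal NumPy sketch of these two facts; the distributions $q$ and $p$ below are made-up illustrative values, not from the original notes:

<code python>
import numpy as np

# Illustrative distributions over C = 3 classes (values chosen arbitrarily).
q = np.array([0.7, 0.2, 0.1])   # true distribution q
p = np.array([0.5, 0.3, 0.2])   # other distribution p (e.g. model predictions)

def entropy(q):
    # H(q) = -sum_c q(y_c) * log(q(y_c))
    return -np.sum(q * np.log(q))

def cross_entropy(q, p):
    # H_p(q) = -sum_c q(y_c) * log(p(y_c))
    return -np.sum(q * np.log(p))

print(entropy(q))           # H(q)
print(cross_entropy(q, p))  # H_p(q), always >= H(q)
print(cross_entropy(q, q))  # equals H(q) when p == q
</code>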

Recap KL divergence:

It is the difference between the cross-entropy and the entropy.

$D_{KL}(q \| p) = H_p(q) - H(q) = \sum_{c=1}^C q(y_c) \left( \log(q(y_c)) - \log(p(y_c)) \right)$
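
The same sketch extended to KL divergence (again with made-up distributions), checking numerically that $D_{KL}(q \| p)$ equals cross-entropy minus entropy:

<code python>
import numpy as np

q = np.array([0.7, 0.2, 0.1])   # true distribution q
p = np.array([0.5, 0.3, 0.2])   # other distribution p

def kl_divergence(q, p):
    # D_KL(q || p) = sum_c q(y_c) * (log(q(y_c)) - log(p(y_c)))
    return np.sum(q * (np.log(q) - np.log(p)))

cross_entropy = -np.sum(q * np.log(p))
entropy = -np.sum(q * np.log(q))

print(kl_divergence(q, p))      # D_KL(q || p)
print(cross_entropy - entropy)  # same value (up to floating point)
</code>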