data_mining:neural_network:gradient_descent, 2018/05/12 (current), phreazer
===== Saddle points =====
In high-dimensional spaces, a zero-gradient point is far more likely to be a saddle point than a local optimum. E.g. with 20,000 parameters, a critical point would have to curve upward in every one of those dimensions to be a local minimum, which is highly unlikely, so getting stuck in a local minimum is rare. The real problem is plateaus: regions where the gradient stays close to zero, which make learning very slow.
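A small illustrative sketch of both claims (this example is mine, not from the original notes; the function names are hypothetical). The first part is a toy model of why a critical point in 20,000 dimensions is almost never a local minimum; the second runs plain gradient descent on f(x, y) = x² - y², whose only critical point (0, 0) is a saddle.

```python
def local_min_probability(n_params, p_up=0.5):
    # Toy model: assume each dimension independently curves upward
    # with probability p_up at a zero-gradient point. A local minimum
    # requires upward curvature in *every* dimension at once.
    return p_up ** n_params

def grad(x, y):
    # Gradient of f(x, y) = x**2 - y**2: a minimum along x,
    # a maximum along y, hence a saddle at (0, 0).
    return 2 * x, -2 * y

def descend(x, y, lr=0.1, steps=200):
    # Vanilla gradient descent on f.
    for _ in range(steps):
        gx, gy = grad(x, y)
        x -= lr * gx
        y -= lr * gy
    return x, y

print(local_min_probability(2))       # 0.25: plausible in 2 dimensions
print(local_min_probability(20000))   # underflows to 0.0: effectively impossible

# Starting exactly on the y = 0 axis, descent walks straight into the saddle ...
print(descend(1.0, 0.0))
# ... but any tiny perturbation in y escapes along the descending direction.
print(descend(1.0, 1e-6))
```

The escape in the last call is exactly why saddle points are less of a trap than they look: noise (e.g. from mini-batches) perturbs the iterate off the unstable axis, after which the descending direction takes over.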