data_mining:xgboost

  
===== Gradient boosting =====
  
$F$ is the space of functions containing all regression trees.
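
In this formulation, the prediction for example $x_i$ is an additive ensemble of $K$ trees from $F$:

$$ \hat{y}_i = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in F $$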
  * Logistic loss $l(y_i,\hat{y}_i)=y_i \ln(1+e^{-\hat{y}_i})+(1-y_i)\ln(1+e^{\hat{y}_i})$ (LogitBoost)
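
A minimal NumPy sketch of this loss and its first two derivatives (the gradient $g$ and hessian $h$ that reappear in the Taylor expansion below); the function names are illustrative:

<code python>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(y, y_hat):
    # l(y, y_hat) = y*ln(1 + e^{-y_hat}) + (1-y)*ln(1 + e^{y_hat})
    # np.logaddexp(0, z) computes ln(1 + e^z) in a numerically stable way
    return y * np.logaddexp(0.0, -y_hat) + (1.0 - y) * np.logaddexp(0.0, y_hat)

def grad_hess(y, y_hat):
    # Derivatives of the loss w.r.t. y_hat:
    # g = sigmoid(y_hat) - y,  h = sigmoid(y_hat) * (1 - sigmoid(y_hat))
    p = sigmoid(y_hat)
    return p - y, p * (1.0 - p)
</code>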
  
Stochastic gradient descent cannot be applied, since the model consists of trees rather than numeric parameters.
  
The solution is **additive training**: start with a constant prediction and add a new function each round.
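
A minimal sketch of additive training with squared loss, using scikit-learn's DecisionTreeRegressor as the base learner (round count, depth and learning rate are illustrative choices):

<code python>
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_boosted(X, y, n_rounds=50, lr=0.1, max_depth=3):
    base = float(np.mean(y))           # constant starting prediction
    pred = np.full(len(y), base)
    trees = []
    for _ in range(n_rounds):
        # For squared loss the negative gradient is the residual y - pred;
        # each round fits a new tree to it and adds it to the ensemble
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, y - pred)
        pred += lr * tree.predict(X)
        trees.append(tree)
    return base, trees

def predict_boosted(base, trees, X, lr=0.1):
    return base + lr * sum(t.predict(X) for t in trees)
</code>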
  
  
==== Taylor expansion ====
  
Use a Taylor expansion to approximate a function through a power series (polynomial).
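
To second order around $x$:

$$ f(x + \Delta x) \approx f(x) + f'(x)\,\Delta x + \frac{1}{2} f''(x)\,\Delta x^2 $$

Applied to the loss at round $t$ with $\Delta x = f_t(x_i)$, where $g_i$ and $h_i$ are the first and second derivatives of $l$ with respect to the previous prediction $\hat{y}_i^{(t-1)}$:

$$ l(y_i, \hat{y}_i^{(t-1)} + f_t(x_i)) \approx l(y_i, \hat{y}_i^{(t-1)}) + g_i f_t(x_i) + \frac{1}{2} h_i f_t^2(x_i) $$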