data_mining:logistic_regression

===== Multiclass classification =====

E.g., from a 3-class problem, create 3 binary problems: $h_\theta(x)^{(i)} = P(y=i|x;\theta); i=1,2,3$
  
Then choose the class $i$ with $\max_i h_\theta^{(i)}(x)$.

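A minimal NumPy sketch of that prediction step (function and variable names such as ''predict_one_vs_all'' and ''Theta'' are illustrative, not from this wiki): fit one binary classifier per class, then score each classifier and take the argmax.

<code python>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict_one_vs_all(Theta, x):
    """Theta: (k, n+1) array, one row of fitted parameters per class.
    x: (n+1,) feature vector including the bias entry x_0 = 1."""
    scores = sigmoid(Theta @ x)    # h_theta^(i)(x) = P(y=i | x; theta) for each class i
    return int(np.argmax(scores))  # row index of the winning classifier; map to your class labels
</code>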
===== Addressing Overfitting =====
  - Feature reduction
    * Manual selection
    * Model selection algorithm
  - Regularization
    * Keep all features, but change the size/values of the parameters $\theta_j$.
      * Works well when there are many features that each contribute a little to predicting $y$.

==== Regularization ====

Example: heavily penalizing individual parameters forces them toward zero:

$\min \dots + 1000 \theta_3^2 + 1000 \theta_4^2$

Small parameters lead to a "simpler" hypothesis.

General form, penalizing all parameters $\theta_1, \dots, \theta_n$:

$\min \dots + \lambda \sum_{j=1}^n \theta_j^2$

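The $\dots$ abbreviates the base cost; for linear regression this gives the familiar regularized cost $J(\theta) = \frac{1}{2m}\left[\sum_{i=1}^m \left(h_\theta(x^{(i)}) - y^{(i)}\right)^2 + \lambda \sum_{j=1}^n \theta_j^2\right]$, where the penalty sum starts at $j=1$ so that $\theta_0$ is not penalized. A NumPy sketch under that assumption (names are illustrative):

<code python>
import numpy as np

def regularized_cost(theta, X, y, lam):
    """Regularized squared-error cost J(theta) for linear regression.
    X: (m, n+1) design matrix with bias column, y: (m,) targets."""
    m = len(y)
    residuals = X @ theta - y
    penalty = lam * np.sum(theta[1:] ** 2)   # lambda * sum_j theta_j^2 for j >= 1
    return (residuals @ residuals + penalty) / (2 * m)
</code>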
=== Gradient descent (Linear Regression) ===

$\dots + \frac{\lambda}{m} \theta_j$

$\theta_j := \theta_j \left(1 - \alpha \frac{\lambda}{m}\right) - \alpha \frac{1}{m} \sum^m_{i=1} \left(h_\theta(x^{(i)}) - y^{(i)}\right) x_j^{(i)}$

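The same update, vectorized in NumPy as a sketch (assuming the usual convention that $\theta_0$ is excluded from the shrinkage; names are illustrative):

<code python>
import numpy as np

def gradient_descent_step(theta, X, y, alpha, lam):
    """One regularized gradient-descent update for linear regression:
    theta_j := theta_j * (1 - alpha*lambda/m) - alpha/m * sum_i (h(x_i) - y_i) * x_ij"""
    m = len(y)
    errors = X @ theta - y                        # h_theta(x^(i)) - y^(i)
    grad = (X.T @ errors) / m                     # unregularized gradient
    shrink = np.full_like(theta, 1 - alpha * lam / m)
    shrink[0] = 1.0                               # do not shrink theta_0
    return theta * shrink - alpha * grad
</code>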
=== Normal equation (Linear Regression) ===
$$(X^T X + \lambda
\begin{bmatrix}
0 & \dots & \dots & 0 \\
\vdots & 1 & 0 & \vdots \\
\vdots & 0 & \ddots & 0 \\
0 & \dots & 0 & 1
\end{bmatrix})^{-1} X^T y$$

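A NumPy sketch of this closed form (illustrative names; ''np.linalg.solve'' replaces the explicit inverse for numerical stability). The matrix above is the identity with its top-left entry zeroed, so $\theta_0$ is not penalized; with $\lambda > 0$ the sum is invertible even when $X^T X$ is singular.

<code python>
import numpy as np

def normal_equation(X, y, lam):
    """Closed-form regularized solution (X^T X + lambda*L)^{-1} X^T y."""
    n = X.shape[1]
    L = np.eye(n)
    L[0, 0] = 0.0                                 # no penalty on the bias term
    return np.linalg.solve(X.T @ X + lam * L, X.T @ y)
</code>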
=== Gradient descent (Logistic Regression) ===

Distinguish between $\theta_0$ and $\theta_j$! ($\theta_0$ is not regularized.)

For $\theta_j$: $\dots + \frac{\lambda}{m} \theta_j$
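A NumPy sketch of the full regularized gradient, treating $\theta_0$ and $\theta_{j \ge 1}$ differently as stated above (names are illustrative):

<code python>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_gradient(theta, X, y, lam):
    """Regularized gradient for logistic regression: theta_0 gets the plain
    gradient; every other theta_j gets the extra (lambda/m) * theta_j term."""
    m = len(y)
    errors = sigmoid(X @ theta) - y
    grad = (X.T @ errors) / m
    grad[1:] += (lam / m) * theta[1:]             # ... + (lambda/m) * theta_j for j >= 1
    return grad
</code>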