data_mining:regression

  
==== Cost function ====
$\displaystyle\min_{\theta_0,\theta_1} \sum_{i=1}^m (h_\theta(x^{(i)})-y^{(i)})^2$
  
Simplified problem:
  
$\displaystyle\min_{\theta_0,\theta_1} \frac{1}{2m} \sum_{i=1}^m (h_\theta(x^{(i)})-y^{(i)})^2$
  
$h_\theta(x^{(i)}) = \theta_0 + \theta_1 x^{(i)}$
  
Cost function (squared error cost function) $J$:
  
$J(\theta_0,\theta_1) = \frac{1}{2m} \sum_{i=1}^m (h_\theta(x^{(i)})-y^{(i)})^2$
  
Goal: $\displaystyle\min_{\theta_0,\theta_1} J(\theta_0,\theta_1)$
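A minimal Python/NumPy sketch of this cost function; the data and parameter values are made up for illustration and are not taken from the page:

<code python>
import numpy as np

def compute_cost(theta0, theta1, x, y):
    """Squared error cost J(theta0, theta1) = 1/(2m) * sum_i (h(x_i) - y_i)^2."""
    m = len(y)
    h = theta0 + theta1 * x              # hypothesis h_theta(x) for all examples
    return np.sum((h - y) ** 2) / (2 * m)

# Illustrative data: y = x, so theta0 = 0, theta1 = 1 gives zero cost
x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 3.0])
print(compute_cost(0.0, 1.0, x, y))      # 0.0 (perfect fit)
print(compute_cost(0.0, 0.5, x, y))      # > 0 (worse fit)
</code>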
  
=== Functions (example with only $\theta_1$): ===
  
$\theta_j := \theta_j - \alpha \frac{\partial}{\partial\theta_j} J(\theta)$
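A minimal NumPy sketch of this update rule for linear regression, vectorized over all $\theta_j$ at once; the learning rate, data, and iteration count are illustrative assumptions:

<code python>
import numpy as np

def gradient_descent_step(theta, X, y, alpha):
    """One simultaneous update theta_j := theta_j - alpha * dJ/dtheta_j."""
    m = len(y)
    grad = X.T @ (X @ theta - y) / m     # gradient of the squared error cost J(theta)
    return theta - alpha * grad

# X has a leading column of ones so that theta[0] plays the role of theta_0
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])
theta = np.zeros(2)
for _ in range(1000):
    theta = gradient_descent_step(theta, X, y, alpha=0.1)
print(theta)                             # converges towards [0, 1]
</code>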

==== Normal equations ====

  * Feature/design matrix $X$ (dim: $m \times (n+1)$)
  * Vector $y$ (dim: $m$)

$\theta = (X^TX)^{-1}X^Ty$

  * Feature scaling is not necessary.

What if $X^TX$ is singular (not invertible)?

(Use pinv in Octave; see the NumPy sketch at the end of this section.)

**Reasons for singularity:**
  * Redundant features (linear dependence)
  * Too many features (e.g. $m \leq n$)
    * Solution: drop features or regularize

**When to use which?**

  * $m$ training examples, $n$ features
  * Gradient descent works well for large $n$ (> 1000); the normal equation has to invert an $(n \times n)$ matrix, which costs roughly $O(n^3)$.
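A minimal NumPy sketch of the normal equation; np.linalg.pinv plays the role of Octave's pinv and still returns a usable result when $X^TX$ is singular. The data is illustrative only:

<code python>
import numpy as np

# Design matrix X (m x (n+1)) with a leading column of ones, target vector y (length m)
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 2.0, 3.0])

# theta = (X^T X)^-1 X^T y, computed with the pseudoinverse for robustness
theta = np.linalg.pinv(X.T @ X) @ X.T @ y
print(theta)                             # approximately [0, 1]
</code>

No learning rate has to be tuned here, but inverting the $(n+1) \times (n+1)$ matrix becomes the bottleneck for large $n$, which is the trade-off described in the list above.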
  
===== Gradient Descent Improvements =====