Expectation Maximization
- Iterative approach to MLE when latent variables are present.
- Gaussian mixture models are an approach to density estimation; the parameters of the component distributions are fit using the EM algorithm.
Problem: Estimate the joint probability distribution for a dataset.
Density estimation: Select a probability distribution and its parameters that best explain the joint probability distribution of the observed data.
Assumption of MLE: All variables relevant to the problem are observed (no hidden, i.e. latent, variables).
When latent variables are present, an alternative formulation of maximum likelihood is required to search for appropriate model parameters ⇒ EM algorithm.
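A minimal sketch of why direct MLE breaks down, assuming a hypothetical two-component 1D Gaussian mixture generated with NumPy: the component label of each sample is the latent variable and is never observed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Latent variable: which component each sample belongs to (hidden in practice).
z = rng.choice([0, 1], size=500, p=[0.3, 0.7])

# Observed data: each sample is drawn from the Gaussian of its (hidden) component.
means, stds = np.array([-2.0, 3.0]), np.array([0.5, 1.0])
x = rng.normal(means[z], stds[z])

# Only x is available for density estimation; z is unknown, so the
# per-component parameters cannot be fit by direct maximum likelihood.
print(x[:5])
```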
EM Algorithm
Two modes:
* E-Step: Estimate the missing (= latent) variables.
* M-Step: Maximize the likelihood of the model parameters given the data and the current estimates of the latent variables.
Usually applied in unsupervised learning (density estimation, clustering); a sketch of both steps follows below.
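A minimal sketch of the two steps for a two-component 1D Gaussian mixture, assuming NumPy/SciPy; the function name and initialization are illustrative, not a reference implementation.

```python
import numpy as np
from scipy.stats import norm

def em_gmm_1d(x, n_iter=50):
    """EM for a two-component 1D Gaussian mixture (illustrative sketch)."""
    # Initial guesses for mixture weights, means, and standard deviations.
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.array([x.std(), x.std()])

    for _ in range(n_iter):
        # E-step: responsibilities = posterior probability of each component
        # for each sample, under the current parameter estimates.
        likelihood = w * norm.pdf(x[:, None], mu, sigma)      # shape (n, 2)
        resp = likelihood / likelihood.sum(axis=1, keepdims=True)

        # M-step: re-estimate parameters by responsibility-weighted MLE.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

    return w, mu, sigma
```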
Mixture model
A combination of multiple probability distribution functions in which the component memberships of the data are unspecified. GMM ⇒ estimate the mean and standard deviation of each Gaussian pdf.
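A short usage sketch with scikit-learn's GaussianMixture, which fits the mixture parameters via EM; the data here is the same kind of hypothetical two-component sample as above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.5, 150),
                    rng.normal(3.0, 1.0, 350)]).reshape(-1, 1)

# Fit a two-component GMM; parameters are estimated with the EM algorithm.
gmm = GaussianMixture(n_components=2, random_state=0).fit(x)

print("weights:", gmm.weights_)
print("means:  ", gmm.means_.ravel())
print("stddevs:", np.sqrt(gmm.covariances_).ravel())
```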