Bayes optimal error (best possible error)

Human-level error can be used as an estimate of the Bayes error (e.g. in computer vision):

  * H: 1%, Train: 8%, Dev: 10% => focus on bias reduction
  * H: 7.5%, Train: 8%, Dev: 10% => focus on variance reduction (more data, regularization)

What is human-level error? The best performance achievable by a human, as opposed to a level that is merely useful for the application.

Compare the gaps between human-level error, training error, and dev error to decide what to work on (see the sketch below):

  * Avoidable bias: gap between human-level error and training error
    * Train a bigger model
    * Train longer / use a better optimization algorithm
    * Try another NN architecture / hyperparameter search
  * Variance: gap between training error and dev error
    * Get more data
    * Regularization
    * Try another NN architecture / hyperparameter search
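
A minimal sketch of this diagnosis, assuming error rates are given as fractions and using human-level error as the Bayes-error proxy; the function name ''diagnose'' and the simple "larger gap wins" rule are illustrative, not part of the original notes:

<code python>
def diagnose(human_err, train_err, dev_err):
    """Suggest a focus area from human-level, training and dev error."""
    avoidable_bias = train_err - human_err  # gap: human level <-> training error
    variance = dev_err - train_err          # gap: training error <-> dev error

    if avoidable_bias >= variance:
        focus = "bias reduction (bigger model, train longer, other architecture)"
    else:
        focus = "variance reduction (more data, regularization, other architecture)"
    return avoidable_bias, variance, focus

# Examples from above:
print(diagnose(0.01, 0.08, 0.10))   # avoidable bias 7% > variance 2% => bias reduction
print(diagnose(0.075, 0.08, 0.10))  # avoidable bias 0.5% < variance 2% => variance reduction
</code>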