Using a single evaluation metric
Precision (% of examples predicted as class 1 that actually are class 1); Recall (% of actual class-1 examples that were correctly identified)
- Classifier A: Precision: 95%, Recall: 90%
- Classifier B: Precision: 98%, Recall: 85%
Problem: Unclear which classifier is better (precision/recall tradeoff). Solution: A single measure that combines both, the F1 score: the harmonic mean $2/((1/P)+(1/R))$ of precision and recall (more generally, some average of the metrics)
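The comparison above can be sketched directly: computing F1 for the two (hypothetical) classifiers resolves the tradeoff into one number.

```python
def f1(precision, recall):
    """F1 score: harmonic mean of precision and recall, 2 / ((1/P) + (1/R))."""
    return 2 / ((1 / precision) + (1 / recall))

# Classifiers from the notes
f1_a = f1(0.95, 0.90)  # Classifier A
f1_b = f1(0.98, 0.85)  # Classifier B
print(f"A: {f1_a:.4f}, B: {f1_b:.4f}")
```

By F1, Classifier A (about 0.924) edges out Classifier B (about 0.910), so the single metric gives an unambiguous ranking.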
Use Dev set + single number evaluation metric to speed-up iterative improvement
Metric tradeoffs
Maximize accuracy, subject to runningTime ≤ 100ms
N metrics: 1 optimizing metric, N-1 satisficing metrics (each only needs to reach some threshold)
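The optimizing/satisficing scheme can be sketched as a filter-then-maximize step; the model records and the 100ms threshold here are illustrative, not from the notes.

```python
def pick_model(models, max_runtime_ms=100):
    """Keep models meeting the satisficing constraint (runtime),
    then pick the one maximizing the optimizing metric (accuracy)."""
    feasible = [m for m in models if m["runtime_ms"] <= max_runtime_ms]
    return max(feasible, key=lambda m: m["accuracy"])

models = [
    {"name": "A", "accuracy": 0.92, "runtime_ms": 80},
    {"name": "B", "accuracy": 0.95, "runtime_ms": 150},  # too slow, excluded
    {"name": "C", "accuracy": 0.90, "runtime_ms": 60},
]
print(pick_model(models)["name"])  # B is most accurate but fails the threshold
```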
Train/Dev/Test set
Dev set / holdout set: Try ideas on dev set
Goal: Train and especially dev and test sets should come from the same distribution
Solution: Random shuffle (or stratified sample)
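A minimal sketch of the random-shuffle split (the fractions match the 60/20/20 scheme below; the function name is an illustration):

```python
import random

def split(data, dev_frac=0.2, test_frac=0.2, seed=0):
    """Shuffle before splitting so train/dev/test come from the same distribution."""
    data = data[:]                      # avoid mutating the caller's list
    random.Random(seed).shuffle(data)
    n = len(data)
    n_test = int(n * test_frac)
    n_dev = int(n * dev_frac)
    test = data[:n_test]
    dev = data[n_test:n_test + n_dev]
    train = data[n_test + n_dev:]
    return train, dev, test

train, dev, test = split(list(range(1000)))
print(len(train), len(dev), len(test))  # 600 200 200
```

For class-imbalanced data, a stratified sample (shuffling within each class) keeps the class proportions equal across the three sets.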
Sizes
- For 100 - 10,000 samples: 70% Train / 30% Test, or 60% Train / 20% Dev / 20% Test
- For ~1,000,000 samples (typical for NNs): 98% Train / 1% Dev / 1% Test
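The rule of thumb above can be expressed as a small helper; the one-million cutoff is the notes' example, not a hard rule.

```python
def split_fractions(n):
    """Classic splits for small datasets; tiny dev/test fractions once n is
    very large, since 1% of a million examples is still 10,000 samples."""
    if n < 1_000_000:
        return {"train": 0.6, "dev": 0.2, "test": 0.2}
    return {"train": 0.98, "dev": 0.01, "test": 0.01}

print(split_fractions(5_000))      # small dataset: 60/20/20
print(split_fractions(2_000_000))  # large dataset: 98/1/1
```

The point is that dev and test sets only need to be big enough to detect metric differences between models, so their share shrinks as the dataset grows.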