Data Mining: Strategy



Using a single evaluation metric

Precision: % of examples classified as class 1 that actually are class 1
Recall: % of actual class-1 examples that were correctly identified

  • Classifier A: Precision: 95%, Recall: 90%
  • Classifier B: Precision: 98%, Recall: 85%

Problem: Unclear which classifier is better (precision/recall tradeoff). Solution: a single measure that combines both, the F1 score: the harmonic mean $F_1 = \frac{2}{\frac{1}{P} + \frac{1}{R}}$, or more generally some average of the two.
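A minimal sketch of these metrics in Python, assuming raw confusion-matrix counts (the names tp, fp, fn are illustrative, not from the notes):

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of examples predicted as class 1 that truly are class 1."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of actual class-1 examples that were identified."""
    return tp / (tp + fn)

def f1_score(p: float, r: float) -> float:
    """Harmonic mean of precision and recall: 2 / (1/p + 1/r)."""
    return 2 / (1 / p + 1 / r)

# Classifiers A and B from the example above:
print(f1_score(0.95, 0.90))  # A: ~0.9243
print(f1_score(0.98, 0.85))  # B: ~0.9104 -> A wins on the combined metric
```

On the combined metric the tradeoff disappears: classifier A comes out ahead.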

Use a dev set plus a single-number evaluation metric to speed up iterative improvement.

Metric tradeoffs

Maximize accuracy, subject to runningTime ≤ 100 ms

With N metrics: 1 optimizing metric, N−1 satisficing metrics (each only has to reach some threshold).
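A minimal sketch of this selection rule, assuming hypothetical candidate models (the tuples below are made up for illustration, not measurements from the notes):

```python
candidates = [
    # (name, accuracy, running_time_ms)
    ("A", 0.92, 80),
    ("B", 0.95, 150),  # best accuracy, but violates the runtime threshold
    ("C", 0.93, 95),
]

MAX_RUNTIME_MS = 100  # satisficing: only has to stay under this threshold

# Filter on the satisficing metric, then optimize the remaining one.
feasible = [c for c in candidates if c[2] <= MAX_RUNTIME_MS]
best = max(feasible, key=lambda c: c[1])  # optimizing: maximize accuracy
print(best)  # ('C', 0.93, 95)
```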

Train/Dev/Test set

Dev set (also called holdout set): try out ideas on the dev set.

Goal: the training set, and especially the dev and test sets, should come from the same distribution.

Solution: random shuffle (or stratified sampling); a minimal split sketch follows the list below.

  • For 100–10,000 samples: 70% train / 30% test, or 60% train / 20% dev / 20% test
  • For ~1,000,000 samples (NNs): 98% train / 1% dev / 1% test
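A minimal sketch of a random-shuffle 98/1/1 split, assuming the data fits in a Python list (the function name and fractions are illustrative defaults):

```python
import random

def split(data, train_frac=0.98, dev_frac=0.01, seed=0):
    data = data[:]                     # copy, so the caller's list is untouched
    random.Random(seed).shuffle(data)  # shuffle so all splits share one distribution
    n = len(data)
    n_train = int(n * train_frac)
    n_dev = int(n * dev_frac)
    train = data[:n_train]
    dev = data[n_train:n_train + n_dev]
    test = data[n_train + n_dev:]      # remainder, ~1%
    return train, dev, test

train, dev, test = split(list(range(1_000_000)))
print(len(train), len(dev), len(test))  # 980000 10000 10000
```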

Change dev/test set and metric

Change the metric if its rank ordering of classifiers isn't "right" (i.e., it prefers a classifier that is actually worse for the application).

One solution: assign higher weights to certain error types in the metric.
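A minimal sketch of such a weighted error metric; the particular weighting scheme here is an assumption for illustration, not from the notes:

```python
def weighted_error(y_true, y_pred, weights):
    """Sum of weights over misclassified examples, normalized by total weight."""
    total = sum(weights)
    wrong = sum(w for t, p, w in zip(y_true, y_pred, weights) if t != p)
    return wrong / total

y_true  = [1, 0, 1, 1]
y_pred  = [1, 1, 0, 1]
weights = [1, 10, 1, 1]  # the second example's error counts 10x as much
print(weighted_error(y_true, y_pred, weights))  # 11/13 ~ 0.846
```

Up-weighting unacceptable errors changes the rank ordering, so the metric again prefers the classifier you actually consider better.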

Two steps:

  1. Place the target (define the evaluation metric)
  2. Work out how to shoot at the target (how to optimize the metric)