4 First connection with machine learning
Some works in machine learning focus on “guessing the correct answer”, and this focus is reflected in the way their machine-learning algorithms – especially classifiers – are trained and used.
In § 2.3, however, we emphasized that “guessing successfully” can be a misleading goal, because it can lead us away from making optimal decisions. We shall now see two simple but concrete examples of this.
4.1 A “max-success” classifier vs an optimal classifier
We shall compare the results obtained in some numerical simulations by using
- a Machine-Learning Classifier trained to make as many successful guesses as possible
- a prototype “Optimal Predictor Machine” trained to make the optimal decision
For the moment we treat both as “black boxes”, that is, we don’t yet study how they calculate their outputs (although you may already have a good guess at how the Optimal Predictor Machine works).
Their operation is implemented in this R script that we now load:

source('code/mlc_vs_opm.R')
This script simply defines the function hitsvsgain():

hitsvsgain(ntrials, chooseAtrueA, chooseAtrueB, chooseBtrueB, chooseBtrueA, probsA)

having six arguments:
- ntrials: how many simulated guesses (trials) to make
- chooseAtrueA: utility gained by choosing A when the true alternative is indeed A
- chooseAtrueB: utility gained by choosing A when the true alternative is B instead
- chooseBtrueB: utility gained by choosing B when the true alternative is indeed B
- chooseBtrueA: utility gained by choosing B when the true alternative is A instead
- probsA: a tuple of probabilities (between 0 and 1), recycled if necessary, for the true alternative being A in each simulated guess; the corresponding probabilities for B are therefore 1-probsA. If this argument is omitted it defaults to 0.5 (not very interesting)
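The actual contents of code/mlc_vs_opm.R are not our concern yet, but to make the idea more concrete, here is a minimal sketch of how such a simulation could work. It rests on two assumptions that are not verified against the real script: that the Machine-Learning Classifier always guesses whichever alternative is more probable, and that the Optimal Predictor Machine always chooses whichever alternative has the larger expected utility. The function name hitsvsgainSketch and all internal details are hypothetical.

## Minimal, hypothetical sketch (not the contents of code/mlc_vs_opm.R).
## Assumes: the ML classifier guesses the more probable alternative;
## the optimal predictor machine maximizes expected utility.
hitsvsgainSketch <- function(ntrials, chooseAtrueA, chooseAtrueB,
                             chooseBtrueB, chooseBtrueA, probsA = 0.5) {
    probs <- rep(probsA, length.out = ntrials)   # recycle the probabilities over the trials
    trueA <- runif(ntrials) < probs              # simulate the true alternative in each trial
    ## Machine-Learning Classifier: choose A whenever A is the more probable alternative
    mlcA <- probs > 0.5
    ## Optimal Predictor Machine: choose A whenever its expected utility is larger
    opmA <- (probs * chooseAtrueA + (1 - probs) * chooseAtrueB) >
        (probs * chooseBtrueA + (1 - probs) * chooseBtrueB)
    ## total utility and number of correct guesses for a given sequence of choices
    gain <- function(chooseA) sum(ifelse(chooseA,
                                         ifelse(trueA, chooseAtrueA, chooseAtrueB),
                                         ifelse(trueA, chooseBtrueA, chooseBtrueB)))
    hits <- function(chooseA) sum(chooseA == trueA)
    data.frame(machine = c('ML classifier', 'optimal predictor'),
               successes = c(hits(mlcA), hits(opmA)),
               gain = c(gain(mlcA), gain(opmA)))
}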
4.2 Example 1: electronic component
Let’s apply our two classifiers to the Accept or discard? problem of § 1. We call A the alternative in which the component won’t fail before one year; if this alternative were known at the time of the decision, the component should be accepted. We call B the alternative in which the component will fail within a year; if this alternative were known, the component should be discarded. Remember that the crucial point here is that the classifiers don’t have this information at the moment of making the decision.
We simulate this decision for 100 000 components (“trials”), assuming that the probability that a component won’t fail (alternative A) can be 0.05, 0.20, 0.80, or 0.95. The values of the arguments should be clear:
hitsvsgain(ntrials=100000, chooseAtrueA=+1, chooseAtrueB=-11, chooseBtrueB=0, chooseBtrueA=0, probsA=c(0.05, 0.20, 0.80, 0.95))
Trials: 100000
Machine-Learning Classifier: successes 87423 ( 87.4 %) | total gain -26033
Optimal Predictor Machine: successes 72662 ( 72.7 %) | total gain 10086
Note how the machine-learning classifier is the one that makes the most successful guesses (around 87%), and yet it leads to a net loss! If the utility were in kroner, this classifier would cause the company producing the components a net loss of around 26 000 kr.
The optimal predictor machine, on the other hand, makes fewer successful guesses overall (around 73%), and yet it leads to a net gain! It would earn the company around 10 000 kr.
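The reason for the difference can be seen with a quick expected-utility calculation, consistent with the output above. Consider the trials in which the probability of no failure is 0.80. Accepting such a component has an expected gain of 0.80 × (+1) + 0.20 × (-11) = -1.4, whereas discarding always gives 0. The optimal predictor machine therefore discards these components even though they will most probably not fail; accepting becomes worthwhile only when the probability of no failure exceeds 11/12 ≈ 0.92, which among our four values happens only at 0.95. The max-success classifier instead accepts whenever no failure is more probable than failure, and the rare but very costly failures among those accepted components wipe out its gains.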
4.3 Example 2: find Aladdin! (image recognition)
A typical use of machine-learning classifiers is for image recognition: for instance, the classifier guesses whether a particular subject is present in the image or not.
Intuitively one may think that “guessing successfully” should be the best goal here. But exceptions to this may be more common than one thinks. Consider the following scenario:
Bianca has a computer folder with 10 000 photos. Some of these include her beloved cat Aladdin, who sadly passed away recently. She would like to select all photos that include Aladdin and save them in a separate “Aladdin” folder. Doing this by hand would take too long, if at all possible; so Bianca wants to employ a machine-learning classifier.
For Bianca it’s important that no photo with Aladdin goes missing, so she would be very sad if any photo with him weren’t correctly recognized; on the other hand she doesn’t mind if some photos without him end up in the “Aladdin” folder – she can delete them herself afterwards.
Let’s apply and compare our two classifiers to this image-recognition problem, again using the hitsvsgain() function. We call A the case where Aladdin is present in a photo, and B the case where he isn’t. To reflect Bianca’s preferences, let’s use these “emotional utilities”:
- chooseAtrueA = +2 : Aladdin is correctly recognized
- chooseBtrueA = -2 : Aladdin is not recognized and the photo goes missing
- chooseBtrueB = +1 : the absence of Aladdin is correctly recognized
- chooseAtrueB = -1 : a photo without Aladdin ends up in the “Aladdin” folder
and let’s say that the photos may have probabilities 0.3, 0.4, 0.6, 0.7 of including Aladdin:
hitsvsgain(ntrials=10000, chooseAtrueA=+2, chooseAtrueB=-1, chooseBtrueB=1, chooseBtrueA=-2, probsA=c(0.3, 0.4, 0.6, 0.7))
Trials: 10000
Machine-Learning Classifier: successes 6502 ( 65 %) | total gain 4426
Optimal Predictor Machine: successes 6105 ( 61 %) | total gain 5788
Again we see that the machine-learning classifier makes more successful guesses than the optimal predictor machine, but the latter yields a higher “emotional utility”.
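The reason is the same as in the previous example. Consider the photos with probability 0.4 of including Aladdin. Putting such a photo in the “Aladdin” folder has expected utility 0.4 × (+2) + 0.6 × (-1) = +0.2, whereas leaving it out has expected utility 0.4 × (-2) + 0.6 × (+1) = -0.2. The optimal predictor machine therefore includes these photos even though Aladdin is probably not in them: with Bianca’s utilities it pays to include a photo whenever the probability that Aladdin is in it exceeds 1/3. The max-success classifier instead leaves them out, because “no Aladdin” is the more probable answer; but those are exactly the misses that Bianca minds most.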
You may sensibly object that this result could depend on the particular utilities or probabilities chosen for this example. The next exercise helps answer this objection.
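If you want to probe this objection right away, you can simply re-run hitsvsgain() with other arguments of your choice; the values below are only a hypothetical variation, and its output is not shown here:

hitsvsgain(ntrials=10000, chooseAtrueA=+1, chooseAtrueB=-1, chooseBtrueB=+1, chooseBtrueA=-1, probsA=c(0.3, 0.4, 0.6, 0.7))

With symmetric utilities like these (every correct guess is worth +1 and every wrong guess -1), maximizing the number of successful guesses and maximizing the expected utility single out the same choice, so the two machines should behave essentially alike; the differences appear when the utilities are asymmetric, as in the two examples above.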