4  First connection with machine learning

Published 2023-11-16

Many works in machine learning focus on “guessing the correct answer”, and this focus is reflected in the way their algorithms – especially classifiers – are trained and used.

In § 2.3, however, we emphasized that “guessing successfully” can be a misleading goal, because it can lead us away from guessing optimally. We shall now see two simple but concrete examples of this.

4.1 A “max-success” classifier vs an optimal classifier

Note

You can also find the code for this chapter and its exercises in this JupyterLab notebook for R and (courtesy of Viktor Karl Gravdal!) in this JupyterLab notebook for Python.

We shall compare the results obtained in some numerical simulations by using

  • a Machine-Learning Classifier trained to make as many successful guesses as possible
  • a prototype “Optimal Predictor Machine” trained to make the optimal decision

For the moment we treat both as “black boxes”: that is, we don’t yet study how they calculate their outputs (although you may already have a good guess at how the Optimal Predictor Machine works).

Their operation is implemented in this R script that we now load:

source('code/mlc_vs_opm.R')

This script simply defines the function hitsvsgain():

hitsvsgain(ntrials, chooseAtrueA, chooseAtrueB, chooseBtrueB, chooseBtrueA, probsA)

having six arguments:

  • ntrials: how many simulations of guesses to make
  • chooseAtrueA: utility gained by guessing A when the true alternative is indeed A
  • chooseAtrueB: utility gained by guessing A when the true alternative is B instead
  • chooseBtrueB: utility gained by guessing B when the true alternative is indeed B
  • chooseBtrueA: utility gained by guessing B when the true alternative is A instead
  • probsA: a vector of probabilities (between 0 and 1) that the true alternative is A, to be used in the simulations (recycled over the trials if necessary); the corresponding probabilities for B are therefore 1-probsA. If this argument is omitted it defaults to 0.5 (not very interesting)
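The script itself we treat as a black box in this chapter. Purely for orientation, here is a minimal sketch of how a function with this signature could be implemented; it assumes that the Machine-Learning Classifier always guesses the more probable alternative, while the Optimal Predictor Machine chooses the alternative with the larger expected utility. This sketch is not the code in code/mlc_vs_opm.R, which may differ in its details:

## Illustrative sketch only -- not the contents of 'code/mlc_vs_opm.R'
hitsvsgain_sketch <- function(ntrials, chooseAtrueA, chooseAtrueB,
                              chooseBtrueB, chooseBtrueA, probsA = 0.5) {
    pA <- rep_len(probsA, ntrials)   # recycle the probabilities over the trials
    trueA <- runif(ntrials) < pA     # simulate which alternative is true in each trial
    ## Machine-Learning Classifier: guess the more probable alternative
    mlcA <- pA >= 0.5
    ## Optimal Predictor Machine: choose the alternative with the larger expected utility
    opmA <- (pA * chooseAtrueA + (1 - pA) * chooseAtrueB) >=
        (pA * chooseBtrueA + (1 - pA) * chooseBtrueB)
    ## utility actually gained in each trial, given the choice made
    gain <- function(chooseA) ifelse(chooseA,
                                     ifelse(trueA, chooseAtrueA, chooseAtrueB),
                                     ifelse(trueA, chooseBtrueA, chooseBtrueB))
    cat('Trials:', ntrials, '\n')
    cat('Machine-Learning Classifier: successes', sum(mlcA == trueA),
        '| total gain', sum(gain(mlcA)), '\n')
    cat('Optimal Predictor Machine:   successes', sum(opmA == trueA),
        '| total gain', sum(gain(opmA)), '\n')
}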

4.2 Example 1: electronic component

Let’s apply our two classifiers to the Accept or discard? problem of § 1. We call A the alternative in which the component won’t fail within a year, and which should therefore lead us to accept the component if this alternative were known at the time of the decision. We call B the alternative in which the component will fail within a year, and which should therefore lead us to discard it if this alternative were known at the time of the decision. Remember that the crucial point here is that the classifiers don’t have this information at the moment of making the decision.

We simulate this decision for 100 000 components (“trials”), assuming that the probability that a component won’t fail within a year (alternative A) can be 0.05, 0.20, 0.80, or 0.95. The values of the arguments should be clear:

hitsvsgain(ntrials=100000, chooseAtrueA=+1, chooseAtrueB=-11, chooseBtrueB=0, chooseBtrueA=0, probsA=c(0.05, 0.20, 0.80, 0.95))

Trials: 100000
Machine-Learning Classifier: successes 87522 ( 87.5 %) | total gain -25402
Optimal Predictor Machine:   successes 72608 ( 72.6 %) | total gain 9904

Note how the machine-learning classifier is the one that makes the most successful guesses (around 88%), and yet it leads to a net loss! If the utility were measured in kroner, this classifier would cause the company producing the components a net loss of around 25 000 kr.

The optimal predictor machine, on the other hand, makes fewer successful guesses (around 73%), and yet it leads to a net gain! It would earn the company around 10 000 kr.
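To get a feeling for how this can happen, it is useful to compute, for each probability value, the gain that each choice yields on average per component. The calculation below is just one way to analyse the situation, and it partly anticipates the exercise that follows:

## probability that a component won't fail (alternative A)
probsA <- c(0.05, 0.20, 0.80, 0.95)
## average gain per component of each choice, with the utilities used above
gainAccept  <- probsA * (+1) + (1 - probsA) * (-11)   # choose A: accept
gainDiscard <- probsA * 0    + (1 - probsA) * 0       # choose B: discard
rbind(probsA, gainAccept, gainDiscard)

Compare, for each probability value, which choice yields the higher average gain, and which alternative is the more probable one.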

Exercise

How is this possible? Try to understand what’s happening; feel free to investigate by modifying the hitsvsgain() function so that it prints additional output.

4.3 Example 2: find Aladdin! (image recognition)

A typical use of machine-learning classifiers is for image recognition: for instance, the classifier guesses whether a particular subject is present in the image or not.

Intuitively one may think that “guessing successfully” should be the best goal here. But exceptions to this may be more common than one thinks. Consider the following scenario:

Bianca has a computer folder with 10 000 photos. Some of these include her beloved cat Aladdin, who sadly passed away recently. She would like to select all photos that include Aladdin and save them in a separate “Aladdin” folder. Doing this by hand would take too long, if at all possible; so Bianca wants to employ a machine-learning classifier.

For Bianca it’s important that no photo with Aladdin goes missing, so she would be very sad if any photo with him weren’t correctly recognized; on the other hand she doesn’t mind if some photos without him end up in the “Aladdin” folder – she can delete them herself afterwards.

Let’s apply and compare our two classifiers to this image-recognition problem, using again the hitsvsgain() function. We call A the case where Aladdin is present in a photo, and B where he isn’t. To reflect Bianca’s preferences, let’s use these “emotional utilities”:

  • chooseAtrueA = +2: Aladdin is correctly recognized
  • chooseBtrueA = -2: Aladdin is not recognized and the photo goes missing
  • chooseBtrueB = +1: the absence of Aladdin is correctly recognized
  • chooseAtrueB = -1: a photo without Aladdin ends up in the “Aladdin” folder

and let’s say that the photos may have probabilities 0.3, 0.4, 0.6, 0.7 of including Aladdin:

hitsvsgain(ntrials=10000, chooseAtrueA=+2, chooseAtrueB=-1, chooseBtrueB=1, chooseBtrueA=-2, probsA=c(0.3, 0.4, 0.6, 0.7))

Trials: 10000
Machine-Learning Classifier: successes 6442 ( 64.4 %) | total gain 4350
Optimal Predictor Machine:   successes 5988 ( 59.9 %) | total gain 5504

Again we see that the machine-learning classifier makes more successful guesses than the optimal predictor machine, but the latter yields a higher “emotional utility”.

You may sensibly object that this result could depend on the particular utilities or probabilities chosen for this example. The next exercise helps answer your objection.

Exercise
  • Is there any case in which the optimal predictor machine yields a strictly lower utility than the machine-learning classifier?

    • Try using different utilities, for instance using ±5 instead of ±2, or whatever other values you please.
    • Try using different probabilities as well.
  • As in the previous exercise, try to understand what’s happening. Consider this question: how many photos including Aladdin did each classifier miss?

    Modify the hitsvsgain() function to output this result.

  • Do the comparison using the following utilities: chooseAtrueA=+1, chooseAtrueB=-1, chooseBtrueB=1, chooseBtrueA=-1. What’s the result? What does this tell you about the relationship between the machine-learning classifier and the optimal predictor machine?