“*There are three kinds of lies: lies, damned lies, and statistics*” — Origin Disputed

If you look around student internet forums and other social media enough, you can’t miss the concept known as the ‘WAM Booster‘ or ‘GPA Booster‘ course. The basic idea is that you take one or two spare electives or general education classes (for campuses that have them) that are easy to get a high grade in, and use them to deliberately ‘inflate’ your WAM or GPA.

As generally happens with the internet, these ideas grow and spread, and start to build a mythology all of their own. At that point, students eager for an easy ticket to a high WAM/GPA will fully get on board, having never even thrown a critical eye at the reality. The idea is even being used by nefarious characters, i.e., contract cheating organisations, to draw in victims with nasty knock-ons including blackmail.

One of the skills that make physicists so employable is the ability to be skeptical about an idea and then use quantitative methods like an ultra-sharp sword to cut through the talk to get at the reality. And that is exactly what I will set out to do in this blog-post. Essentially, the question is:

Will the ‘WAM Booster’ concept really send your WAM soaring?

Or deliver something far more modest?

I got interested in this problem when my colleague, Sue Hagon, was telling me about doing a simple back-of-the-envelope calculation on it. Imagine a typical 3-year undergraduate degree with 24 courses in it, and let two of them be ‘WAM Booster’ courses (I’ll use WAM for the remainder of this post as WAM always sounds cooler).

Sue’s calculation was: assume a student with a WAM of 65 from 22 courses who happens to get 75 in the other two courses; what will their final WAM be? You can work this out pretty quickly, it is just (65 * 22 + 75 * 2)/24 = 65.83. Wait, those ten extra marks in two courses are worth only 0.83 on WAM? Yep, check it for yourself if you don’t trust us.
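Sue’s arithmetic is easy to sanity-check; here’s a quick Python version (my own sketch, not part of her original calculation):

```python
# 22 courses at 65 plus 2 'booster' courses at 75, weighted by course count
core_wam, booster_mark = 65, 75
final_wam = (core_wam * 22 + booster_mark * 2) / 24

print(round(final_wam, 2))              # 65.83
print(round(final_wam - core_wam, 2))   # 0.83 -- the entire 'boost'
```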

Seeing this immediately tempted me into testing a more extreme example: a student with a WAM of 50 who manages to get 95 in a pair of ‘WAM Booster’ courses. Surely this must take your WAM to the moon, right? That’s an effective ‘WAM differential’, defined as the difference between the mark in a course and your WAM, of a staggering +45… for two courses. Take a guess first, it’s always good to know what your intuition says, and then do the maths (answer down below).

“*In the end, there is no silver bullet, no substitute for actually knowing one’s subject and one’s organization, which is partly a matter of experience and partly a matter of unquantifiable skill. Many matters of importance are too subject to judgement and interpretation to be solved by standardized metrics. Ultimately, the issue is not one of metrics versus judgment, but metrics as informing judgement, which includes knowing how much weight to give to metrics, recognizing their characteristic distortions, and appreciating what can’t be measured. In recent decades, too many politicians, business leaders, policymakers, and academic officials have lost sight of that.*” — Jerry Z. Muller

I spent a few hours this week marking computational physics exercises that our 3rd year physics undergrads do using Jupyter Notebook and the various libraries for data visualization, e.g., Matplotlib. I’ve always been a big believer in “*It is not fair to ask of others what you are not willing to do yourself*” (Eleanor Roosevelt), so I decided to sink a bit of Sunday into a personal refresher course on these aspects by doing a full workup of the statistics of WAM Boosting, with a few nice bits of DataViz to fully highlight how little edge there really is in this stuff.

I’m happy to share my model, but the basics are as follows:

- I assume a set of 22 normal courses that give a ‘core WAM’ of *x*. For simplicity, I’ve assumed this distribution to be monodisperse (i.e., they’re all the same mark, which is *x*). The monodispersity shouldn’t massively affect the conclusions, but I’m happy for the curious to test this.
- I assume a pair of WAM Booster courses, both of which get a mark of *y*, which I’ve called the ‘Boost’. I’ve assumed they get the same mark just to keep the model simple (doable in a few hours not days).
- All the numerical modelling is done in Excel, simply because some clever tricks with absolute & relative cells and fill-right and fill-down mean I can MacGyver the primary data out quicker than by writing Python for it, and see the numbers in real time for checking.
- But Excel sucks hard for plots, so I then kick the results out as .csv and use Jupyter Notebook to handle the data from there, with pandas for the assembly and some of the nice 2D colourplot and contour map features of matplotlib to get all the visualisations out.
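For anyone who’d rather skip the Excel step entirely, the same primary data can be generated directly with numpy and pandas. This is my own sketch of the model as described above, with variable names of my choosing:

```python
import numpy as np
import pandas as pd

# Core WAM x (22 monodisperse courses) and Boost y (mark in the 2 booster courses)
x = np.arange(50, 101)   # core WAM values
y = np.arange(50, 101)   # boost values
X, Y = np.meshgrid(x, y)

final_wam = (22 * X + 2 * Y) / 24   # weighted mean over all 24 courses
effective_boost = final_wam - X     # final WAM minus core WAM
differential = Y - X                # boost minus core WAM

# Rows are Boost, columns are core WAM -- ready for matplotlib's pcolormesh/contour
df = pd.DataFrame(final_wam, index=y, columns=x)
print(df.loc[95, 50])   # 53.75 -- the extreme example from earlier
```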

So without any further ado, let’s get into some visualisations and really dig into how this WAM Boosting nonsense works:

Figure 1 shows a 2D colormap of a student’s final WAM (24 courses) plotted as a function of core WAM (22 courses) on the *x*-axis and what I call ‘Boost’, which is just the score for the remaining two courses, on the *y*-axis. To help with reading, I’ve added two sets of contours to the plot. The black solid contours are just the final WAM, hence those lines running parallel to the gradations in the colormap. The blue dash-dot contours show a parameter that I’ve called the effective WAM differential, which is the difference between the boost and the core WAM. A positive WAM differential means the boost marks are higher than the core WAM. A negative WAM differential means the boost marks ended up lower than the core WAM.

To walk you through the plot, consider the extreme case from earlier, which is a student with a core WAM of 50 who gets two scores of 95. This is a position in the far upper-left corner, essentially at 95 on the *y*-axis and around where the blue +45 WAM differential contour would be. You can see the black 55 final WAM contour coming in close here, which is consistent with the WAM of 53.75 that this student would get if you run the numbers (answering the question from earlier).

Yes, you read that right, even in the case where a student just barely passing 22 of their courses manages to get a totes amazeballs 95 on their two ‘WAM Booster’ courses, the most they get as an increase in WAM is a measly 3.75 marks. Wow. Such disappoint.

Figure 1 might be a little tricky to read for those who aren’t familiar with colormaps and contour plots, so it might be easier from here to calculate a new parameter called the ‘Effective WAM Boost’, which is the Final WAM for 24 courses minus the core WAM for 22 courses. In the case above, this would be 53.75 – 50.00 = 3.75. We can plot this as a colormap versus core WAM and Boost too, as we see in Fig. 2.
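It’s worth noting that the Effective WAM Boost has a simple closed form: (22x + 2y)/24 − x = (y − x)/12, i.e., one twelfth of the WAM differential. A small helper function (my own hypothetical name) makes this concrete:

```python
def effective_boost(core_wam, booster_mark):
    """Final WAM (24 courses) minus core WAM (22 courses).

    Algebraically this collapses to (booster_mark - core_wam) / 12:
    each point of WAM differential is worth just 1/12 of a mark.
    """
    final_wam = (22 * core_wam + 2 * booster_mark) / 24
    return final_wam - core_wam

print(effective_boost(50, 95))            # 3.75 -- the extreme case above
print(round(effective_boost(65, 75), 2))  # 0.83 -- Sue's original example
```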

Figure 2 shows effective boost on a color scale that runs from pure yellow at an effective boost of zero, i.e., the two WAM Booster courses get the same score as the core WAM, to green for a positive effective WAM boost, i.e., the two courses lift final WAM above core WAM, and red for a negative effective WAM boost, i.e., the two courses drag final WAM below core WAM.

The first thing that’s evident in Fig. 2 is the limits on effective WAM boost, which takes its most positive value of +4.167 in the top left corner (student with core WAM 50 and 100 on their two WAM Booster courses) and -4.167 in the bottom right corner (student with a core WAM of 100 and 50 in their two WAM Booster courses).

The symmetry in this plot is interesting because what it implies is that almost all of the benefit of WAM Booster courses goes to the students with the lowest core WAM, which will immediately raise the hackles of all the meritocratic elitists out there. Except that the gain made from this is small in real terms, at most 8.3% (4.167/50) for a student with a core WAM of 50. The effect drops off pretty quickly, for example, a student with a core WAM of 65 can’t get any more than a 4.5% gain (2.917/65), a student with a core WAM of 80 can’t get any more than 2.1% (1.667/80), and a student with a core WAM of 95 can’t get more than 0.44% (0.417/95).
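Those percentages all follow from the (y − x)/12 boost with y capped at 100; a quick check (my own sketch):

```python
# Best-case boost: booster mark of 100, expressed relative to core WAM
for core in (50, 65, 80, 95):
    max_boost = (100 - core) / 12
    print(f"core WAM {core}: max boost +{max_boost:.3f} ({max_boost / core:.2%})")
```

The relative gain shrinks on both fronts as core WAM rises: the numerator (100 − x)/12 falls while the denominator x grows.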

The other interesting aspect of Fig. 2 is that WAM Booster courses can present risk rather than reward to students with high WAM. What I mean here is that the effective boost for a student with high core WAM will be negative unless the score on those two courses is at least equal to the core WAM. This means that the philosophy that we used to take on such courses when I was an undergrad in the early 1990s, which was that ’50 was a pass and 51 was a sign of misplaced effort’ would be a grave tactical error in these modern ‘WAM Booster’ times (i.e., you can’t opt out of the game if the education system makes WAM everything). Students who have a high core WAM and want to hold it cannot afford to do anything but look to knock such courses out of the park as well.

At this point I want to return to what I see as the origin of this whole WAM Booster nonsense, which is, as I’ve pointed out in my last two blog posts here and here, that the uncertainty on a statistical quantifier such as WAM is at the integer level. Much of the pathology of WAM arises because it’s a) often presented to several decimal places and b) presented as though there is no uncertainty in the measure at all. As anyone who has done a first year physics course will know, this is essentially scientific malpractice.

So how large is the uncertainty here? For the core WAM, since it’s monodisperse, it is zero, but the final WAM is not monodisperse at all, and so its standard deviation, i.e., uncertainty, is far from zero. Crucially, how does it compare to the boost?

Figure 3 plots the statistical uncertainty, which ranges from 0 in the case where no effective boost has been obtained to a maximum of 14.116 in the top left and bottom right corners. At these two maximal points, indeed everywhere on the entire 2D map, this uncertainty exceeds the effective WAM boost by a factor of 3.39. The reason why will be evident in Fig. 4. To put it completely bluntly, lest it might be missed — in a proper treatment of WAM as a statistical quantity, any boost obtained by the gaming of two courses is less than 30% of the resulting statistical uncertainty from doing so.
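These numbers are easy to reproduce, assuming (as I have throughout) the sample standard deviation over all 24 marks:

```python
import numpy as np

# Worst case: core WAM of 50 across 22 courses, 100 in the 2 booster courses
core_wam, booster_mark = 50, 100
marks = np.array([core_wam] * 22 + [booster_mark] * 2)

boost = marks.mean() - core_wam      # effective WAM boost
uncertainty = marks.std(ddof=1)      # sample standard deviation of the 24 marks

print(round(boost, 3))                # 4.167
print(round(uncertainty, 3))          # 14.116
print(round(uncertainty / boost, 2))  # 3.39
```

Because both the boost and the standard deviation scale linearly with the WAM differential, their ratio is the same everywhere on the map.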

In other words, if you are a true professional and always quote WAM with an uncertainty and a number of significant figures commensurate to that uncertainty, then the whole WAM Boosting thing is no longer a thing. For our student with a WAM of 65 who gets two courses with 75, the correct WAM to write is 65 +/- 3 or perhaps 65.8 +/- 2.8, and in that context, a boost of 0.83 is meaningless. Likewise, for our student with a WAM of 50 who gets two courses with 95, the correct WAM to write is 54 +/- 13 or perhaps 53.8 +/- 12.7, and in that context, a boost of 3.75 is also meaningless.

Breaking the monodispersity of the core WAM, which is what gives zero uncertainty at a WAM differential of zero, should only add to the uncertainty everywhere. And this should always occur to a greater extent than breaking the monodispersity would affect the resulting effective boost. In other words, the finding that the uncertainty on the average is always greater than any boost obtained should be a universal result.
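That claim is straightforward to spot-check numerically; here’s a quick Monte Carlo sketch (my own, with the 22 core marks drawn from a normal distribution instead of being monodisperse, and parameters chosen purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
core_mean, core_spread, booster_mark = 65, 8, 75
trials = 10_000

exceeds = 0
for _ in range(trials):
    core = np.clip(rng.normal(core_mean, core_spread, 22), 0, 100)
    marks = np.append(core, [booster_mark, booster_mark])
    boost = marks.mean() - core.mean()   # effective boost, now with spread
    if marks.std(ddof=1) > abs(boost):   # does uncertainty exceed the boost?
        exceeds += 1

print(exceeds / trials)   # 1.0 -- the uncertainty wins every single time
```

With any realistic spread in the core marks, the standard deviation of the 24 marks dwarfs the at-most-(y − x)/12 boost, so the fraction comes out at 1.0.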

I will end with one last visualisation of the data, which is just to reinforce how much of a tiny-gains game this WAM Boosting stuff is, more or less across the board.

Figure 4 shows ‘slices’ of the full datafield obtained at 5 different core WAM values (55, 65, 75, 85 and 95) plotted against the magnitude of boost for the 2 remaining courses. The point of this plot is to present what should be evident by comparing Fig. 2 and Fig. 3, which is that while the effective boost and uncertainty both rise with WAM differential, as you’d expect, the uncertainty is always larger. Indeed, both trends are linear, and the constant ratio of their slopes is the reason behind my statement earlier that the uncertainty is always 3.39 times larger than the boost, irrespective of core WAM and boost values.

I’ve chosen to highlight WAM differentials of +10 and +15 in Fig. 4 as a return to more sensible/likely WAM differences in real terms compared to the more extreme examples, e.g., core WAM 50 and booster marks of 95, earlier. As the graph shows, the boost for each of these two cases is the same across the board, regardless of core WAM, with the exception of very high core WAMs, where the full differential simply cannot be achieved because a course has a maximum score of 100.

For a WAM differential of +10, the maximum boost in final WAM that can be achieved by this approach, at any core WAM, is 0.83, and for +15, it is 1.25. Someone on a core WAM of 50 would move to final WAM of 51.25, someone on a core WAM of 75 would move to a final WAM of 76.25, and someone on a core WAM of 95 would move to a final WAM of 95.42 as they’d hit 100 on both courses and only see 0.42 of the 1.25.
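The flattening at a core WAM of 95 falls straight out of the 100-mark ceiling; a tiny helper (my own hypothetical name) reproduces the numbers:

```python
def boost_with_cap(core_wam, differential):
    """Effective boost for a target WAM differential, capped at a mark of 100."""
    booster_mark = min(core_wam + differential, 100)
    return (booster_mark - core_wam) / 12

# Final WAM for a +15 differential at three core WAMs
for core in (50, 75, 95):
    print(core, round(core + boost_with_cap(core, 15), 2))
# 50 51.25, 75 76.25, 95 95.42 -- only the last one hits the cap
```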

Ultimately, we really are just talking about a tiny lift here, one that, through mythology and most people’s weak intuition for statistics, has been totally blown up into an advantage that it’s really not. But that’s not surprising; most people also have a weak intuition for probability, and that’s why online gambling is such a ridiculously lucrative ‘industry’ (whether you’d call wrecking lives for profit an industry is up for debate), as is speculation on things like Dogecoin.

To quickly recap, my main points are:

- Yes, there is a boost, but the boost is relatively small in real terms. A mean taken over many values is always very resilient to a few outliers.
- The boost is less than 30% of the statistical uncertainty in the final WAM anyway, so if we simply reported it with the statistical uncertainty and to the correct number of significant figures, the effect would become irrelevant.
- We are totally looking at a problem caused by our obsession with single figure metrics, Goodhart’s law and perverse incentives, and I’d still advocate for getting rid of WAM/GPA entirely, as discussed in my last blog post.

Having done the maths now, I think the whole WAM boosting thing is best ignored by students. The better path is just to a) choose courses that interest and excite you and b) work hard on doing well in all your courses, and the rest will just take care of itself. I certainly wouldn’t take the risk of getting busted for academic misconduct, or worse, sending business the way of notorious thugs & organised crime syndicates, trying to squeeze out such a minuscule advantage.

Acknowledgements: Sue Hagon for getting me started down this rabbit hole and matplotlib for some very excellent free plotting software that does awesome quality data visualisations.