Last weekend’s NFL games were surprisingly nail-biting for somebody who has never shown any interest in football, with the very last few minutes and seconds of Sunday’s final two games (NO @ NE and ARI @ SF) dishing up a complete U-turn on what would have otherwise been some sublime predictions against the spread.
Now that the dust has settled, here’s a wee analysis of the NFL forecasts – and associated confidence levels – output by our mysterious quasi-sentient Win98-based forecasting system.
Below is exactly the same table that was output by the system last week, but I’ve colourised it to represent its performance against the actual outcomes of the weekend’s ensuing play.
Green means that a prediction was correct and therefore a bet – if we had placed one (we didn’t place any) – would have won us money. Red means that the prediction was incorrect. Very importantly, however, the table has been colourised according to the confidence of the predictions (also indicated as percentages in brackets, as included in the original post). High confidence (where the system would have led us to believe that we should place a bet) is represented by bold colouration, whereas low confidence (where the system was able to hazard a likely outcome, but probably would not have twisted our arm to wager any money) is represented by faint colouration.
Three different levels of colouration are used, representing arbitrary bands of low confidence (50-65%), medium confidence (65-75%) and high confidence (>75%).


Pretty isn’t it? Lots of red and green. Some good times. Some bad times. A bit like Christmas.
A bit of rearrangement might provide us with a more useful way of looking at all that data. The charts below show how many bets we would have won (and lost) if we were willing to accept a given minimum confidence level before placing a bet. In general, the charts show that as we require higher and higher confidence in our predictions, the number of bets we would be prepared to place goes down, but the proportion of bets we win goes up.
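For the curious, the rearrangement behind those charts is simple to sketch in code. Below is a minimal, hypothetical Python version: the prediction data is invented for illustration (the real figures live in the table above), but the counting logic is the same – for each minimum confidence threshold, tally the wins and losses among only those predictions that meet it.

```python
# Hypothetical sample data: (confidence, whether the prediction was correct).
# These values are invented for illustration, not taken from the real table.
predictions = [
    (0.82, True),
    (0.74, True),
    (0.73, True),
    (0.68, False),
    (0.61, True),
    (0.55, False),
]

def record_at_threshold(preds, min_confidence):
    """Return (wins, losses) among bets meeting the confidence threshold."""
    placed = [correct for conf, correct in preds if conf >= min_confidence]
    wins = sum(placed)
    return wins, len(placed) - wins

# Sweep the thresholds, as in the charts: fewer bets, better hit rate.
for threshold in (0.50, 0.65, 0.75):
    wins, losses = record_at_threshold(predictions, threshold)
    print(f">= {threshold:.0%}: {wins} won, {losses} lost")
```

With this invented data, raising the threshold from 50% to 75% drops us from six bets (four won) down to a single, winning bet – exactly the trade-off the charts illustrate.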
Both charts in this case show that setting a suitably high confidence level this week would have resulted in us winning every bet we placed. On the down side, that might have been just one bet. Still, better than losing on a cocksure whim. Actually, the left-hand graph – which corresponds to picking outright winners (the other shows betting against the spread) – reveals a “sweet spot” at around 73% confidence where, this week at least, we would have won 3 bets and lost none.
Obviously these findings should be taken with a pinch of salt, as they are based on just one week’s games. In the name of something like science, I will conduct a similar analysis for each of the upcoming weeks of play, perhaps including a more statistically meaningful cumulative analysis showing how our system is performing over all of the games to date.
Meanwhile, hopefully our inexplicably self-aware Windows 98 machine – which is now in a roughly stable state (if a little lacking in charisma) – will make some improvements to its secret algorithm, thus improving our performance along the way.
Justin