Last week was a big test for my new EI Index – it had finally reached the point where I was confident it was working well enough to post its predictions in public.

For those who haven’t come across it before, the EI Index is a mathematical system I have been developing for ranking football teams and predicting the outcomes of matches. Using the EI, it is possible to predict the odds of each team winning, drawing or losing its match.

So how did the EI do?

Well, this is the tricky bit, as the EI is a probability model. For linear models it is relatively simple to assess accuracy: you get an R-squared value showing how well your predictions match the observed results, and the higher the R-squared, the better you did.

For a probability-based model, though, you cannot do this. An obvious alternative is simply to look at whether the model’s predicted favourites won their matches. On this measure the EI performed fantastically well, correctly calling seven of its nine predictions for an overall success rate of 78%.

But we need to be careful here, as this can be a misleading way of looking at accuracy. Just because Manchester City had a 54% probability of beating Chelsea doesn’t mean they will win the match simply because it is the most probable result. Instead, it means that if this match were played 100 times, Manchester City would be expected to win 54 of them and not win the other 46.
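A quick simulation makes that interpretation concrete: repeatedly playing a hypothetical match where one side wins with probability 0.54 recovers roughly that win frequency. This is just an illustration of what the probability means, not part of the EI itself.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Play the hypothetical 54% match many times and count the wins.
n_matches = 100_000
wins = sum(random.random() < 0.54 for _ in range(n_matches))

print(f"Won {wins} of {n_matches} simulated matches "
      f"({wins / n_matches:.1%})")
```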

Rather than looking at the accuracy of the predicted favourite winning we really need to look at the accuracy of the predicted probabilities. Are teams ranked with a 50% chance of winning actually winning 50% of the time? Are teams ranked with a 25% chance of winning actually winning 25% of the time?
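As a sketch of what such a calibration check could look like (the function name and sample data below are my own invention, not part of the EI), one approach is to group predictions into probability buckets and compare each bucket’s average predicted probability with its observed win rate:

```python
def calibration_table(predictions, n_buckets=10):
    """Group (predicted probability, won) pairs into equal-width
    probability buckets and report the average predicted probability
    against the observed win rate in each bucket."""
    buckets = {}
    for prob, won in predictions:
        key = min(int(prob * n_buckets), n_buckets - 1)
        buckets.setdefault(key, []).append((prob, won))
    table = []
    for key in sorted(buckets):
        group = buckets[key]
        mean_pred = sum(p for p, _ in group) / len(group)
        observed = sum(w for _, w in group) / len(group)
        table.append((mean_pred, observed, len(group)))
    return table

# Hypothetical sample: (predicted win probability, 1 if the team won)
sample = [(0.55, 1), (0.52, 0), (0.48, 1), (0.25, 0), (0.22, 0), (0.27, 1)]
for mean_pred, observed, n in calibration_table(sample):
    print(f"predicted ~{mean_pred:.2f}, observed {observed:.2f} over {n} matches")
```

A well-calibrated model should show the two columns converging as the number of matches in each bucket grows.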

This can only be done by making lots and lots of predictions, so over the coming weeks I will keep making them until I have enough to estimate how good they are.

Overall, though, it is a very exciting start for the EI – bring on next week!

**Jonas - March 1, 2013**

Have you heard about the rank probability score (RPS)? It seems, to me at least, to be a reasonable measure of how well a probabilistic model fares.

You should check out the paper “Solving the Problem of Inadequate Scoring Rules for Assessing Probabilistic Football Forecast Models” by Anthony Costa Constantinou and Norman Elliott Fenton.
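For readers curious how the RPS works, here is a minimal sketch (my own illustration, not taken from the paper): for a match with the ordered outcomes home win, draw, away win, it compares the cumulative predicted probabilities against the cumulative observed outcome, so a lower score means a better forecast.

```python
def rps(probs, outcome_index):
    """Rank probability score for one match with ordered outcomes
    (e.g. [home win, draw, away win]). Lower is better; 0 is perfect."""
    observed = [0.0] * len(probs)
    observed[outcome_index] = 1.0  # the outcome that actually happened
    cum_pred = cum_obs = score = 0.0
    # Accumulate squared differences of the cumulative distributions.
    for p, o in zip(probs[:-1], observed[:-1]):
        cum_pred += p
        cum_obs += o
        score += (cum_pred - cum_obs) ** 2
    return score / (len(probs) - 1)

# A confident, correct forecast scores better (lower) than a vague one.
print(rps([0.8, 0.1, 0.1], 0))  # confident home win, home win happened
print(rps([0.4, 0.3, 0.3], 0))  # vague forecast, same result
```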

**Martin Eastwood - March 1, 2013**

Thanks Jonas, that looks really interesting!

**GoalImpact - March 12, 2013**

Hi Martin,

Thanks for sharing your approach. This appears more sound to me than most of the rankings around. One easy way to check your prediction quality would be the power stat. Just sort the predicted outcomes by probability and make an xy chart with the sorted predictions on the x axis and the cumulative real outcomes on the y axis. Then calculate the Gini on that graph.
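A sketch of that idea in code (the function names and toy data are mine, purely to illustrate the calculation): sort the matches by predicted probability, accumulate the real outcomes, and measure how far the resulting curve sits from the diagonal an uninformative model would trace.

```python
def gini(actual, predicted):
    """Unnormalised Gini: area between the cumulative-outcome curve
    (matches sorted by predicted probability, highest first) and the
    diagonal a completely uninformative model would give."""
    pairs = sorted(zip(predicted, actual), key=lambda pair: -pair[0])
    total = sum(actual)
    cumulative = running = 0.0
    for _, outcome in pairs:
        cumulative += outcome
        running += cumulative
    return (running / total - (len(actual) + 1) / 2) / len(actual)

def gini_normalised(actual, predicted):
    """Scale by the Gini of a perfect ordering, so 1 is a perfect
    ranking and 0 is no better than random."""
    return gini(actual, predicted) / gini(actual, actual)

# Toy data: 1 = the predicted favourite won, 0 = it did not.
outcomes = [1, 1, 0, 1, 0, 0]
probabilities = [0.9, 0.7, 0.6, 0.5, 0.4, 0.3]
print(gini_normalised(outcomes, probabilities))
```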

**Martin Eastwood - March 12, 2013**

Good idea, I like the sound of that. Will be giving that a try :)
