I'm not sure I even need to make a post about week 7. I don't know how I can possibly top the ending of Michigan vs. Michigan State. Will anything ever top that? I'm not sure. Here it is. It's ridiculous. I can't believe I watched it live.
In other news, I was having an issue with each of my dashboard pages (links up top on the right) where they were basically non-responsive. That issue has been rectified, and all three are testing great now. They're still only through week 6; I had a technical issue getting the simulation run this week, but I expect to have them updated tonight (Tuesday night).
I do have my model performance graphic updated for week 7. The graphic is different in one way: the horizontal lines are now drawn at each 10%-confidence boundary rather than simply at every 10th line. This should make it easier to eyeball how the model is doing. Ideally, the model would be right 90% of the time when it has 90% confidence*, 80% of the time when 80% confident, and so on.
Looking down the graphic below, the model went 10/11 on 90% games, 8/10 on 80%, 11/13 on 70%, 8/13 on 60%, and 6/11 on 50% games.
*While it's true that the model should be 90% correct in games where it has exactly 90% confidence, when we look at the 90-99% category as a whole, we should expect roughly 95% accuracy (the average confidence of all the games in the 90% bucket). The same goes for the 80-89% category, and so on.
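For anyone curious how this kind of bucketed calibration check works in practice, here's a minimal sketch in Python. It assumes a hypothetical list of (confidence, was_correct) pairs for each game; the function name and data format are mine, not from my actual pipeline.

```python
from collections import defaultdict

def calibration_buckets(games):
    """Group (confidence, was_correct) pairs into 10%-wide buckets
    and report (correct, total, accuracy) per bucket.

    `games` is a list of (confidence_percent, bool) tuples, e.g.
    [(93, True), (87, False), ...] -- a hypothetical format for
    illustration.
    """
    buckets = defaultdict(lambda: [0, 0])  # bucket floor -> [correct, total]
    for confidence, correct in games:
        floor = int(confidence // 10) * 10  # e.g. 93 -> 90, 87 -> 80
        buckets[floor][0] += int(correct)
        buckets[floor][1] += 1
    return {b: (c, n, c / n) for b, (c, n) in sorted(buckets.items())}

# Example with made-up games: three in the 90s bucket, two in the 80s.
sample = [(95, True), (92, True), (91, False), (85, True), (85, False)]
print(calibration_buckets(sample))
```

A well-calibrated model would show each bucket's accuracy landing near the middle of that bucket's confidence range, which is exactly the footnote's point about expecting ~95% accuracy in the 90-99% bucket.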