[Nate Silver] “One of the most important tests of a forecast — I would argue that it is the single most important one — is called calibration. Out of all the times you said there was a 40 percent chance of rain, how often did rain actually occur? If over the long run, it really did rain about 40 percent of the time, that means your forecasts were well calibrated.”
[Wasserman] It does not get much more frequentist than that. And if using Bayes’ theorem helps you achieve long run frequency calibration, great. If it didn’t, I have no doubt he would have used something else.
Well, I have major doubts, especially since non-calibrated forecasts are often far superior to calibrated ones.
Let’s take a look at some 10 day rain forecasts. Suppose over the next 10 days we have five days of rain (R) and five of sunshine (S). Say the actual weather is:

R, R, R, S, S, S, R, R, S, S

Now a Frequentist produces a forecast:

60%, 60%, 40%, 60%, 60%, 40%, 60%, 40%, 40%, 40%

This is indeed calibrated the way Wasserman would like. If you look at the days forecast at 60%, it rained on 3 out of 5 of them — sorry, on three of the five — which is 60% of the time. On the days forecast at 40%, it rained on 2 out of 5, which is 40% of the time.

Now here’s my Bayesian forecast:

90%, 90%, 90%, 10%, 10%, 10%, 90%, 90%, 10%, 10%

Well this isn’t calibrated at all: on the days forecast at 90% it rained every single time, and on the days forecast at 10% it never rained. A Frequentist would say I’d better adopt Frequentist principles or risk looking the fool.

But if we used these two distributions to predict the weather, we’d naturally go with the rule “predict rain on days when the odds favor rain”. Under that rule the Frequentist gets 4 of the 10 predictions wrong (days 3, 4, 5, and 8 above), while the Bayesian gets all 10 right! In fact any forecast over {40%, 60%} that is exactly calibrated on five rainy days out of ten gets exactly 60% of such predictions right, no matter how the days are arranged.
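The whole comparison fits in a few lines of code. Here is a minimal Python sketch; the sequences and helper functions are illustrative choices of mine, not anyone’s official method — one possible weather record with five rainy days, a calibrated Frequentist forecast over {40%, 60%}, and an uncalibrated Bayesian one:

```python
# Illustrative 10-day example: 1 = rain, 0 = sunshine.
rain  = [1, 1, 1, 0, 0, 0, 1, 1, 0, 0]                    # actual weather
freq  = [.6, .6, .4, .6, .6, .4, .6, .4, .4, .4]          # calibrated Frequentist forecast
bayes = [.9, .9, .9, .1, .1, .1, .9, .9, .1, .1]          # uncalibrated Bayesian forecast

def calibration(forecast, outcome):
    """For each forecast value p, the observed rain frequency on days forecast p."""
    buckets = {}
    for p, r in zip(forecast, outcome):
        buckets.setdefault(p, []).append(r)
    return {p: sum(rs) / len(rs) for p, rs in buckets.items()}

def accuracy(forecast, outcome):
    """Fraction correct under the rule: predict rain iff the odds favor rain."""
    predictions = [1 if p > 0.5 else 0 for p in forecast]
    return sum(pred == r for pred, r in zip(predictions, outcome)) / len(outcome)

print(calibration(freq, rain))   # {0.6: 0.6, 0.4: 0.4} -- perfectly calibrated
print(calibration(bayes, rain))  # {0.9: 1.0, 0.1: 0.0} -- not calibrated at all
print(accuracy(freq, rain))      # 0.6
print(accuracy(bayes, rain))     # 1.0
```

The Frequentist forecast passes the calibration test exactly and still mispredicts four days; the Bayesian forecast fails the test as badly as possible and mispredicts none.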
So I propose the following definition of a Frequentist:
A Frequentist is someone who can’t wrap their brain around the fact that the uncalibrated Bayesian forecast is predictively better than the calibrated Frequentist one, even though the latter satisfies Pr(rain) = “frequency of rain” while the former violates it completely.
This isn’t a minor point either. As I tried to explain here (but no one seems to have understood), everyone in Finance is trying to find calibrated distributions, like the Frequentist forecast above, to predict stock prices. What they should be striving for are distributions like the Bayesian one. If you succeed you’ll make a lot more money.