Entropy is the single most powerful statistical tool discovered to date. That’s a bold claim, and I intend to back it up in this post.
Many doubt that Statistical Mechanics and Classical Statistics have anything to do with each other. So I’ll lay this out step by step so you can see just how identical they really are. (more…)
Here are two formulations of the same problem. The mathematics is identical in either case, but the verbiage surrounding it illustrates entirely different statistical philosophies. The first is taken from this presentation (slide 8) but is essentially identical to a thousand other “explanations”: (more…)
The Replication crisis in science has brought out the Philosopher of Science in everyone. Great pronouncements are being made as to the way science should be done. So it’s worth recalling how Science achieved reproducible results in the past. (more…)
The previous three posts demonstrated how frequencies are a special case of more general Bayesian methods. This final post will consider the strengths and weaknesses of this special case by addressing three cons and two pros.
The last two posts (here and here) showed how Frequentists conflate the most likely frequency histogram with true probability distributions. The usual Bayesian complaint is that this limits statistics to repeated trials with stable frequency distributions. But that’s the least of the damage done. At a very practical level it causes Statisticians to get the conditions for success completely wrong.
From the previous post we need to construct a (Bayesian) probability distribution which has a high-probability manifold such that for most elements in .
Although simple in conception, it’s difficult to see how this is carried out or how it relates to Statistics as usually taught. This post lays out the key mathematics.