# A Bayesian analysis of Clinton’s 6 heads

Clinton recently won 6 coin flips during an Iowa caucus. On Facebook and in the news, I’ve only seen information about how unlikely this is – the chances of 6 heads are 1.56% with a fair coin.
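That 1.56% figure is just $(1/2)^6$; a one-line check:

```python
# Probability of 6 heads in 6 flips of a fair coin: (1/2)^6
p_six_heads = 0.5 ** 6
print(p_six_heads)  # 0.015625, i.e. about 1.56%
```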

Yes, 6 heads is unlikely, but these coin flips could have occurred by chance. I
mean, on the Washington Post coin flip demo, I got all heads on my 5th try.
Instead, it makes more sense to ask a different question: given we observed
these 6 heads, what are the chances this coin wasn’t fair?^{1}

This is a Bayesian approach: given our observations, what probabilities can we
infer? This is not the classic frequentist approach, which asks “how likely are
my observations given all the parameters of the model (the coin)?” The Bayesian
approach makes complete sense when *only* the observations are known.

To formulate this problem as a probability problem, we’ll have to define some variables:

- $\theta$ is the probability of getting a heads.
- $f_i$ is the $i$th flip and is either 0 or 1. $f_i = 1$ with probability $\theta$.
- $Y = \sum_{i=1}^6 f_i$ is the number of heads we saw. In Clinton’s case, $Y = 6$. $Y$ is a binomial random variable.
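With these definitions, the probability of any count of heads follows the binomial pmf. A minimal sketch using only the standard library (the function name `prob_heads` is my own):

```python
from math import comb

def prob_heads(y, n, theta):
    """Binomial pmf: probability of y heads in n flips when P(heads) = theta."""
    return comb(n, y) * theta**y * (1 - theta)**(n - y)

print(prob_heads(6, 6, 0.5))  # 0.015625 for a fair coin
```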

Then given the probability of a heads $\theta$, the probability of flipping $6$ heads with this binomial random variable is

$$P(Y = 6 \mid \theta) = \binom{6}{6}\theta^6(1-\theta)^0 = \theta^6$$

which is exactly what the Facebook posts and news articles focus on. In their analysis, they assume $\theta = 1/2$ and show how unlikely 6 heads is. However, it could be that Clinton got 6 heads by chance – maybe she got lucky enough to be in that 1.56%?
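The “got lucky” scenario is easy to check by simulation: with a fair coin, roughly 1.56% of six-flip runs come up all heads. A quick sketch (the seed and trial count here are arbitrary choices of mine):

```python
import random

random.seed(0)
trials = 100_000

# Count how many runs of 6 fair flips come up all heads.
all_heads = sum(
    all(random.random() < 0.5 for _ in range(6))
    for _ in range(trials)
)
print(all_heads / trials)  # should land near 0.0156
```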

To answer this question, we need a prior probability: a guess at how likely it is that Clinton made the coin unfair, and by how much. We’re guessing at something, and that guess will tend to bias our result! This is a big deal; we can’t be certain our estimate is unbiased.

To do that, let’s say the probability of a heads $\theta$ has the following probability density function (higher values in the graph below are considered more likely):