An Intuitive Guide To Statistical Significance

The purpose of this blog post is to provide an intuitive understanding of why statistical significance works the way it does.  The most important concept for that is to understand why small differences in measured results become more statistically significant as you get more measurements. And the reason for this is that the standard deviation of the mean decreases significantly as you get more measurements, even if the standard deviation of the population doesn’t change at all.

There are 5 main types of statistical significance tests.  They are

  • Z Test
  • 1 Sample T-Test
  • Paired T-Test
  • 2 Sample T-Test with Equal Variance
  • 2 Sample T-Test with Unequal Variance

This blog post does not walk through all the examples and equations for each of the types.  This post gives the overarching framework that ties them all together.  If you want to see more detail about each type of T-Test you might be interested in my book “Hypothesis Testing: A Visual Introduction To Statistical Significance”.

And if you want all the different equations and a quick example of when you would use each on a single page, check out this cheat sheet for hypothesis testing.  http://www.fairlynerdy.com/hypothesis-testing-cheat-sheets/

statistical significance cheat sheets

 

What Is Statistical Significance?

When you are testing for statistical significance, the question you are asking is, how unlikely would this outcome have been if it just happened by random chance?

You are likely aware that most real life events have some degree of scatter in the results.  If you take a bunch of measurements for similar things you will get some measurements that are high, some that are low, and some that fall in the middle.  For instance, if you are buying a prepackaged 5 lbs. bag of apples at the grocery store, and you weigh all 20 bags of apples that are on the shelf at the time, you will likely see some that are around 5 lbs., some that are 5.1 lbs, some that are 5.2 lbs, and maybe even a couple that are 4.9 lbs.

You might come in one day and measure 10 bags of apples and find that the average weight of those 10 bags is 5.2 lbs.  You might come in a couple of months later and measure 10 different bags and find that the average weight of those bags is 5.0 lbs.  Does that change in results mean that something changed to cause the new bags to be lighter?  For instance, maybe the apple packers are using more accurate machines to make sure all the weights are barely above the guaranteed 5 lbs.  Or maybe that change in results is just a random outcome of the particular bags you chose to measure on that particular day.   Statistical Significance is a way of determining which of those possibilities is true.

What we are doing with statistical significance calculations is determining how unlikely an outcome was to occur by random chance, and then deciding if that probability is unlikely enough that we can conclude something other than random chance caused that outcome.

The two most important things in a statistical significance calculation are the distance the average value of your measured data is from what you are comparing it against, and the standard deviation of what you are measuring.  It is easy to understand how the difference in measurements is important.  For instance, in the apple example, if my first group of apples weighs 7 lbs. and the second group weighs 5.0 lbs., there is more likely to be a significant difference than if my first group weighs 5.2 lbs. and my second group weighs 5.1 lbs.  The greater the difference in measurements, the greater the significance, assuming that all other values are equal.

 

What Is The Standard Deviation?

Standard deviation is the second important topic in calculating statistical significance.  It is worth going over how the standard deviation works.  Standard deviation is a way of measuring how spread out your measured values are. If the values you are measuring are all clustered together, they will have a low standard deviation.  If they have a lot of variation in the measurements, they will have a high standard deviation.

For example, if you had measured 5 bags of apples and their weights were 5.0, 5.1, 5.1, 5.2, and 5.2 lbs, those results are a lot more tightly clustered, and hence lower standard deviation, than if you had measured 5 bags and gotten weights of 5.0, 5.6, 5.9, 6.2, 6.5 lbs.

The image below shows what we typically think of when we think about standard deviation.  There is a mean value at the center of a normal curve.   If you make another measurement of the same type of data it will fall somewhere on that normal curve, with decreasing likelihood the farther you get away from the mean value.

typical normal curve

With a typical normal curve

  • 68 percent of the data will fall within 1 standard deviation of the mean
  • 95 percent of the data will fall within 2 standard deviations
  • 99.7 percent of the data will fall within 3 standard deviations

So if we know the mean value and standard deviation of our data, we know the distribution of results that we expect to get.  We know that if we get a result that is 2 standard deviations away from the mean, that it is a rare event because 95% of the results will fall within 2 standard deviations of the mean.
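If you want to check those percentages yourself, here is a minimal simulation sketch in Python (the 5.2 lbs mean and 0.2 lbs standard deviation are just toy values borrowed from the apple example used later in this post):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy numbers: bag weights normally distributed around 5.2 lbs with std dev 0.2 lbs
weights = rng.normal(loc=5.2, scale=0.2, size=1_000_000)

for k in (1, 2, 3):
    fraction = np.mean(np.abs(weights - 5.2) < k * 0.2)
    print(f"within {k} standard deviation(s) of the mean: {fraction:.1%}")
# prints roughly 68.3%, 95.4%, 99.7%
```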

With hypothesis testing, what we are doing is turning that chart around and asking the question in reverse.  Now what we are doing is putting a normal curve around our measured data, and asking the question “How likely is it that this measured data came from the same source as the reference data?”  We are asking how many standard deviations the reference value is from our mean value.   This is shown in the chart below.

reverse normal curve

Now, this might seem like a useless distinction.  If the reference value was two standard deviations away from the measured value, then the measured value will be two standard deviations away from the reference value.   That is completely true, but only if we have a single piece of measured data.

 

 

Statistical Significance Becomes Important When You Have More Than 1 Measurement

The one piece of information that you can’t figure out just using a typical normal curve is what to do with additional measurements.   Let’s say that you have a mean and standard deviation for a set of data. For instance, maybe the bags of apples one week weigh 5.2 lbs on average and have a standard deviation of 0.2 lbs.  If you go back to the store a month later and weigh a bag of apples that is 2 standard deviations heavier, at 5.6 lbs., you will think that is unusual because a 2 standard deviation difference is a rare event.  But what if your next measurement is 5.4 lbs, and your next one after that is 5.1 lbs.?  Do the additional measurements make you more or less convinced that the new bags of apples are different from the old bags?

The effect of increased quantity of measurements is the single most important concept in understanding statistical significance.  If you understand this, then you understand statistical significance. The rest of it is just knowing when to apply which equation.

The concept is this: We do not care about the standard deviation of our data. What we care about is the standard deviation of the mean value of all of our measurements. And that standard deviation of the average can change as you make additional measurements.

This is shown in the chart below. Like the chart above, it has a normal curve centered on the mean value of the measured data. However, because there are more measurements in this data, rather than just the single measurement in the chart above, this normal curve is narrower.

Since the normal curve is narrower, the reference value falls farther away from the measured average, in terms of the number of standard deviations.  As a result, we will conclude that it is less likely that our measured values and the reference value came from the same set of data.  I.e. we are less likely to get a 4 standard deviation difference than a 2 standard deviation difference unless there is an actual change that drove the difference (i.e. we are using a different group of people to pack the bags of apples this week, and they are packing them heavier.)

 

Why Does The Standard Deviation Of The Mean Decrease With More Measurements?

We stated that what we are interested in is the standard deviation of the mean value of all of the measurements we make.  The standard deviation of that average value decreases as you increase the number of measurements made.  Why is that? Fortunately, we don’t have to use complicated math to explain this topic.  You have almost certainly seen it before, although you probably didn’t think about it in these terms.

The way to understand the standard deviation of the average result is to think about what happens when you roll a single die vs what happens when you roll 2 or more dice.

If you roll a single die, you are equally likely to get a 1, 2, 3, 4, 5, or 6.  Each value will come up one-sixth of the time, so the idealized probability distribution of a single die rolled six times has each value coming up exactly once.

probability distribution of a single die

There is no way to predict the outcome of a single die roll.   No outcome is more likely than any other.

Now, what happens if you roll two dice, and add their values? Well, there are 36 different permutations of die rolls that you can get out of two dice, 6 values from the first die, multiplied by 6 values from the second die.  However, there are only 11 different sums that you can get from those two dice, the values of 2 through 12.

Those 36 different die permutations don’t map evenly onto the 11 different possible sums. You are more likely to get values in the middle of the range than values on the edges. The probability distribution of the sum of two dice is shown below.

probability distribution of 2 die

All of a sudden, you can start to make predictions on the outcome of rolling 2 dice, even though you can’t do it for any specific die.  The single most likely sum is a 7, which is why casinos make that value a number that they win on in the game of craps.  This effect, where the sum of two dice has an unequal probability distribution, also shows up in the gameplay of a lot of board games; for instance, it is the key mechanic in the game “Settlers Of Catan”.

settlers of catan

For our purposes, the key point is that the probability of outcomes is more concentrated in the center of the graph for two die rolls than it is for a single die roll. That concentration of probability into the center of the graph doesn’t stop with 2 dice.  If you sum the value of 3 dice there are 216 different permutations of rolls (6 * 6 * 6) mapped onto 16 possible values (3-18).   The probability distribution for 3 dice summed is shown below.

this shows how to calculate standard deviation for the mean of rolling 3 dice

Even though it isn’t quite as visually obvious as going from 1 die to 2 dice, summing 3 dice has a greater concentration of probability in the center of the graph than the sum of 2 dice. That process would continue if we kept rolling additional dice.

The Probability Distribution Of The Average Result

So far with these dice, we’ve talked about the sum of the dice values.   Now let’s talk about the average value.  To calculate the average value, just take the sum and divide by the number of dice.  So for instance, if you rolled a sum of 7 on 2 dice, then your average value was 3.5.

The probability distribution of the average value for 1, 2, and 3 dice rolled is shown in the plot below.

as you increase the number of samples, the standard deviation of the mean decreases

This plot makes a few things obvious:

  • No matter how many dice are rolled, the mean value is always centered on 3.5. This is because the average of all the numbers on a single die (the average of 1, 2, 3, 4, 5, 6) is 3.5
  • As you increase the number of dice, the probability of getting an average value that is on the edges of the range decreases dramatically. The bulk of the probability shifts to the center of the graph

 

Why Is This Important For Statistical Significance?

Let’s pause for a second and reiterate why we care about the above result, in terms of statistical significance.  What we see is that by making multiple measurements, we can be more confident in their average value than we can in the result of any single measurement.  The range of possible outcomes for the average of all measurements is the same as the range of possible outcomes for a single measurement, but the distribution over that range is a lot more clustered in the center.  What we are seeing is that the standard deviation of the mean of multiple measurements is lower than the standard deviation of a single measurement.   Since statistical significance is about how many standard deviations away from the mean your results are, this is very important.

 

Calculating The Standard Deviation Of The Average Result

In this plot

standard deviation of the mean of three different dice rolls

It might be surprising that the probability for every single possible average for 3 dice is lower than its counterpart for 2 dice, and also for 1 die. That is because as you increase the number of dice, the probability is spread among more possible outcomes. There are only 6 possible outcomes with 1 die, but 11 possible outcomes with 2 dice and 16 possible outcomes with 3 dice.   In order to get consistent probability distributions with different numbers of dice, we can use a histogram and ‘binning’ to make sure the probability is spread among the same number of outcomes.

That wouldn’t plot very well for 1, 2 and 3 dice; however, here is a binned result of the probability distribution for the average roll when rolling 5, 10, and 20 dice.

standard deviation of the mean of dice rolls

As you can see, the greater the number of rolls, the more the probability distribution of the average value is clustered around the average value of 3.5 and less at the edges. This means that the standard deviation of the average is decreasing as you increase the number of samples used to calculate it.

We can, in fact, calculate the standard deviation of the average for a given number of dice.  For a single die, this is just the population standard deviation of [1, 2, 3, 4, 5, 6], which is 1.7078.   For two dice, it would be the population standard deviation of the 36 possible average values (11 distinct values) of [1, 1.5, 1.5, 2, 2, 2, 2.5, 2.5, 2.5, 2.5, 3, 3, 3, 3, 3, 3.5, 3.5, 3.5, 3.5, 3.5, 3.5, 4, 4, 4, 4, 4, 4.5, 4.5, 4.5, 4.5, 5, 5, 5, 5.5, 5.5, 6], which is 1.207615.   Fortunately, there are more efficient ways to calculate the standard deviation than listing out every single value (by using a weighted average of the squared difference from the mean).  When you calculate the standard deviation of the average for all numbers of dice between 1 and 20, you get the plot below.
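If you would rather let the computer do the enumeration, here is a small Python sketch that reproduces those numbers by brute force.  As a preview of the curve fit discussed below, it also prints the single-die value divided by the square root of n next to each result:

```python
import itertools
import statistics

def std_of_dice_average(n_dice):
    # Enumerate all 6**n_dice equally likely rolls and take the
    # population standard deviation of their average values.
    averages = [sum(roll) / n_dice
                for roll in itertools.product(range(1, 7), repeat=n_dice)]
    return statistics.pstdev(averages)

sigma = std_of_dice_average(1)   # 1.7078..., the single-die value above
for n in (1, 2, 3, 5):
    print(n, round(std_of_dice_average(n), 6), round(sigma / n ** 0.5, 6))
# the last two columns match: sigma / sqrt(n)
```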

standard deviation curve fit of dice rolls

As expected, we see that the standard deviation of the average continues to drop as we increase the number of samples.  The other notable thing about this plot is that the rate of change begins to level off.  Adding a few more dice drastically decreased the standard deviation of the average at the beginning, but it takes a greater and greater number of dice to get the same amount of change.

In practical terms, what this means for statistical significance is that there is a diminishing return to getting more data to use in your statistical significance calculation.  At the beginning, additional data will make a large change; eventually, however, the cost of acquiring additional data will outweigh the benefit.

The plot below is the same as the plot above, except with a regression curve fit on the data.

curve fit of standard deviation

What we can see is that the regression fit the data exactly. The standard deviation of the average is equal to the standard deviation of the population, multiplied by the number of data points raised to the power of negative one-half.   A power of one-half is the same as a square root.  A negative power is the same as dividing by that number. Rewriting the equation to incorporate those changes:

standard deviation of the average = 1.7078 * x^(-1/2) = 1.7078 / √x

Here the 1.7078 is the standard deviation of the population of average values with 1 sample (i.e. [1, 2, 3, 4, 5, 6]). We will denote that with a sigma. Here ‘x’ is the number of dice.   In later problems instead of ‘x’ denoting the number of dice, the equations use ‘n’ to denote the number of measurements. If we use those symbols, this equation becomes

standard deviation of the average = σ / √n

Although it varies slightly depending on the problem in question, that sigma and the square root of n appear in pretty much every variation of the statistical significance equations.  What they are demonstrating is that the standard deviation of the average of the samples is the standard deviation of the samples (sigma) divided by the square root of the number of samples. Or to put it another way, as you increase the number of samples, the resulting average of your measurements is increasingly likely to be close to the true population average, just like we saw with the dice rolling.

One type of statistical significance test is the Z-test.  The equation for the Z-test is shown below

z = (x̄ − u0) / (σ / √n)

Where

  • x̄ is the sample mean
  • u0 is the population mean
  • σ is the population standard deviation
  • n is the test sample size

As you can see, the sigma divided by the square root of n that we saw as we measured the average roll of an increasing number of dice is a key part of the equation.
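As a concrete sketch, here is the Z-test computed by hand in Python, reusing the apple numbers from earlier as toy inputs (treating 5.0 lbs as the reference mean and assuming the 0.2 lbs population standard deviation is known):

```python
import math

# Toy inputs: reference mean 5.0 lbs, a sample of 10 bags averaging
# 5.2 lbs, and an assumed known population std dev of 0.2 lbs
x_bar, u0, sigma, n = 5.2, 5.0, 0.2, 10

z = (x_bar - u0) / (sigma / math.sqrt(n))
print(z)   # about 3.16: the sample mean is 3.16 standard deviations of the mean away
```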

 

 

What Is R Squared And Negative R Squared

R Squared – A Way Of Evaluating Regression

Regression is a way of fitting a function to a set of data.  For instance, maybe you have been using satellites to count the number of cars in the parking lot of a bunch of Walmart stores for the past couple of years.  You also know the quarterly sales that Walmart had during that time frame from their earnings report.  You want to find a function that relates the two, so that you can use your satellites to count the number of cars and predict Walmart’s quarterly earnings.  (In order to get an advantage in the stock market)

In order to generate that function you would use regression analysis.  But after you have generated the cars-to-earnings function, how can you tell whether it is any good?  After all, if you are using it to try to predict the stock market, you will be betting real money on it.  You need to know: is your model a good fit?  A bad fit?  Mediocre?  One commonly used metric for determining goodness of fit is R squared, or more formally, the coefficient of determination.

This post goes over R2, and by the end you will understand what it is and how to calculate it, but unfortunately you won’t have a good rule of thumb for what R2 value is good enough for your analysis, because it is entirely problem dependent.  Any R squared value greater than zero means that the regression analysis did better than just using a horizontal line through the mean value.  In the rare cases you get a negative r squared value, you should probably rethink your regression analysis, especially if you are forcing an intercept.

 

What Is R Squared?

We will get into the equation for R2 in a little bit, but first what is R squared?   It is how much better your regression line is than a simple horizontal line through the mean of the data.  In the plot below the blue line is the data that we are trying to generate a regression to and the horizontal red line is the average of that data.

r squared is comparing against a horizontal line

The red line is the value that gives the lowest summed squared error to the blue data points, assuming you had no other information about the blue data points other than their y values.  This is shown in the plot below.  For the blue data points, only the y values are available.  You don’t know anything else about those values.   You can download the Excel file I used to generate these plots and tables here.

r squared - the best you can do is mean value if you have no other information

If you want to select a value that gives you the lowest summed squared error, the value that you would select is the mean value, shown as the red triangle.  A different way to think about that assertion is this.  If I took all 7 of the y points (0, 1, 4, 9, 16, 25, 36), rearranged them into a random order, and made you guess the value of all 7 points before revealing any of them, what strategy would give you the minimum sum squared error?   That strategy is to guess the mean value for all the points.

With regression the question is, now that you have more information (the X values in this case) can you make a better approximation than just guessing the mean value?  And the R squared value answers the question, how much better did you do?

That is actually a pretty intuitive understanding.  First calculate how much error you would have if you don’t even try to do regression, and instead just guess the mean of all the values.  That is the total error.  It could be low if all the data is clustered together, or it could be high if the data is spread out.

SS stands for summed squared error, which is how the error is calculated.  To get the total sum squared error you

  • Start with the mean value
  • For every data point, subtract that mean value from the data point value
  • Square that difference
  • Add up all of the squares.  This results in the summed squared error

how to calculate sum squared error

 

Get The Regression Error

Next calculate the error in your regression values against the true values.  This is your regression error.  Ideally the regression error is very low, near zero.

sum squared error for a linear regression

The ratio of the regression error against the total error tells you how much of the total error remains in your regression model.  Subtracting that ratio from 1.0 gives how much of the error you removed using the regression analysis.  That is R2.

In actual equation form it is

R2 = 1 − (SS_regression / SS_total)

To get the total error, you subtract the mean value from each data point, and square the results
SS_total = Σ (yi − ȳ)²

For the sum squared regression error, the equation is the same except you use the regression prediction instead of the mean value

SS_regression = Σ (yi − ŷi)²
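Putting the three formulas together, a minimal Python version of the R2 calculation might look like this (the function name is mine, and the sample values come from the worked example further down the post):

```python
def r_squared(y_true, y_pred):
    mean_y = sum(y_true) / len(y_true)
    ss_total = sum((y - mean_y) ** 2 for y in y_true)                  # error vs. the mean
    ss_regression = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))  # error vs. the fit
    return 1 - ss_regression / ss_total

# Using the worked example below: y is x squared, fit with y = 6x - 5
ys = [x ** 2 for x in range(7)]
print(r_squared(ys, [6 * x - 5 for x in range(7)]))   # 0.9231...
```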

What is a Good R Squared Value?

In most statistics books, you will see that an R squared value is always between 0 and 1, and that the best value is 1.0.   That is only partially true. The lower the error in your regression analysis relative to total error, the higher the R2 value will be.  The best R2 value is 1.0.  To get that value you have to have zero error in your regression analysis.

r squared value of zero

However R2 is not truly limited to a lower bound of zero.  You can get a negative r squared value.

What Does A Negative R Squared Value Mean?

For practical purposes, the lowest R2 you can get is zero, but only because the assumption is that if your regression line is not better than using the mean, then you will just use the mean value.  However if your regression line is worse than using the mean value, the r squared value that you calculate will be negative.

For instance, let’s say that you wanted to make a prediction on the population of one of the states in the United States.   I am not giving you any information other than the population of all 50 states, based on the 2010 census.  I.e.  I am not telling you the name of the state you are trying to make the prediction on, you just have to guess the population (in millions) of all the states in a random order.   The best you could do here is take the mean value.  Your total squared error would be 2247.2

If you used something other than the mean, for instance the median, the summed squared error would be 2298.2

Which when converted into R2 is

R2 = 1 − (2298.2 / 2247.2) = −0.0227

And you get a negative R2 number

 

Another Way To Get A Negative R Squared Value

The most common way to end up with a negative r squared value is to force your regression line through a specific point, typically by setting the intercept.  The way the ordinary least squares regression equations work is by making a line that passes through a specified point and has the lowest possible sum squared error while still passing through the specified point.

If the point that is chosen is the mean value of x and y, the resulting line will have the lowest possible sum squared error, and the highest possible R-squared value.  If you chose the mean x and y you cannot get a negative r squared value.

However if you specify a different point for the regression line to go through, you will still get the line that generates the lowest sum squared error through that point, but that doesn’t mean that line is good.  For instance in this chart from Excel

this shows how you can get a negative r squared value by specifying an intercept

The intercept for both of the regression lines was set at zero.  For the regression line for the blue points, that is not too far off of the best possible regression line, and so the resulting R2 value is positive.  However for the regression line for the red points, the true intercept should be around 120, so setting the intercept to 0 forces the regression line far away from where it should be.  The result is that the regression sum squared error is greater than if you had used the mean value, and hence a negative r squared value is the result.

The assertion that the R squared value has to be greater than or equal to zero is based on the assumption that if you get a negative R squared value, you will dump whatever regression calculation you are using and just go with the mean value.
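Here is a small numeric sketch of the forced-intercept case, with made-up data whose true intercept is near 120, along the lines of the red points in the chart above:

```python
import numpy as np

x = np.arange(1.0, 11.0)
y = 120.0 + 2.0 * x               # data whose true intercept is near 120

# Least squares forced through the origin: pick m to minimize ||y - m*x||
m = (x @ y) / (x @ x)
predictions = m * x

ss_total = np.sum((y - y.mean()) ** 2)
ss_regression = np.sum((y - predictions) ** 2)
print(1 - ss_regression / ss_total)   # about -92, far below zero
```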

The takeaway for R2 is:

  • An R2 of 1.0 is the best. It means you have no error in your regression.
  • An R2 of 0 means your regression is no better than taking the mean value, i.e. you are not using any information from the other variables
  • A Negative R2 means you are doing worse than the mean value. However maybe summed squared error isn’t the metric that matters most to you and this is OK.  (For instance, maybe you care most about mean absolute error instead)

R Squared Example

As an example of how to calculate R squared, let’s look at this data

example data for linear regression

 

This data is just the numbers 0 through 6, with the y value being the square of those numbers.  The linear regression equation for this data is

y = 6x-5

and is plotted on the graph below

linear regression line

Excel has calculated the R2 of this equation to be .9231.  How can we duplicate that manually?

Well the equation is

R2 = 1 − (SS_regression / SS_total)

So we need to find the total summed squared error (based off the mean) and the summed squared error based off the regression line.

The mean value of the y values of the data (0, 1, 4, 9, 16, 25, 36) is 13

linear regression sample data

To find the total summed square error, we will subtract 13 from each of the y values, square that result, and add up all of the squares.  Graphically, this is shown below.  At every data point, the distance between the red line and the blue line is squared, and then all of those squares are summed up

how to calculate sum squared error

The total sum squared error is 1092, with most of the error coming from the edges of the chart, where the mean is the farthest away from the true value

sum squared error table

Now we need to find the values that our regression line of y = 6x-5 predicts, and get the summed squared error of that.  For the sum squared value, we will subtract each y regression value from the true value, take the square, and sum up all of the squares

regression error table

So the summed squared error of the linear regression is 84, while the total summed squared error, based on the mean value, is 1092.

Plugging these numbers into the R2 equation we get

R2 = 1 − (84 / 1092) = .9231

Which is the same value that Excel calculated.

A different way to think about the same result would be that we have 84/1092 = 7.69 % of the total error remaining.  Basically, if someone had just given us the y values, and then told us that they were going to present those y values in a random order and we had to guess what they all were, the best guess we could have made was the mean for each one.   But if they now give us the X value, and tell us to try to guess the Y value, we can use the linear regression line and remove 92.31% of the error from our guess.

 

But Wait, Can’t We Do Better?

We just showed a linear regression line that produced an R squared value of .9231 and said that that was the best linear fit we could make based on the summed squared error metric.  But couldn’t we do better with a different regression fit?

Well the answer is yes, of course we could.  We used a linear regression, i.e. a straight line, on this data.  However the data itself wasn’t linear.  Y is the square of x with this data.  So if we used a square regression, and in fact just used the equation y = x², we get a much better fit, which is shown below

x squared regression line

Here we have an R2 of 1.0, because the regression line exactly matches the data, and there is no remaining error.   However the fact that we were able to do this is somewhat beside the point for this R2 explanation.  We were able to find an exact match for this data only because it is a toy data set.  The Y values were built as the square of the X values, so it is no surprise that making a regression that utilized that fact gave a good match.

For most data sets, an exact match will not be able to be generated, because the data will be noisy and not a simple equation.  For instance, an economist might be doing a study to determine what attributes of a person correlate to their adult profession and income.  Some of those attributes could be height, childhood interests, parents’ incomes, school grades, SAT scores etc.   It is unlikely that any of those will have a perfect R2 value; in fact the R squared value of some of them might be quite low.  But there are times that even a low R2 value could be of interest

Any R2 value above 0.0 indicates that there could be some correlation between the variable and the result, although very low values are likely just random noise.

 

 

An Odd Special Case For R Squared

Just for fun, what do you think the R2 of this linear regression line for this data is?

r squared for a horizontal line

Here we have a pure horizontal line, all the data is 5.0.

As it turns out, the R squared value of a linear regression on this data is undefined.   Excel will display the value as N/A

r squared undefined

What has happened in this example is that the total summed squared error is equal to zero.   All the data values exactly equal the mean value.  So there is zero error if you just estimate the mean value.

Of course, there is also zero error for the regression line.  You end up with a zero divided by zero term in the R squared equation, which is undefined.

R2 = 1 − (0 / 0) = undefined

Binomial Equation – Made Easy To Remember

The binomial equation calculates the probability of getting a certain outcome after a number of discrete events.  For instance, if your team has a 60% chance of winning any single game against another team, what are the odds that they will win at least 4 out of 7 games in the championship series against that other team?

The most common equation you see for the binomial distribution is

f(k, n, p) = (n! / (k! * (n-k)!)) * p^k * (1-p)^(n-k)

As far as what the letters mean

  • f – just means it is a function of k, n, p
  • k – This is the number of successful events, i.e. the total number of heads in the coin flips, or the total number of outcome A out of A/B
  • n – This is the total number of events. This would be all the flips, regardless of the outcome
  • p – This is a decimal number between 0 and 1 inclusive representing the probability of a successful event on a single trial. For instance if you were rolling a die and it only counted if you got a 6, then p would be 1/6 = .1667

 

However I think the equation is easier to remember when written in a slightly different way, as

P = ((A + B)! / (A! * B!)) * pa^A * pb^B

Here A is the number of times outcome A occurs, so it is equal to k from the previous equation.  B is the number of times outcome B occurs, and is equal to (n-k) when there are two events.  pa is the probability that outcome A occurs and pb is the probability that outcome B occurs

 

What You Already Know In The Binomial Equation

The reason I like the equation in this form is that it builds on information that you already know.  For instance, say that you have an outcome that has a 1/6 chance of occurring, like rolling a 6 on a die.  What are the odds that event will occur 10 times in a row?   Well the odds of that happening are 1/6 raised to the 10th power.  You likely already know that, and in equation form it would be

P = (1/6)^10

Now let’s say that you have two different outcomes, A and B.   What is the probability that outcome A will occur A times in a row, and then outcome B will occur B times in a row?   This is just the probability of A, raised to the power of A times, multiplied by the probability of B, raised to the power of B times.

P = pa^A * pb^B

For instance, if you wanted to calculate the odds of rolling a die 5 times in a row and getting 6 every time, and then flipping a coin 4 times in a row and getting heads every time it would just be  1/6 raised to the 5th power, multiplied by 1/2 raised to the 4th power.  That is simple enough, easy to understand, and frankly you don’t need a special equation in order to remember it.  And that is the second half of the binomial equation.

P = ((A + B)! / (A! * B!)) * pa^A * pb^B

The second half of the binomial equation is just   “If outcome A had to occur all the times in a row, then outcome B all the times in a row, what is the probability that string of outcomes would occur?”

 

1st Half Of Binomial Equation

Ok, that is the second half of the equation with the exponentials, what about the first half of the equation?

(A + B)! / (A! * B!)

This is the combination equation.  There are a lot of interesting things to know about the combination equation, and those are discussed in this blog post on combinations and permutations.  For our purposes right now, the best way to think of this part of the equation is as such:   It is the number of different orders of ways you can do outcome A and outcome B.   i.e. it is the number of different tries you get at making your desired outcome happen.

As an example, let’s say that outcome A was rolling a 6 on a die, and outcome B was flipping a heads on a coin, and you needed to do that two times each.   If you had to roll a 6 twice in a row, and flip the coin twice in a row the probability would be

P = (1/6)^2 * (1/2)^2

As we showed before.   However now let me tell you that you get to try again, however many times you can reorder the events.  For instance, the first trial was

  • Roll-Roll-Flip-Flip

So that is one order.  But you can also try again

  • Roll-Flip-Roll-Flip

And

  • Roll-Flip-Flip-Roll

And so on.  There are in fact 6 unique orderings of 2 rolls and 2 flips.   This can be calculated by 4! / (2! * 2!) = 24 / (2 * 2) = 6.   Here the 4! is because the total number of events is 4 (2 + 2), and the two 2! terms are because there are two trials of event A, and two trials of event B.

The 6 unique orders of rolls and flips, as A’s and B’s are

  • A-A-B-B
  • A-B-A-B
  • A-B-B-A
  • B-A-B-A
  • B-B-A-A
  • B-A-A-B

So the first half of the equation

(A + B)! / (A! * B!)

Tells you the number of times you get to try to get a specific outcome, and the second half of the equation

pa^A * pb^B

Tells you the probability of that outcome if done a single time.   Combined they give the full probability of an outcome

P = ((A + B)! / (A! * B!)) * pa^A * pb^B

 

Example Problem Solved

Let’s take it back to the starting example.  Your team has a 60% chance of winning.  What are the odds they will win exactly 4 games in 7?

  • Outcome A is that your team wins. This occurs 4 times, and has a probability of .6
  • Outcome B is that the other team wins. This occurs 3 times and has a probability of .4.

There are 7 events in all.  The equation becomes

P = (7! / (4! * 3!)) * .6^4 * .4^3 = 35 * .1296 * .064 = .2903

So the odds of your team winning exactly 4 games are 29.03%.

 

What Are The Odds Of At Least 4 Games?

But wait, the real question we asked was the odds of winning at least 4 games, not exactly 4 games.  Your team could win 5 games, or 6 games, or all 7.  That means we have to calculate the answer again for each of those possible number of wins.  That calculation is done below

binomial distribution table

The end result is that if your team has a 60% chance of winning any single game, you expect that they have a 71.02% chance of winning at least 4 games in a 7 game series.
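Here is a short Python check of both numbers, using the same binomial formula:

```python
from math import comb

def win_probability(k, n, p):
    # probability of winning exactly k of n games with win chance p
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

print(win_probability(4, 7, 0.6))                            # 0.2903...
print(sum(win_probability(k, 7, 0.6) for k in range(4, 8)))  # 0.7102...
```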

If you increased the number of games, and went best 5 of 9, or best 6 of 11 the likelihood of the team with the higher winning percentage winning the series increases.

 

Multinomial Equation

There is another reason why I like the binomial equation rewritten the way we did it.  That is because it is easy to expand that equation from a Binomial equation to a Multinomial equation.  I.e. it is easy to change from 2 outcomes to 3 or more outcomes.

What would be a multinomial event?  Well for instance, instead of having two teams play, think of three people racing in a track meet.  So instead of calculating the odds of one team winning 4 games out of 7, what are the odds that person A will win 4 races out of 7, person B will win 2 races out of 7, and person C will win one?

Here assume the odds of person A, B, and C winning any given race are 50%, 40%, and 10% respectively.

Getting the equation for 3 possible outcomes from the 2 possible outcome equation is almost trivial.  It is

P = ((A + B + C)! / (A! * B! * C!)) * pa^A * pb^B * pc^C

Once again, the second half of the equation is the probability that that specific chain of outcomes will occur, and the first half of the equation is the number of different paths you can take to get there.

For this specific racing problem the equation becomes

P = (7! / (4! * 2! * 1!)) * .5^4 * .4^2 * .1^1 = 105 * .0625 * .16 * .1 = .105
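So that exact split of race wins happens about 10.5% of the time.  Here is a short Python sketch of the same multinomial calculation:

```python
from math import factorial

wins  = {"A": 4, "B": 2, "C": 1}
probs = {"A": 0.5, "B": 0.4, "C": 0.1}

orderings = factorial(sum(wins.values()))   # 7! different orders of all races
for w in wins.values():
    orderings //= factorial(w)              # / (4! * 2! * 1!) = 105

p_single_ordering = 1.0
for person, w in wins.items():
    p_single_ordering *= probs[person] ** w # .5^4 * .4^2 * .1^1

print(orderings * p_single_ordering)        # 0.105
```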

 

Thanks For Reading

Thanks for reading this post.  Please comment if there is anything that is not clear.   I’ve also written a longer Kindle book on this topic with a number of additional examples that you can find here.

Dynamic Programming Time Complexity

What Is The Time Complexity Of Dynamic Programming Problems ?

I always find dynamic programming problems interesting.  Find a way to use something that you already know to save you from having to calculate things over and over again, and you save substantial computing time.

But just how much computing time are you saving?  One of the classic questions that you will get on any programming interview is “What is the time and space complexity of your problem?”  That is: if you start ramping up the size of the problem with bigger numbers, will the solution get a lot slower really fast, or will it take really large numbers before the solution takes too long to be practical?

The classic way of doing dynamic programming is to use memoization.   Memoization (which looks a lot like memorization, but isn’t) means to store intermediate answers for later use.  You are increasing the amount of space that the program takes, but making the program run more quickly because you don’t have to calculate the same answer repeatedly.

 

The Solution, If You Want To Skip Over The Rest Of The Post

The rest of this post goes through an example memoization program to show how it works and build an intuitive understanding of the time/space complexity.  But if you are already familiar with these types of problems and just want the answer, it is that the time and space complexity of dynamic programming problems solved using recursive memoization are nearly always equal to each other.  And they are equal to solving every sub-problem exactly one time.  So if you figure out the maximum number of sub-problems, that is your solution.

A lot of these problems can be represented on a 2 dimensional graph, so the time complexity to fill every square in the graph is N * M.  However, there are problems that are multi-dimensional (3, 4 or more dimensions), or that take additional time to solve a single square, so N * M is not a generic answer.

 

An Example Dynamic Programming Problem.

You have a set of coins of different denominations.  You have infinite quantities of each of the denominations.  You also have an amount of money to make change for.   How many different unique combinations of coins can you use to make exact change?

For example, if you are making change for 4 cents with coins valued 1 and 2 cents,  [1, 1, 1, 1] is unique from [1,1,2] and is unique from [2,2].  However [1,1,2] is not unique from [2,1,1].   So the solution to this small example is 3 unique ways to make change.

More importantly for this blog post the question is, when this problem is solved for an arbitrary number of coins and an arbitrary amount to make change for, what is the time and space complexity of that solution, and how can it be generalized.

Here is a larger example to see how we can solve this problem with a recursive algorithm.

For this example, assume that you have 6 cents, and can make change in the denominations 1 cent, 2 cents, or 5 cents.

The recursive solution to this problem is to call the function repeatedly and each time branch into two more calls.

  • On one branch subtract the largest coin from the amount remaining to make change from.
  • On the other branch remove the largest coin from the list of available coins used to make change.
  • If you run out of coins or get negative change, then that branch is null.
  • If you end up with exactly zero money remaining, that is a solution to count.

Here is a graph of the recursive branching, without using any memoization.

dynamic programming recursion solution to coin change problem

That was the solution without memoization.  This was a small problem and it quickly blew up into a lot of work, as is the nature of exponential type solutions.   With memoization we get

dynamic programming memoization solution to coin change problem

In this chart we can see a lot fewer branches, because we re-used some of the data that we had previously computed.  Clearly the memoization saved work, since we saved re-computing several sub-problems. So how many sub-problems did we have to solve?

The answer is we had to potentially solve for every amount of money up to our total (N), and for every number of coin denominations available (M).  This means that the solution will take N * M time.

And this highlights the key to the solution to the memoization time / space complexity.  For all of these types of problems you generally have to, at least potentially, solve every sub-problem, but you will only have to do it one time.  Then you can save the result to use later.

This problem had a maximum of 6 different values of change to make (1, 2, 3, 4, 5, 6) and a maximum of 3 different sets of coins [1, 2, 5], [1, 2], [1].   So we would have a maximum of 18 sub-problems to solve, and it would potentially take 18 blocks of memory to store that information once we solved them.  It turns out that we had slightly fewer than 18 problems due to the numbers that we used, but as a generalization the number of problems is on the order of N (amount of money) * M (number of different coin denominations)

The other good way to remember how much time / memory you need to solve a recursive memoization problem is to realize that the recursion is a top-down solution.  However, you are solving the exact same problem as if you built the table from the bottom up.  Nearly all of these types of dynamic programming problems have a bottom up solution.  (In fact I’ve never encountered one that couldn’t be solved bottom up, but they probably do exist.)  Oftentimes the bottom up solution is more difficult algorithmically, but it is easier to see how much time and space it needs.
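To make that concrete, here is a minimal memoized sketch of the recursion described above (my own condensed version, not the downloadable program linked below).  Each distinct (amount, coin count) pair is solved exactly once, which is where the N * M bound comes from:

```python
from functools import lru_cache

def count_change(amount, coins):
    coins = tuple(sorted(coins))

    @lru_cache(maxsize=None)
    def ways(remaining, n):
        # n = how many of the coin denominations are still available
        if remaining == 0:
            return 1                      # exact change: count this branch
        if remaining < 0 or n == 0:
            return 0                      # negative change or out of coins: dead branch
        # Branch 1: use the largest available coin (and keep it available).
        # Branch 2: drop the largest available coin from the list.
        return ways(remaining - coins[n - 1], n) + ways(remaining, n - 1)

    return ways(amount, len(coins))

print(count_change(6, [1, 2, 5]))         # 5, matching Example 1 below
print(count_change(20, [1, 2, 4, 6, 8]))  # 94, matching Example 2 below
```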

If You Want To Solve This Problem Yourself

If you want to solve the coin change problem yourself, here is some sample input.  Note, these all solved nearly instantaneously on my computer with memoization in place.  Without memoization, Example 3 will probably freeze your computer.

Example 1

  • Amount of Money : 6
  • Coin List –  1, 2, 5
  • Number of possible combinations for change – 5

Example 2

  • Amount of Money : 20
  • Coin List –  1, 2, 4, 6, 8
  • Number of possible combinations for change – 94

Example 3

  • Amount of Money : 200
  • Coin List –  1, 2, 4, 6, 8, 20, 35, 50
  • Number of possible combinations for change – 258,822,120

If You Want My Python Code

You can download the python code I used to solve this problem here (as a zip file).

Kelly Criterion For Betting

How Much Should You Bet In A Game That Favors You?

Here is an interesting problem.  I’m going to lay out a favorable bet.  You will have the opportunity to place a bet multiple times, and resolve each one before moving onto the next one.  How much should you bet each time so that the version of you whose luck is exactly average will win the most money?

Here is the gambling scenario –

  • You start with 100 dollars.
  • We will flip a fair coin, and if it is heads, you win.
  • Before each flip you get to choose how much money you want to bet on that flip
  • If you win, you double the amount of money that you bet. If you lose, you lose half of the money that you bet.  I.e. if you bet the full 100 dollars you started with, and you win, you have 200 dollars, and if you lose you have 50 dollars.
  • You can play this game up to 100 times.

Now clearly this is a favorable bet.  On the first bet, if you bet 100 dollars, you have a 50/50 chance of winning vs losing.  50 percent of 200 dollars plus 50 percent of 50 dollars means the expected value of that bet is 125 dollars, for a gain of 25 dollars.

However just as clearly, if you place that bet 100 times you are just as likely to win as many bets as you are to lose them.  So if you bet all your money each time, then in the runs where you get lucky and win more times than you lose, for instance winning 56 times and losing 44 times, you will win a lot of money.   However the person whose luck is exactly average will win 50 times and double their money, and lose 50 times and cut their money in half, and leave with the same 100 dollars they started with.

We want that average-luck person to win the most money possible, so how much should you bet?

To simulate that I ran a python program simulating a couple million people betting different percentages of their bankroll. You can download that python program, the one that generated the charts below, and the associated Excel file here. This chart shows how much money the 50th percentile person would have depending on how much they bet as a percentage of their bankroll.

kelly criterion results 1

 

So if you knew that you were going to win exactly 50% of your bets, your optimal bet amount is exactly half of your bankroll each time.
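Here is a condensed numpy sketch of that kind of simulation (not the exact downloadable program above, just the same idea):

```python
import numpy as np

rng = np.random.default_rng(0)

def median_outcome(bet_fraction, n_flips=100, n_players=100_000, start=100.0):
    wins = rng.random((n_players, n_flips)) < 0.5   # fair coin flips
    bankroll = np.full(n_players, start)
    for i in range(n_flips):
        bet = bankroll * bet_fraction
        # Win: the bet doubles (net gain = bet). Lose: half the bet is gone.
        bankroll += np.where(wins[:, i], bet, -bet / 2)
    return np.median(bankroll)

for f in (0.25, 0.5, 0.75, 1.0):
    print(f, round(median_outcome(f)))
# the median (50th percentile) bankroll peaks near a bet fraction of 0.5
```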

Now if you think that you are going to be lucky, and win more than average, or you want to maximize your potential winnings you could bet more.  For a 90th percentile person, you are likely to win 56 bets out of the total 100.  That is a lot better than only winning 50 bets, but it is still a lot of losses.  This is what the winning curve looks like for the 90th percentile person

kelly criterion results 2

Even though you were in the 90th percentile of luck, betting all your money each time would be a mistake. If you knew that you were going to win exactly 56 of your 100 bets, your optimal bet size would be 68% of your bankroll each time.  So betting more money on a positive expected value game can increase your net expected value, but it would come at an increased risk if you don’t run lucky.

 

Even Winnings – In A Game That Favors You

The scenario outlined above may not be the most likely scenario you will encounter.   I have given that scenario some thought because it was in a video game that I played when I was younger.  At the end of each level you were given the opportunity to make a bet 3 times and double or halve the number of coins you had collected in the level each time.  (Bonus points if you know what Gameboy game I’m referring to.)  There are situations in business or with stock option investing that could replicate that, where you could either double your money or lose half your money, but if you are placing bets at a casino the scenario you are most likely to encounter is that you will win or lose the same amount of money.  However, your odds of winning and losing may not be 50/50.

Let’s say that you had a game where you would win or lose your bet, but that the odds of winning were 55% and you only had a 45% chance of losing.  I.e. imagine that you were betting on red or black on a game of roulette, except that the color distribution favored you instead of the casino. You have the opportunity to place this bet 100 times.

Here betting 100% of your money is obviously a bad idea, because when you lose you will lose all of your money, and you will lose almost half the time.  The curve of winnings vs bet rate looks like this for the 50th percentile lucky person

kelly criterion results 3

As it turns out, the best amount to bet here is 10% of your bankroll each time.

That 10% number is suspiciously the same as the amount of advantage that you have in the game, i.e. the difference between 55% win rate and 45% loss rate.

If we run the simulation again with a 60% win rate, and a 65% win rate

kelly criterion results 4

kelly criterion results 5

 

We see that the best bet amount for 60% is to bet 20% of your bankroll, and the best bet amount for a 65% win rate is to bet 30% of your bankroll.    It turns out that the best bet is exactly the advantage that you have in the game.

 

Kelly Criterion

It turns out that this optimal amount to bet was first discovered in 1956, and is known as the “Kelly Criterion”.   This was named after John Larry Kelly, who was an associate of the famous Claude Shannon, who went on to use this formula as well as other mathematical techniques to make a substantial amount of money in the casinos and on Wall Street.

The Kelly Criterion equation is

f* = (b*p − q) / b

Where

  • f* is the percentage of the current bankroll to bet
  • p is the probability of winning the bet
  • q is the probability of losing the bet ( 1 – p )
  • b is the net odds received on the bet, i.e. “b to 1”.  So for a flip of the coin, where you bet 1 and get 2 if you win (your original 1 + 1), b would be 1.   If you are betting 1 dollar and get 6 if you win, i.e. 5 to 1 odds, b is 5

This turns out to match our results exactly.  For the first example where we got either double or half, b is 2, and p and q are both .5.  The equation becomes

f* = (2 * .5 − .5) / 2 = .25

And f* is calculated to be .25 = 25% of our bankroll as the optimal bet.   Now the graph indicated that we should be betting 50% of the bankroll.   However since we could only lose half of that 50% of the bankroll in the double or half bet, that is equivalent to betting 25% of the bankroll when using the same terminology as the Kelly Criterion.

For the bets where our payoff is the same as our bet, and our odds of winning are either 55%, 60%, or 65%, b is 1, and an example equation becomes

f* = (1 * .6 − .4) / 1 = .2

Which shows that we should be betting 20% of the bankroll for the 60% chance of winning case, which is exactly the same as the python code determined.
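The whole equation fits in a couple of lines of Python, which makes it easy to check all of the cases above:

```python
def kelly_fraction(p, b):
    # f* = (b*p - q) / b, where q = 1 - p is the chance of losing
    q = 1 - p
    return (b * p - q) / b

print(kelly_fraction(0.5, 2))                 # 0.25 for the double-or-half game
for p in (0.55, 0.60, 0.65):
    print(p, round(kelly_fraction(p, 1), 2))  # 0.1, 0.2, 0.3
```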

For most real world scenarios, where you are betting against the house which has a house edge, f* becomes negative, which means that you shouldn’t be playing that game.  Truthfully it means that you should take the other side of the wager, become the house, and make them bet against you!

 

 

 

Gambler’s Ruin

People have come up with many different betting systems over the years. One of these, the Martingale betting system, is interesting because it illustrates how a high level understanding of a concept can be insufficient.

The Martingale betting system works as follows:

  • I will start by betting some small amount on a wager that is as close to 50/50 as I can get. So assume I’m betting $1 on a coin flip.
  • Anytime I lose, I’ll double the bet
  • Anytime I win, I will start over at $1
  • Since I’m always increasing the stakes, anytime that I win it means that I will have cleared all my losses, and won an additional dollar. For instance, if I lose $1, $2, $4, then win $8, I will have lost 7 dollars, but then won 8 dollars for a net gain of 1 dollar

What the bettor appears to have is a system that guarantees them small winnings. They will steadily be winning the 1 dollar net bets.

Clearly this is not the case, from Gambler’s Ruin we know that you cannot win any negative expected value game long term. So the Martingale system is not a betting system that you should use. But why is it bad?

There is a surface issue, and an underlying math issue.

The First Problem With The Martingale – Bet Doubling System

On the surface, one problem is that there isn’t really a game out there with true 50/50 odds that you can bet on. Betting red/black in roulette is about as close as you come, but even there your odds of winning are 18/37 or 18/38, depending on the 0’s

The Main Problem With The Martingale – Bet Doubling System

The underlying math issue is more interesting. It boils down to the fact that you cannot keep doubling your bet forever. At some point you hit either the limit of your bankroll, or the house limit. And in any finite money scenario, all you have done is set up a situation where you frequently win a little bit of money, but infrequently lose a lot of money.

So how does that work? Let’s give an example with different amounts of bankroll.

Let’s say that you have 7 dollars total. This means that you can keep betting up to the 4 dollar limit, and if you lose the 4 dollar bet you will have lost everything. That gives you a total of 3 bets, at 1, 2, and 4, before you have lost everything.   How likely is this to happen?

This tree illustrates the probabilities

martingale betting system

If you play this game 8 times, then 7 times you will win 1 dollar, and one time you will lose 7 dollars. The net expected value from this is zero, which is exactly what you would expect since you are assuming a 50/50 game.

If you have 63 dollars, you can run through it and find that you have a 1 in 64 chance of losing the 63 dollars, and a 63 in 64 chance of winning 1 dollar, on any given series of bets.
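You can verify that the expected value of a series of bets is always zero for any power-of-2-minus-1 bankroll with a few lines of Python:

```python
from fractions import Fraction

def martingale_series(n_doublings):
    # One series of bets (1, 2, 4, ...) ends on the first win, or when
    # all n_doublings bets have lost and the bankroll is gone.
    p_bust = Fraction(1, 2) ** n_doublings   # every bet in the series loses
    bankroll = 2 ** n_doublings - 1          # 1 + 2 + 4 + ... doubled bets
    expected = (1 - p_bust) * 1 + p_bust * (-bankroll)
    return bankroll, p_bust, expected

for n in (3, 6, 10):                         # bankrolls of 7, 63, 1023 dollars
    print(martingale_series(n))              # expected value is always 0
```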

Your Chance Of Going Broke Is Directly Tied To Your Bankroll

Assuming 50/50 odds, and that you have a power-of-2-minus-1 bankroll, your chance of going broke follows directly from your bankroll. Do you have a 1023 dollar bankroll? Then you will go broke 1 in 1024 series of bets. Do you have a 4095 dollar bankroll? You will go broke 1 in 4096 series of bets.

What this strategy does is shift the probability curve, but the total probability remains the same. If you have a 63 dollar bankroll, and you bet 1 dollar 64 times, keeping your bet to 1 dollar regardless of if you won or lost, your probability curve would look like this

probability curve of winnings after 64 bets

Which can be calculated from the binomial distribution.   With that betting strategy, the “normal” strategy of betting the same amount every time, you would have practically zero chance of losing all your money after 64 bets.

If instead you went with the bet doubling strategy, and placed 64 strings of bets this is what the probability curve would look like.

probability distribution of martingale betting system

I generated these odds plots with this python program & this Excel file, simulating 1 million gambling outings.   Assumptions made were:

  • You would place 64 strings of bets, with a string ending either when you won your bet, or couldn’t double the bet anymore
  • If you lost all your money then you were done
  • If you lost and couldn’t double your bet, but you still had some money you would start again at a 1 dollar bet

This chart is much different from the binomial based chart. Most of the probability is pushed to the edges of the chart, so you are most likely to either walk away having won 64 dollars, or having lost your 63 dollar bankroll.   There are other options, such as winning 30 dollars, but then losing your 64 dollar bet and ending up down ~34 dollars

The total expected value of that chart, which is each outcome's net winnings multiplied by its probability, summed together, is zero.

So while the Martingale bet-doubling strategy isn't a surefire winning technique, it does change the shape of the outcomes: instead of being likely to win or lose a little bit, you are most likely to either win or lose a lot.

Permutations And Combinations Simplified

Permutations and combinations are an essential part of statistics. They show up in a ton of different places when you are finding the probability of anything. But it can be hard to remember the exact formula for a permutation or a combination when you need it without looking it up.   This post will show you an easy, intuitive way to understand permutations and combinations so that you only need to remember one thing, and the rest you can just calculate when you need it.

 

Permutations and Combinations Basics

Permutations and combinations are a way of determining how many different possibilities of something there are.

Permutations are what you use when the order matters. For instance, if 8 people are racing in a track meet, and you want to find the different ways they could get 1st, 2nd, and 3rd place, then the order matters. So you would use a permutation.

Combinations are what you use when the order doesn’t matter.   For instance, if you have 10 different pieces of clothing you want to take on a trip, but you can only fit 7 of them in your suitcase, it matters which 7 you pick, but it doesn’t matter what order you put them in the suitcase, so you would use a combination

 

The Key Point

There is one key thing to know with permutations and combinations, and that is the factorial, typically denoted with an exclamation point (!).

If you want to find how many different ways you can arrange 8 different items, it is 8 Factorial, which is 8 * 7 * 6 * 5 * 4 * 3 * 2 * 1.   What this represents is that when you make your first choice of items to arrange, you have 8 to choose from. When you make your second choice, there is one less so that you have 7 to choose from, then 6 and so on.

Everything with permutations and combinations is just a different application of the factorial.

 

Permutations – Slightly Simpler Than Combinations

Let's go back to the track example. Let's say that you have 8 people racing on the track. The total number of different orders they could finish in is 8!

8 * 7 * 6 * 5 * 4 * 3 * 2 * 1 = 40320

Now let’s say you only care about the order of the first three people on the track. Clearly we would have fewer than the full 8! different permutations, because we only care about how the first 3 people finished, not all 8. So in that case you have 8 options for first place, 7 options for 2nd place, and 6 options for 3rd place, and that’s it.

8 * 7 * 6 = 336

This is the permutations of 8, choosing 3.

Now there isn't a standard operation that lets us multiply just 8 * 7 * 6. If we wanted the order of everyone, 8 factorial lets us multiply 8 down through 1, but it doesn't stop in the middle. The way we do this is by finding 8 factorial, and then dividing by 5 factorial. We use 5! because there are 5 items left behind that we don't care about (8 - 3 = 5). That ends up being

8 choose 3 permutations

8! / 5! = (8 * 7 * 6 * 5 * 4 * 3 * 2 * 1) / (5 * 4 * 3 * 2 * 1)

The (5 * 4 * 3 * 2 * 1) cancels out of both the numerator and denominator, and we are left with 8 * 7 * 6.

So to find the permutations of a subset of a group, what we have just done is

  • Find the permutations of the entire group (8! in this case)
  • Divide by the permutations of the part of the group left behind (5! in this case)

Why are we dividing by the permutations of the parts left behind? Because we don't care what order they are in, so we need to cancel out all the different orderings they could be in.

 

This is a key point to remember

  • If you want to find the permutations of something, use the factorial
  • If you want to find the permutations of a subset, find the permutations of the entire group, and then divide by the permutations of the set left behind.

 

Combinations – Build on Permutations

Combinations simply start with permutations with a subset and add 1 more step.

  • Permutations – We take the factorial of the entire set to find out the number of possibilities
  • Permutations with a subset – We take the factorial of the entire set, and then divide by the factorial of the left behind set

Well for combinations we still don’t care about the order of the left behind set, but we also don’t care about the order of the set that we have chosen.   So we start with the permutations of the entire set, then divide by the permutations of the left behind set, then divide by the permutations of the chosen set.

So if we have 10 different items of clothing and can only choose 7 to pack, there are 3 left behind, and the number of possibilities is

10 choose 7 combinations

10! / (7! * 3!)

Which is equal to 120
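
Since everything here is just factorials, these numbers are easy to check in Python; `math.perm` and `math.comb` (available in Python 3.8+) do the same division for you:

```python
from math import factorial, perm, comb

# Permutations of 8, choosing 3: divide out the 5! orderings left behind.
print(factorial(8) // factorial(8 - 3))                # 336
print(perm(8, 3))                                      # 336, built-in equivalent

# Combinations of 10, choosing 7: also divide out the 7! orderings
# of the chosen items.
print(factorial(10) // (factorial(7) * factorial(3)))  # 120
print(comb(10, 7))                                     # 120
```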

An important thing to know is that there will always be at least as many permutations of a set as combinations, and typically many more permutations than combinations.

 

The Traditional Permutations & Combinations Equations

At this point, it is worth showing the traditional permutations and combinations equations. These are the things that you might typically be expected to memorize for a class, but that can be challenging to remember long term.

 

Here is the permutation equation

nPk permutation equation

nPk = n! / (n - k)!

And here is the combination equation

nCk Combination equation

nCk = n! / (k! * (n - k)!)

So while it is manageable to memorize those equations, it is easier to just intuitively understand

  • To find the permutations of a full set, take the factorial
  • To find the permutations of a partial set, find the permutations of the full set, then divide by the permutations of the items left behind
  • To find the combinations of a partial set, find the permutations of the full set, divide by the permutations of the items left behind, and divide by the permutations of the selected items.

 

Taking It One Step Farther

If you understand using the factorial, instead of memorizing the equations, you can apply it to cases that the equations don't cover. For example, if you have 12 items, and you need to break them up into 4 equally sized groups, how many options do you have? (Order does not matter within any group.)

Once again you start with the permutations of the entire group, and then divide by the permutations of any subset whose order you don’t care about. In this case we have 4 subsets with 3 items each where the order doesn’t matter. So the equation would be

complicated permutations and combinations

12! / (3! * 3! * 3! * 3!)

Which is 369,600. (If the four groups themselves are interchangeable, rather than being labeled group 1 through group 4, you would also divide by the 4! orderings of the groups, giving 15,400.)

That would be a challenging answer to get relying on just the standard equations.
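
The same factorial logic is easy to verify in code. A quick sketch, checking both the labeled-groups answer and the step-by-step "choose 3, then 3, then 3" view:

```python
from math import factorial, comb

# 12 items into 4 distinct groups of 3, ignoring order within each group:
# divide 12! by 3! once per group.
print(factorial(12) // factorial(3)**4)                    # 369,600

# Equivalent step-by-step view: choose 3 of 12, then 3 of 9, then 3 of 6.
print(comb(12, 3) * comb(9, 3) * comb(6, 3) * comb(3, 3))  # 369,600

# If the groups themselves are interchangeable, divide by 4! as well.
print(factorial(12) // (factorial(3)**4 * factorial(4)))   # 15,400
```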

 

Two Envelope Problem

The Two Envelope problem is one of my favorite statistical paradoxes. It is a problem that, even after you understand it, you can still revisit and be surprised by it.

 

The Two Envelope Problem

You are presented with two identical envelopes. Both of the envelopes have money in them. You are told that one envelope has some amount of money in it, and the other envelope has twice as much money in it. But you don't know what that amount of money is. You are allowed to choose one envelope.

You choose one envelope, which you decide to call envelope A, and open it to observe X dollars. You are now given the opportunity to switch envelopes, so you calculate the expected value of switching. Since you know one envelope has twice as much money as the other, there is a 50% chance that envelope B has twice as much as A, and a 50% chance that envelope B has half as much as A, so

B = .5 * A/2 + .5 * 2A

B = 1.25 A

You conclude that the expected value of taking envelope B is 25% greater than keeping envelope A, so you start to switch. Before you do, you realize that if you had chosen envelope B to start with, you would have done the same calculation in reverse

A = .5 * B/2 + .5 * 2B

A = 1.25 B

and determined that the expected value of envelope A was 25% greater than envelope B.  

 

How can you resolve this apparent paradox?

 

The Fairly Nerdy Solution

There are apparently quite a few solutions out there to this problem, with their authors believing their solution is clearly correct, and others disagreeing.   I’ll leave it to you to let me know what you think of my solution.

First we need to clarify how the money was put into the envelopes, because it makes a big difference to the results. The possible options are

  • Coin Flip Scenario – The money is not put into the second envelope until you look at the first envelope.   i.e. if you see $100 in the first envelope, then the game master flips a coin to decide whether to put $200 or $50 in the second envelope
  • Infinite Money Scenario – The money is put into both of the envelopes before you choose. One envelope has X dollars and the other envelope has 2X dollars. The game master has infinite money, and would be willing to put any amount of money, including infinite money, into the envelopes
  • Finite Money Scenario – The money is put into both of the envelopes before you choose. One envelope has X dollars and the other envelope has 2X dollars. However, the game master is only willing to put a finite amount of money in an envelope, so there would be a maximum of M dollars in any envelope

Each of those options needs to be considered separately, and I think that some of the confusion on this problem stems from not clearly defining them.

Coin Flip Scenario

In my opinion, this scenario doesn’t really match the problem description of having two envelopes to choose from. However it does exactly match the math description of how the expected value of the second envelope is calculated.

If you are faced with this scenario, and see X dollars in your envelope, then your expected value of switching truly is 1.25 X, because the amount of money in envelope B can be calculated by

B = .5 * X/2 + .5 * 2X = 1.25X
which is how the problem is outlined.

This is basically a double or nothing bet, but instead of nothing, it’s double or half, and it is clearly a profitable bet to take.

 

Infinite Money Scenario

To my mind, this is the least realistic scenario, because while you could set up the coin flipping problem outlined above, or the finite money scenario outlined below, infinite money isn't possible. Even if you were playing the game with the richest person alive, or even all the governments of Earth combined, there is still some upper limit to the total amount of money. But if you do assume infinite money, what happens?

If you are assuming infinite money, the most reasonable assumption is that one envelope has some amount of money in it, between zero and infinity dollars, and the other has half that amount in it.

The first thing we notice is that the expected value of envelope A, taken all by itself, is infinite. This is because the average of all numbers between zero and infinity is infinite. We then immediately observe that the expected value of envelope B is also infinite. So after solving the equation of

B = .5 * A/2 + .5 * 2A

We are left with

B = 1.25 infinity

And

A = 1.25 infinity

So this scenario doesn't really boil down to "why is the expected value of switching greater than the expected value of not switching?" It ends up being "infinity has weird properties that don't match our intuition."

 

 

Finite Money Scenario

The scenario where the envelopes have finite money is the most interesting scenario in my opinion. Both because you could actually replicate it in real life, and because the math is interesting. With the finite money scenario, the most reasonable assumption for how it gets distributed is

  • The game master puts a random amount of money in the first envelope, between zero dollars up to the maximum dollars he is willing to put in one envelope, denoted as M
  • The game master puts half that amount of money in the second envelope
  • The game master flips a fair coin to shuffle those envelopes so they are randomly presented as envelope A & envelope B

However, we are still left with the question: why does

B = .5 * A/2 + .5 * 2A = 1.25 A

and

A = .5 * B/2 + .5 * 2B = 1.25 B

The cause of this apparent paradox turns out to be bad intuition about the possible distribution of money in the envelopes. The assumption in this paradox is that all dollar amounts are equally likely, and therefore that you have a 50% chance of doubling your money and a 50% chance of cutting it in half, just like in the coin flip scenario.

For that to be true, it implies a probability function that looks like this, where all numbers are equally likely

Incorrect Two Envelope Probability Distribution

Where M is the Maximum amount of money the game master would be willing to put in a single envelope. However the actual distribution of money in those envelopes will look like this

Correct Two Envelope Probability Distribution

Contrary to our assumptions, all dollar amounts are not equally likely. There will actually be three times as many values below half of the maximum as above it.

So before going any farther, intuitively what will this mean? Clearly the likelihood of doubling our money goes down if we have a larger amount of money. So the equation of

B = 50% * A/2 + 50% * 2A

Doesn’t really apply if the chances of getting A/2 and 2A aren’t the same for every value of A.

Why are there more small numbers than big numbers in this distribution? Because of the way the money is distributed. If we ran this many times, randomly generating the money in the first envelope to be between 0 and M dollars over and over, and then sorted that money, we would end up with a distribution that looks like this

finite distribution

Half of our numbers are less than M/2, and half are greater than M/2, exactly what we intuitively expect. When we generate the numbers in the second envelope by dividing all of these by 2, we end up with all of the numbers in the second envelope being less than M/2

two envelope problem money distribution

In total, 1/4 of the numbers across both envelopes are greater than M/2, and 3/4 are less than M/2. Note that we set up these envelopes by putting up to a maximum amount into the first envelope and then dividing by 2 to get the second envelope. The 1/4 and 3/4 split would still hold if you went the other way and doubled the first envelope to get the second.
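
A quick simulation makes that 1/4 vs 3/4 split easy to verify. This is a sketch assuming, as above, that the first envelope's amount is drawn uniformly between 0 and M:

```python
import random

M = 40.0    # maximum the game master will put in the first envelope
N = 1_000_000
above = 0   # count of envelope values greater than M / 2

for _ in range(N):
    first = random.uniform(0, M)   # amount in the first envelope
    second = first / 2             # the other envelope holds half as much
    above += (first > M / 2) + (second > M / 2)

# Of the 2 * N envelope values, about a quarter exceed M / 2.
print(above / (2 * N))   # ~0.25
```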

At this point the intuitive solution to this paradox is clear. For any given amount, there is not an even chance that we will double the value in the envelope versus cut it in half. The chance that we will double is larger for the small values than for the large values, which means that the

B = 1.25 A

A = 1.25B

Equations won’t hold.

The next section works out the math for exactly what the expected value of switching is in the finite money scenario. Then, in the final section, we will exploit this information to see how, if we were playing this game multiple times in a row, we actually could switch intelligently and end up with more money after multiple rounds than the average expected value would suggest.

Expected Value Of Switching – Finite Money Scenario

For calculating the expected value of switching, we will give a concrete example.   For this example, we will assume that the maximum amount of money that the game master is willing to put in an envelope is $40.

In this scenario, the average value of the lower values is $10, and the average value of the upper values is $30.   The average value of all the money in the envelopes is $15, calculated by .75 * $10 + .25 * $30 = $15

two envelope problem money distribution

At this point it’s easiest to assume that we will always get either the $10 average value, or the $30 average value when we open an envelope.   So when we open the first envelope, 75% of the time we will get $10, and 25% of the time we will get $30.

two envelope problem expected value

So what is the expected value of switching if we have either the $30 or the $10 in the first envelope? For the $30 it's easy: if the first envelope holds money in the upper half of the range, switching will always land you in the lower half of the range, so the $30 will always be cut in half to $15.

The $10 is interesting because for every 3 times we see the $10, 2 of those times we will double to $20, and 1 time we will cut in half to $5

two envelope problem expected value

Breaking out the table for the expected value

two envelope problem expected value 2

Finding the weighted average of 50% of $20, 25% of $5, and 25% of $15 gives us the total expected value of switching envelopes, which is $15.

So the expected value of envelope A in this case is $15, and if we switch, the expected value of envelope B is also $15; they are the same. This resolves our paradox, since we expect the average values in these envelopes to be the same.
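
Here is that table worked out in a few lines of Python, using the simplified $10/$30 average values from above:

```python
# What the first envelope holds, using the averaged values.
p_low, low   = 0.75, 10.0   # lower half of the range averages $10
p_high, high = 0.25, 30.0   # upper half of the range averages $30

ev_keep = p_low * low + p_high * high   # expected value of keeping: $15

# Switching from $30 always halves it to $15.
# Switching from $10 doubles it 2 times out of 3, halves it 1 time in 3.
ev_switch = p_low * (2/3 * low * 2 + 1/3 * low / 2) + p_high * (high / 2)

print(round(ev_keep, 2), round(ev_switch, 2))   # 15.0 15.0
```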

 

 

Exploiting This Information

Interestingly, even though the average values of A & B are the same, this is still an exploitable game, assuming:

  • You play more than one round and
  • You can look in your envelope before deciding whether you want to switch

Our earlier calculations show that the average value of all the switches was the same as not switching. But that was just the average value. Some of the individual switches themselves were beneficial, and some lost you money. If you can find a way to distinguish the profitable switches from the unprofitable ones, you can get more money.

The obvious solution is that you want to switch when you have the low dollar amount, and not when you have the high ones. The expected value of switching when you have the low dollar is:

  • You will double your money 2 times out of 3
  • You will lose half your money 1 time out of 3.

two envelope problem expected value 3

On average, switching whenever your envelope has less than half the maximum amount of money the game master would be willing to put in will yield +50% on those switches. Since you likely only have a rough guess as to how much money could be in the envelopes, a good (though not necessarily perfect) strategy is to switch envelopes the first time, and after that to switch whenever the amount in your envelope is less than half of the running maximum you have seen.

I simulated that strategy in the Excel file below, playing 1000 games with a random maximum amount of money that could be in the envelope, random amounts in one envelope, half that amount in the other, randomly shuffled. Whatever envelope got selected (from the random shuffle) we call envelope A, and the other envelope gets called B.

Implementing that strategy gave approximately 25% more money than just keeping whichever envelope you picked. You can download that Excel file here.

Two Envelope Simulation
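
If you would rather see it in code than in Excel, here is a minimal Python sketch of that strategy. The specific numbers (the range of possible maximums, the 1000 games) are illustrative guesses at the setup, not the original spreadsheet:

```python
import random

def play(num_games=1000, switch_smart=True, seed=1):
    rng = random.Random(seed)
    M = rng.uniform(10, 100)   # game master's per-envelope max (unknown to us)
    seen_max = 0.0             # running maximum amount we have observed
    total = 0.0
    for game in range(num_games):
        first = rng.uniform(0, M)
        pair = [first, first / 2]
        rng.shuffle(pair)       # coin flip: which envelope is A
        a, b = pair
        # Switch the first time; afterwards switch only when envelope A
        # holds less than half of the running maximum we have seen.
        take_b = switch_smart and (game == 0 or a < seen_max / 2)
        won = b if take_b else a
        total += won
        seen_max = max(seen_max, a, won)
    return total

smart = play(switch_smart=True)
naive = play(switch_smart=False)
print(f"smart beats naive by a factor of {smart / naive:.2f}")   # roughly 1.25
```

Because both runs use the same seed, they see the same envelopes; the roughly 1.25 ratio matches the 25% gain from the Excel simulation.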

An Intuitive Guide To Bayes Theorem

The purpose of this page is to give you an intuitive understanding of how to solve Bayes Theorem problems.  The equation for Bayes Theorem is not all that clear, but Bayes Theorem itself is very intuitive.  The basics of Bayes Theorem are this

  • Everything starts out with an initial probability – That is, before you do any tests or have any data, there is some initial probability of an event
  • Tests can update that probability –  After you assign an initial probability, if you gather more information that is relevant then the probability can change.  For instance, you may initially have a very low chance of having an illness, but if a test for that illness comes back positive, the probability that you have it has increased
  • After a test, all probabilities get normalized to 1 –  It doesn’t matter if an event is unlikely to have occurred.  What matters is if the event is likely compared to all other possible events.   For instance, if you don’t know whether you are observing a 6 sided die or a 20 sided die, and you see the die roll 4 five times in a row, it is unlikely that the 6 sided die would have rolled those values.  But it is extremely unlikely the 20 sided die would have, so comparatively the 6 sided die is more likely

6 Easy Steps For Any Bayes Problem

  1. Determine what you want the probabilities for, and what you are observing
  2. Estimate initial probabilities for all possible answers
  3. For each of the initial possible answers, assume that it is true and calculate the probability of getting the observation with that possibility being true
  4. Multiply the initial probabilities (Step 2) by the probabilities based on the observation (Step 3) for each of the initial possible answers
  5. Normalize the results
  6. Repeat steps 2-5 over and over for each new observation

 

Bayes Theorem Applied To Cancer Testing

Testing for a disease is a classic Bayes Theorem problem, and one that can give counter intuitive results the first time you see it.   Let’s say that you are testing a generic patient for cancer.   One percent of the population has this cancer.   You have a test that will return a True Positive (return a positive when they actually do have cancer) 99% of the time, and return a True Negative (return a negative when they do not have cancer) 95% of the time.

You do 1 test, and get back a positive result. What are the odds this patient actually does have this cancer?

1. Determine Possibilities

There are two possibilities: the patient either has cancer, or they do not.

2. Estimate Initial Probabilities

Since this is a generic patient, they should be like the general population, so we assume there is a 1% chance they have cancer, and a 99% chance they do not.

Bayes Theorem For Cancer Testing

3. Calculate The Probability Of Getting The Result For Each Possible Answer

The result is a positive test.   If the patient has cancer, the probability of getting that result (True Positive) is 99%.   If the patient does not have cancer, the probability of getting that result (False Positive) is 5%  (which is 1 minus the 95% true negative rate)

4. Multiply Step 2 By Step 3 To Get The Combined Probability

This step should be similar to any other probability you have studied.  We are just calculating what is the probability they have cancer, and got a positive test.  And separately calculating what is the probability they do not have cancer, and got a positive test.

Bayes Theorem For Cancer Testing

5. Normalize The Results

This will be the final answer after 1 positive result. At this step we see how likely having cancer is, considering that a false positive was a possibility.

Normalized Bayes Theorem Cancer Results

And that is the answer: we have found that after the "99% reliable" test, there is only a 16.7% chance that the patient has cancer.

6. Repeat The Steps Over Again With Additional Observations

If you do additional tests, you use the new values as your starting probability.  In this case let’s assume that we do a second test, get a Positive result, and then a third test and get a Negative Result.

For the second test, the conditional equation is the same as the first test.  The normalized has cancer value of 16.7% gets multiplied by .99, and the normalized does not have cancer value of 83.33% gets multiplied by .05.

For the third test, since this was a negative result we need to change the formula.  We multiply the normalized has cancer probability by the False Negative rate of .01  (1-.99) , and the normalized does not have cancer rate by the True Negative rate of .95.

The results after both tests are shown below

 Bayes theorem after cancer tests

After the second positive result, the odds the patient actually has cancer jump up to 79.8%, but after the negative test, they drop back down to 4%.
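
The whole calculation fits in a few lines of Python. Here is a sketch of that multiply-then-normalize loop, applied to this exact sequence of tests:

```python
def bayes_update(prior, likelihoods):
    """Multiply each possibility's probability by the likelihood of the
    observation under that possibility, then normalize to sum to 1."""
    unnormalized = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Possibilities: [has cancer, does not have cancer]
probs = [0.01, 0.99]

pos = [0.99, 0.05]   # P(positive | cancer), P(positive | no cancer)
neg = [0.01, 0.95]   # P(negative | cancer), P(negative | no cancer)

for observation in (pos, pos, neg):   # two positive tests, then a negative
    probs = bayes_update(probs, observation)
    print(f"P(cancer) = {probs[0]:.3f}")
# Prints 0.167, 0.798, then 0.040 -- matching the tables above
```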

 

That Example Was Great, But You Promised Me Intuition

This page promised you intuition. So far we have solved one Bayes Theorem problem, which is a decent example, but not too different from what is on Wikipedia for Bayes Theorem. Here is the intuition you should develop

  • Bayes Theorem Is Just Multiplication and Division – Bayes theorem itself is very simple. Multiply out all of the strings of probabilities, and then normalize. However, some problems it is applied to are themselves very complicated, so the whole thing becomes complicated. For instance, you can make the problem more difficult by using complicated probability distributions for the conditional probabilities, or complicated initial probability distribution functions. There might be special probability functions applied to a goal scoring problem in soccer, or a line waiting problem at a store. But that doesn't mean Bayes Theorem itself is all that complicated; the Bayes part of the problem is still just multiplication and division
  • It is just as easy to solve for all possibilities as a single one – You might encounter a problem such as "This bag has 4 sided, 6 sided, 8 sided and 12 sided dice in it. Your friend draws out a die, rolls it, and reports the number as 5. What is the probability the selected die was a 6 sided die?" The problem asks you to solve for the 6 sided die, but since you have to get the total probability at each step anyway in order to normalize, it is just as easy to solve the problem for the 4, 6, 8, and 12 sided dice at the same time. Solving them all at once makes the thought process more straightforward, and can be done in a nice clean table (or in a few lines of code, as in the sketch after this list)
  • The order of observations doesn’t matter to end results – Bayes theorem amounts to repeated multiplication.  Multiplication is commutative.  You can change the order of the terms and get the same final results.   But if you change the order of the observations, for instance putting the negative cancer test result first in our example problem, the intermediate results will have different probabilities
  • You Don’t Actually Have To Normalize Each Step – We normalized the example problem each step.  You could do all of the multiplication for the observations, and then normalize at the end and get the same result.  The only caution is that the probabilities can get very small after repeated decimal multiplication if you do not normalize. You can run into trouble with round off or truncation error depending on what you are using to do the math.
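
As an example of solving for every possibility at once, here is the dice problem from the list above as a sketch (the numbers follow directly from the problem statement):

```python
# Bag holds a 4, 6, 8, and 12 sided die; a die is drawn and rolls a 5.
sides = [4, 6, 8, 12]
priors = [0.25] * 4   # each die is equally likely to be drawn
likelihoods = [0 if s < 5 else 1 / s for s in sides]   # P(roll a 5 | die)

unnormalized = [p * l for p, l in zip(priors, likelihoods)]
total = sum(unnormalized)
for s, u in zip(sides, unnormalized):
    print(f"{s:2d}-sided die: {u / total:.3f}")
# The 6-sided die is the most likely, at about 0.444; getting all four
# answers took no more work than getting one.
```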

So Why Was The Cancer Problem Result Surprising?

Many people are surprised to see that a positive result on the 99% reliable test still only means there is a 16.7% chance the patient has cancer. Why is that surprising? Because most people do not bake the initial probabilities into their intuition.

We do a good job of understanding the conditional probability.  After all, a 99% reliable test should make it much more likely the patient has cancer, which it does.  But if the initial probability is a really small number, the new probability will probably be small as well.  This often gets overlooked, and people implicitly assume an evenly distributed initial probability when thinking about these types of problems.

Overlooking the initial probability is the real joke behind this XKCD comic https://xkcd.com/1132/   (not having to pay the bet if the sun actually exploded is merely a bonus)

These pie charts are a good way to visualize what is occurring.  If the patient doesn’t do any test, the odds of having cancer are a small slice of the pie

bayes theorem pie chart

 

While the patient is waiting for the test results there are 4 possibilities, either they have cancer and the test comes back positive (blue),  they don’t have cancer and the test comes back positive (green),  they don’t have cancer and the test comes back negative (purple)  or they have cancer and the test comes back negative (red slice, but too small to be seen)

 bayes theorem pie chart 2

 

Once they get positive test results, the purple and red slices of the previous chart go away.   We normalize the green and blue slices in light of the new total probability. The odds of getting that result due to a false positive (green) are still larger than the odds of a true positive (blue).

bayes theorem pie chart 3

 

More Examples

If you want more examples and information about Bayes Theorem, here is a book I wrote walking through half a dozen Bayes Theorem examples

Bayes Theorem Examples Book

And here is an Excel file solution to some Bayes Theorem problems

Understanding Statistical Significance

Statistical Significance in Real Life

Statistical significance is a way of quantifying how unlikely something you are measuring is, given what you know about the baseline. Exactly how unlikely something needs to be before it is statistically significant depends on the context. You likely have an intuitive understanding of statistical significance based on your own life.

For instance,  if you were at a United States airport, and it was announced that your plane was 15 minutes late, you wouldn’t think that it was anything unusual.  But if you were at a Japanese bullet train station, and found out that it was going to be 15 minutes late, you would probably think that was at least somewhat odd.

Why does one seem like a more significant event than the other? It is because you know that planes are frequently late, whereas the trains almost never are. So the train being late is more significant, because it is a bigger departure from the normal day-to-day variation than the plane being late is.

 

Plot The Delay

Statistical significance is very easy to understand on a probability density plot. The red line shows 15 minutes late. The blue line shows how likely a train is to be any given amount of time late, and the green line shows the same for a plane. The total area under each of the blue and green lines is 1.

It is clear on the chart that very few trains are more than 15 minutes late, but  a lot of planes are.

how late planes and trains are

 

There are really two things going on in the chart. The first is that the average plane is later than the average train. The average plane is 10 minutes late, and the average train is 0 minutes late. So being 15 minutes late is a bigger difference from average for a train than for a plane.

The second thing that is going on is that the distribution of plane lateness is a lot wider than the distribution of train lateness. There is a lot more variation in the plane departure time than there is in the train departure time. Because of that, a plane's lateness would have to be even greater to be unusual.

The Gist of Statistical Significance

Statistical significance means quantifying how unlikely an event is. Exactly what counts as statistically significant depends on context, but typical thresholds are that something would have less than a 5% chance, less than a 1% chance, or less than a 0.5% chance of occurring if there weren't some real difference between what you are measuring and the baseline.

The pieces of information that are important to statistical significance are

  • How many measurements you have – The more measurements you have, the more likely you have measured the full population of what is occurring, and not just a non-representative sample
  • How different the average of your measurements is from the expected average  –  The bigger the difference, the more likely it is significant.
  • How much variation there is in the measurements – The less variation there is in the measurements, i.e. the tighter the spread is, the smaller the difference needs to be to be significant

There are small differences in the equations based on exactly what has been measured, but essentially all of the equations boil down to

  • Get a number which is the difference in average values, multiplied by the square root of the number of measurements you have, and divided by the square root of the variation in your measures.   Call that number the “Test Statistic”
  • The larger the Test Statistic the more statistically significant the difference.
  • Look up the Test Statistic in the appropriate “Z-Table” or “T-Table” to find the probability that there is a statistically significant difference between your samples, as opposed to just random variation

Equations For Statistical Significance

Now that you have a general understanding of statistical significance, it is time to look at the equations. The most commonly used test for statistical significance is the Z-Test. You use this test if you have a lot of measurements (at least 20, preferably at least 40) and you are comparing them against a population with known values. For example, you would use this test if you work at a hospital that had 500 babies born in it over the past year, and you wanted to see if the average weight of those babies was different from the average weight of every baby born in your city.

z test equation

Z = (X_bar - U_0) / (Sigma / sqrt(n))

Where

  • X_bar   : is the average of the measured data
  • U_0      : is the population average
  • Sigma  : is the population standard deviation
  • n           : is the number of measured samples

You then look up the Z-value in a Z-Table to get the probability.
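
As a sketch of that process, here is the Z-test applied to an invented version of the baby-weight example (all the numbers are made up for illustration; scipy's norm.sf plays the role of the Z-Table):

```python
from math import sqrt
from scipy.stats import norm

x_bar = 3.44    # average weight (kg) of the 500 babies at your hospital
mu_0  = 3.40    # known city-wide average weight (kg)
sigma = 0.50    # known city-wide standard deviation (kg)
n     = 500     # number of measurements

z = (x_bar - mu_0) / (sigma / sqrt(n))
p_two_sided = 2 * norm.sf(abs(z))   # two-sided tail probability

print(f"Z = {z:.2f}, p-value = {p_two_sided:.4f}")
# Z = 1.79, p-value = 0.0736 -- not significant at the 5% level
```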

There are a few other different equations for Statistical significance called “T-Tests”.   You would use one of these T-Tests instead of a Z-test for one of these reasons

  • The number of measurements you have is small; you would certainly use a T-Test with fewer than 20 measurements, and maybe with fewer than 50
  • You want to compare before and after measurements for the same individual.  For instance, if you have a before and after measurement for 20 people after a diet, you would use a certain type of T-Test.

 

What is the difference between a Z-Test and a T-Test?

What is a T-test vs a Z-test, and how do you know when to use which? The thing to understand about T-Tests is that they are almost the same as the Z-Test, and will give almost the same answer as you increase the number of measurements that you have. The whole point of T-Tests is that they put more area at the tails of the curve, instead of the middle, to account for the uncertainty you would have in your measured mean and standard deviation with a very small sample size. Once you get above 20 or so measurements, the difference between Z-test results and T-test results becomes vanishingly small.

The plots below show the probability density for a Z-curve, and T-test curves with different sample sizes

z test vs t test plots

Once you get past 20 or so measurements (green line, hardly visible) there really isn't much of a difference between a T-Test and a Z-test (purple line). However, if you only have a few measurements, then the T-Test will need a much larger Test Statistic to give a statistically significant result.
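
You can see that effect directly by comparing the test statistic each curve requires for significance at the 5% level (two-sided). A quick sketch using scipy:

```python
from scipy.stats import norm, t

print(f"Z-test cutoff: {norm.ppf(0.975):.2f}")            # 1.96
for df in (2, 5, 10, 20, 50):
    print(f"T-test cutoff, {df:2d} dof: {t.ppf(0.975, df):.2f}")
# With 2 degrees of freedom you need a test statistic above 4.30; by
# 20 degrees of freedom the cutoff (2.09) is already close to 1.96.
```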

It can be a little bit confusing knowing exactly which test to use, but using the exact right test isn't that important unless you are taking an exam or writing a scientific paper. The tests will all give similar results assuming you have more than 10 measurements, and very similar results assuming you have 30 or more.

For a better understanding of the different types of tests, you can refer to this cheat sheet I put together giving the formulas for each test, and when they are used.

Z-Test and T-Test Cheat Sheet

Examples of Z-Test vs T-Tests

This post was intended to give an intuitive understanding of statistical significance. If you are interested in examples of Z-Tests and T-Tests, exactly how they are used, and in what circumstances you might use one or the other, you can find some in this book I've put on Amazon

hypothesis testing examples book

Or you can get an Excel file with different hypothesis testing examples here.