Switching light bulbs on and off

In the spirit of the previous post, here’s another counting light bulbs problem to think about:

There are 100 light bulbs in a row, all turned off. You first go through and turn all of them on. Then you turn off every other bulb, starting with the second bulb. Then you toggle every third bulb, turning the off bulbs on and vice versa, starting with the third bulb. You continue this until you turn off every 100th bulb, starting with the 100th bulb. How many bulbs are lit at the end?


Of course, you could just model out the problem and spend 25 minutes (as well as over $300 if you actually decide to buy the bulbs). However, to look for patterns, let’s first solve the problem with just 10 light bulbs, which won’t take nearly as much time:

In the beginning, all the bulbs are off:

\begin{array}{r|cccccccccc}\text{position}&1&2&3&4&5&6&7&8&9&10\\\hline \text{on/off}&O&O&O&O&O&O&O&O&O&O\\\hline \text{toggles}&0&0&0&0&0&0&0&0&0&0\end{array}

In the middle row, O means “off” and X means “on”. The bottom row shows the number of times each light has been turned on or off.

First we toggle every light:

\begin{array}{r|cccccccccc}\text{position}&1&2&3&4&5&6&7&8&9&10\\\hline \text{on/off}&X&X&X&X&X&X&X&X&X&X\\\hline \text{toggles}&1&1&1&1&1&1&1&1&1&1\end{array}

Then we toggle every other light, starting with the second (so we turn off lights 2, 4, 6, 8, and 10):

\begin{array}{r|cccccccccc}\text{position}&1&2&3&4&5&6&7&8&9&10\\\hline \text{on/off}&X&O&X&O&X&O&X&O&X&O\\\hline \text{toggles}&1&2&1&2&1&2&1&2&1&2\end{array}

Then we toggle lights 3, 6, and 9:

\begin{array}{r|cccccccccc}\text{position}&1&2&3&4&5&6&7&8&9&10\\\hline \text{on/off}&X&O&O&O&X&X&X&O&O&O\\\hline \text{toggles}&1&2&2&2&1&3&1&2&2&2\end{array}

Continuing until we toggle just light 10:

\begin{array}{r|cccccccccc}\text{position}&1&2&3&4&5&6&7&8&9&10\\\hline \text{on/off}&X&O&O&X&O&O&O&O&X&O\\\hline \text{toggles}&1&2&2&3&2&4&2&4&3&4\end{array}

We see that 3 lights are “on” and 7 are “off”. The real question, of course, is which ones: we note that all seven “off” bulbs have been toggled an even number of times (2 or 4 in this case), while the three “on” bulbs have been toggled an odd number of times. We also see that the lights that have been toggled twice are in positions 2, 3, 5, and 7. Hmm… those are the prime numbers, and primes have exactly two factors! Lights 6, 8, and 10 were toggled 4 times, and 6, 8, and 10 all have 4 factors!

Therefore, the bulb in a position with n factors is toggled n times. This makes sense because when we toggle every other bulb, we toggle bulbs in even-numbered positions; when we toggle every third bulb, we toggle those in positions that are multiples of 3; and so on. In general, pass number j toggles bulb k exactly when j is a factor of k.

Only lights in positions that have an odd number of factors will be “on” at the end: every bulb starts off, so an odd number of toggles leaves it on. And we know that only perfect squares have an odd number of factors*, so the number of lights lit at the end is just the number of perfect squares less than or equal to the total number of bulbs!

For 100 light bulbs, there are 10 perfect squares between 1 and 100 (1^2 through 10^2), so at the end, there are \boxed{10} bulbs in the “on” position (and 100-10=90 bulbs off).

*Consider this: factors must come in pairs, so in most cases, a number has an even number of factors. For 24, the pairs are (1,24),(2,12),(3,8),(4,6). However, all perfect squares have a factor that is “paired” with itself: the factor pairs for 9 are (1,9) and (3,3); the pairs for 36 are (1,36), (2,18), (3,12), (4,9), and (6,6). Therefore perfect squares have an odd number of factors.


We can easily solve the same problem for any number of bulbs: for m bulbs in a row, there are \lfloor\sqrt{m}\rfloor perfect squares less than or equal to m, and after doing all of the toggling according to the rules, there will be \lfloor\sqrt{m}\rfloor lights on. Not too bad.
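The factor-counting conclusion is easy to verify by brute force; here is a Python sketch (the function name is my own) that simulates the toggling directly and confirms that the lit bulbs are exactly the ones in perfect-square positions:

```python
def lit_bulbs(m):
    """Simulate the toggling for m bulbs that start off.

    Pass k toggles bulbs k, 2k, 3k, ... (1-indexed), so pass 1
    turns every bulb on. Returns the number of bulbs lit at the end.
    """
    on = [False] * (m + 1)  # index 0 unused
    for k in range(1, m + 1):
        for pos in range(k, m + 1, k):
            on[pos] = not on[pos]
    return sum(on)
```

Since bulb k flips once per factor of k, the count returned is always \lfloor\sqrt{m}\rfloor.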

Now we consider a (seemingly) harder question: we still have 100 off light bulbs, but what if we toggled starting at the first light in each case? That is, we toggle lights 1,2,3,\ldots, then 1,3,5,\ldots, and then 1,4,7,\ldots, etc. Now our factor-counting shortcut breaks down.

Or does it? It seems like the first light will be toggled no matter what, so let’s get it out of the picture for the time being. Re-number the lights so that the first light is in position 0, the second light is in position 1, etc. We have to likewise adjust our toggling procedure: we first toggle lights 0,1,2,\ldots, then lights 0,2,4,\ldots, then 0,3,6,\ldots, and so on. Hey, this is basically the original problem, and we can use our factor-counting trick!

There are two differences, however: because of our renumbering, the highest-numbered bulb is now only 99, not 100. Thus we only need to count the number of perfect squares less than or equal to 99. Furthermore, now we have a light “0”. It is toggled in every single pass, so it is toggled 100 times; an even number of toggles means it ends up “off” at the end.

Let’s generalize this for any number of bulbs: for m bulbs, renumber the bulbs as 0,1,2,\ldots,m-1. Then out of bulbs 1,2,\ldots,m-1, exactly \lfloor\sqrt{m-1}\rfloor will be on at the end. Bulb 0 will be toggled m times, so it will be “off” if m is even and “on” if m is odd. Therefore, the number of bulbs that are “on” is:

\displaystyle \begin{cases}\lfloor\sqrt{m-1}\rfloor&m\text{ is even}\\ \lfloor\sqrt{m-1}\rfloor+1&m\text{ is odd}\end{cases}

For 100 bulbs, there should be \lfloor\sqrt{99}\rfloor=\boxed{9} bulbs on. It turns out this problem is not so much harder than our original problem.

Let’s check this analysis manually (i.e. the long way) for our simple 10-bulb problem: after all of the necessary toggles, we get

\begin{array}{r|cccccccccc}\text{position}&0&1&2&3&4&5&6&7&8&9\\\hline \text{on/off}&O&X&O&O&X&O&O&O&O&X\\\hline \text{toggles}&10&1&2&2&3&2&4&2&4&3\end{array}

There are \lfloor\sqrt{10-1}\rfloor=3 bulbs on, in positions 1, 4, and 9. Note how similar the bottom row is to what we found in the original problem.
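The shifted variant can be brute-forced the same way; a Python sketch (function name mine) where every pass now starts at the first bulb:

```python
import math

def lit_bulbs_shifted(m):
    """Simulate the variant where pass k toggles bulbs 1, 1+k, 1+2k, ...

    Bulbs are 1-indexed and start off; returns the number lit at the end.
    """
    on = [False] * (m + 1)  # index 0 unused
    for k in range(1, m + 1):
        for pos in range(1, m + 1, k):
            on[pos] = not on[pos]
    return sum(on)
```

The result is \lfloor\sqrt{m-1}\rfloor lit bulbs for even m, plus one more (the first bulb, toggled an odd number of times) when m is odd.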

Posted in brain teasers, misc

Carefully lighting lightbulbs

There are 10 light bulbs in a row. How many ways can you light them if no two adjacent lights can be lit?


Consider the light bulb on the far left. If we assume that it is off, then we reduce the problem to the number of ways to light the remaining 9 light bulbs such that no two adjacent bulbs are lit. If the leftmost light is on, then the bulb next to it must be off. We’ve now reduced the problem to the remaining 8 bulbs with no two adjacent “on” bulbs.

The leftmost bulb is either “on” or “off”, so if W_{10} is the number of ways to light 10 bulbs such that no two adjacent bulbs are on, then the argument above shows that W_{10}=W_9+W_8.

But we can make the exact same argument for W_9 and W_8! And we repeat this until we get to W_2 and W_1. We easily see that W_1=2 (there’s only one light, and it can be either on or off) and W_2=3 (on/off, off/on, or off/off). Therefore, W_3=W_2+W_1=5, W_4=W_3+W_2=8, and so on until we get W_{10}=89+55=\boxed{144}.

The sequence W_1,W_2,\ldots resembles the Fibonacci sequence, where each term is the sum of the two preceding terms. Technically, the 1st, 2nd, and 3rd Fibonacci numbers are defined as F_1=1, F_2=1, and F_3=2, so W_1=2 is actually the 3rd Fibonacci number (slight subscript mismatch!), and W_{10} is the 12th Fibonacci number, F_{12}.

Extending this concept, we see that the number of ways to arrange n light bulbs such that no two adjacent lights are on is the (n+2)-th Fibonacci number. Using our compact symbols, W_n=F_{n+2}.
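The recurrence and the closed form are easy to sanity-check in code; a Python sketch (function names are mine):

```python
from itertools import product

def ways_recurrence(n):
    """W_n via W_k = W_{k-1} + W_{k-2} with W_1 = 2, W_2 = 3."""
    a, b = 2, 3
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, a + b
    return b

def ways_bruteforce(n):
    """Count on/off patterns of n bulbs with no two adjacent bulbs on."""
    return sum(
        1
        for bits in product([0, 1], repeat=n)
        if not any(x and y for x, y in zip(bits, bits[1:]))
    )
```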

Alternative solution:

Doing this problem without the Fibonacci sequence requires analyzing several cases: one for each number of “on” lights. Luckily, however, there is a pattern to make life simpler. We are still considering 10 bulbs in a row:

For 0 lit bulbs, there is only one way (light nothing). There are 10 ways to light just one bulb (no concern of two adjacent lights both being “on”). 

Now the fun begins. If two bulbs are lit, there are 8 “off” bulbs. Consider this diagram, where the O’s are the “off” bulbs:

_ O _ O _ O _ O _ O _ O _ O _ O _

The _’s are where the two “on” bulbs can go. There’s no way for them to be next to each other, because we’ve placed an “off” bulb between the spaces. For example, we could arrange as follows (X’s are lit bulbs):

X O X O _ O _ O _ O _ O _ O _ O _

Removing the underscores gives XOXOOOOOOO, a perfectly valid arrangement.

There are 9 spaces, and two lit bulbs, so \binom{9}{2} = 36 ways to arrange them.

With three “on” bulbs, there are 7 “off” bulbs, and 8 positions to place “on” bulbs, similar to the diagram above. There are \binom{8}{3}=56 arrangements.

Continuing, we see a pattern: for i “on” bulbs, there are 10-i “off” bulbs, and 10-i+1=11-i spaces to place “on” bulbs. Therefore, we have \binom{11-i}{i} ways to arrange the lit bulbs.

Summing over all possible numbers of “on” bulbs (0-5):

\displaystyle \binom{11}{0}+\binom{10}{1}+\binom{9}{2}+\binom{8}{3}+\binom{7}{4}+\binom{6}{5} =1+10+36+56+35+6=144

And this is what we got above. To generalize, consider n bulbs in a row. If there are i “on” bulbs, then there are n-i “off” bulbs and n-i+1 positions for the “on” bulbs, giving \binom{n-i+1}{i} arrangements. Summing over all possible i:

\displaystyle \sum_{i=0}^n \binom{n-i+1}{i}

Now, we’re going to get some binomial coefficients with larger bottoms than tops; set these equal to 0, because they correspond to impossible situations. In case you’re wondering, the maximum possible i is equal to n/2 for even n (alternating on/off), and \frac{n+1}{2} for odd n (alternating on/off, with the first and last bulbs “on”). So to be more technically correct, we should have used the ceiling function and had the summation end at \lceil{n/2}\rceil.

Notice that we’ve proven the following mindblowing identity by counting the same problem in two different ways:

\displaystyle \sum_{i=0}^{\lceil{n/2}\rceil} \binom{n-i+1}{i}=F_{n+2}

or equivalently,

\displaystyle \sum_{i=0}^{\lfloor{n/2}\rfloor} \binom{n-i}{i}=F_{n+1}

where F_n is the nth Fibonacci number, with F_1=1,F_2=1,\ldots. To get the second equation, we just replaced n with n-1. A more subtle detail is the change from the ceiling function to the floor function on top of the summation symbol; this follows from the identity \lceil\frac{n-1}{2}\rceil=\lfloor n/2\rfloor for integers n.
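A quick numerical spot-check of the second identity (a Python sketch; helper names are mine):

```python
from math import comb

def fib(n):
    """n-th Fibonacci number with F_1 = F_2 = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

def binomial_sum(n):
    """Left-hand side of the identity: sum of C(n - i, i) for i = 0 .. floor(n/2)."""
    return sum(comb(n - i, i) for i in range(n // 2 + 1))
```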

So the next time you’re drowning in binomial coefficients that seem to make some sort of weird pattern, this formula might come in handy (but first, you should memorize some of the Fibonacci sequence!).

Posted in other probability

Simulating fair coins with unfair coins (and vice versa)

You are a tennis umpire. In order to decide which player serves first, you need to flip a fair coin. Instead, you only find a biased coin with probability 1/3 of landing heads. Can you simulate a fair coin with this biased coin? (Can you also do the reverse, that is, simulate the biased coin with a fair coin?)


(We’ll worry about simulating the biased coin with the fair coin later, in the Extensions section)

The key to finding a solution is to realize that you don’t have to flip the biased coin just once, and that you don’t have to consider each outcome of each flip.

Obviously, flipping the coin once will not work, so let’s try twice, and look at the probabilities, keeping in mind that the probability of flipping a tail is 2/3:

P(HH)=\frac{1}{3}\left(\frac{1}{3}\right)=\frac{1}{9} \qquad P(HT)=\frac{1}{3}\left(\frac{2}{3}\right)=\frac{2}{9}

P(TH)=\frac{2}{3}\left(\frac{1}{3}\right)=\frac{2}{9} \qquad P(TT)=\frac{2}{3}\left(\frac{2}{3}\right)=\frac{4}{9}

Notice that P(HT)=P(TH)! This is exactly what we need: two outcomes with equal probabilities. If we ignore all the other outcomes, we have a fair situation. We can let HT correspond to “heads” and TH to “tails”. Both HT and TH happen with equal chance, so we basically have a fair coin. If we get HH or TT we just pretend that the outcomes never happened and flip again.

We can actually improve on this, however. Our current plan requires us to repeat our two-flip sequence 5/9 of the time (=P(HH)+P(TT)). Think of how irritating this would be before a tennis match. Notice that the chance of flipping either HT OR TH is 4/9, which is the same as the probability of flipping TT! Our new strategy:

– if HT or TH, then “heads”,

– if TT, then “tails”, and

– if HH, then reflip

This way, we only have to repeat the two-flip sequence 1/9 of the time. Unfortunately, if the coin was biased in a different way, such as P(H) = 1/5, this strategy wouldn’t work. But the first HT/TH pairing strategy seems promising, because it looks like it’s based on symmetry (HT and TH are simply the reverse of one another).


Assume instead that the coin had a probability p of coming up heads and 1-p of tails. Now let’s look at our two-flip situation:

P(HH)=p^2 \qquad P(HT)=p(1-p)

P(TH)=(1-p)p \qquad P(TT)=(1-p)^2

Indeed, P(HT)=P(TH), so we could just ignore the HH and TT cases like above; then, we map HT to “heads” and TH to “tails” and we’re done!

This method is pretty efficient if p(1-p) is not too small; the probability that we don’t have to redo the sequence is P(HT)+P(TH)=2p(1-p). If p=0.5, this value is 1/2, which is the maximum over all 0\le p\le1. As p gets farther away from 0.5, the value of p(1-p) gets smaller. We saw that for p=1/3, 2p(1-p)=4/9, which is slightly less than 1/2. When p=0.1, we have 2p(1-p)=0.18, so we would have to repeat the sequence more than 4 out of 5 times!
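This HT/TH pairing is the classic von Neumann trick, and it’s easy to check empirically; a Python simulation sketch (function names are mine):

```python
import random

def biased_flip(p=1/3):
    """A coin that lands heads with probability p."""
    return "H" if random.random() < p else "T"

def fair_flip(flip=biased_flip):
    """Von Neumann's trick: flip twice until the two results differ.

    HT and TH each occur with probability p(1-p), so mapping
    HT -> heads and TH -> tails yields a fair coin.
    """
    while True:
        a, b = flip(), flip()
        if a != b:
            return "H" if a == "H" else "T"
```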

However, one could also argue that if you only have a coin that gives heads 90% of the time, you may want to resort to other methods of producing a fair outcome.


OK. Now let’s try to simulate an unfair outcome with a fair coin. There are infinitely many unfair outcomes, but only one fair outcome, so we now have a much more complicated problem on our hands.

Let’s start with our original problem: using a fair coin to simulate a coin with P(H) = 1/3, or P(T) = 2/3. In other words, we need to design two outcomes, one of which happens twice as often as the other.

This is actually quite easy. Notice that P(HH) = 1/4 and P(HT or TH) = 2/4. Bam, we’re done: flip the coin twice; map HH to “heads”, and TH or HT to “tails”, and TT means to reflip.

The general case, modeling any unfair situation with a fair coin, is much harder because as noted above, there are infinitely many cases. Let’s try to tackle a few:

We can in fact replicate any coin with P(H) = 1/n, where n is an integer. In other words, we want to find two outcomes, one of which happens n-1 times more often than the other.

Well, P(n-1\text{ H's in a row}) = \frac{1}{2^{n-1}}, and P(\text{1 H, }n-2 \text{ T's}) =\frac{n-1}{2^{n-1}}, so we can design the following scheme to simulate P(H) = 1/n:

Flip n-1 coins. If all heads, map to “heads”. If exactly one head shows up, map to “tails”. Reflip on any other occasion*. Notice that P(H) = 1/3 is achieved by taking n = 3.
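A simulation sketch of this scheme in Python (function name mine; note that for n = 2 the fair coin itself already works):

```python
import random

def simulate_one_over_n(n):
    """Simulate P(heads) = 1/n with fair flips, for n >= 3.

    Each round flips n-1 fair coins: all heads maps to "H"
    (probability 1/2^(n-1)), exactly one head maps to "T"
    (probability (n-1)/2^(n-1)); any other outcome reflips.
    """
    while True:
        heads = sum(random.random() < 0.5 for _ in range(n - 1))
        if heads == n - 1:
            return "H"
        if heads == 1:
            return "T"
```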

By switching up our mapping (replace “heads” with “tails” and vice versa), we can replicate P(H) = \frac{n-1}{n} in almost an identical fashion.

Replicating P(H) = 2/n is similar:

Flip n-2 coins (this works for n\ge4; for n=3, just reverse the 1/3 mapping as above). If all heads or all tails, map to “heads”. If exactly one head shows up, map to “tails”. Reflip on all other outcomes. (The “heads” outcomes have probability 2/2^{n-2} and the “tails” outcomes (n-2)/2^{n-2}, giving exactly the 2:(n-2) ratio we need.)

And P(H) = \frac{n-2}{n} is obtained by reversing the mapping.

The simple fractions are quite easy to simulate, but after that the game becomes as hard as you want it to get. You really have to play around with the binomial coin probabilities to get the answer. I’m not sure if there is a general algorithm to generate any P(H) fraction. For example, can we do P(H) = 3/13?

*We can dramatically decrease the number of expected reflips for the P(H) = 1/n case by mapping all heads OR all tails to “heads”, and mapping one head or one tail to “tails”. Unfortunately, no equivalent shortcut is available for P(H) = 2/n.

Posted in Uncategorized

Flipping HHT before HTT? Not what you think

If you keep flipping a fair coin, what is the probability that you will get HHT before HTT (H = heads, T = tails)? How about HHH before HTT? HHT before TTT?


The answer seems like \frac{1}{2} in each case, because the coin is fair, and there shouldn’t be a preference for one pattern over another, right? After all, the probability that you will flip HHT right off the bat is \frac{1}{8}, and the same for HTT and HHH, so there should be symmetry. Well, this is not the case, or else there wouldn’t be an entire post about it. Let’s see why.

Assume first that we want HHT before HTT, and let P(E) be the probability of this happening. If we flip tails first, we have made no progress towards either HHT or HTT, so essentially nothing has happened; we could just forget about that flip altogether. Remember that we only care about one pattern relative to the other, so we don’t care about sequences that affect both patterns equally. The same goes if we flip tails again; we need to ignore any initial tail flips and focus on what happens once we flip our first head. In other words,

P(E)=P(E|T)=P(E|TT)=P(E|TTT)=\cdots

where P(E|TT) means the probability of achieving our desired event E given that we flip two T’s first (it’s a conditional probability).

However, by the law of total probability,

P(E)=\frac{1}{2}P(E|H)+\frac{1}{2}P(E|T)=\frac{1}{2}P(E|H)+\frac{1}{2}P(E),

so P(E|H)=P(E). This makes sense, because whether HTT or HHT occurs first depends solely on what happens once we flip a head.

Given that we have our first head, we can either flip heads again to end up with HH or get tails to end up with HT. Either happens with 1/2 chance, so

P(E|H)=\frac{1}{2}P(E|HH)+\frac{1}{2}P(E|HT)        (*)

If we get heads again (we now have HH), we’ve automatically achieved HHT before HTT. Why? Because we already have (at least) two heads, so as soon as we get tails, we would have HHT. We could flip five heads in a row, and then tails, or even 100 heads, but we would get HHT as soon as we flip tails.

Therefore, P(E|HH)=1 and

P(E|H)=\frac{1}{2}(1)+\frac{1}{2}P(E|HT)=\frac{1}{2}+\frac{1}{2}P(E|HT)
What if we get tails right after flipping the first head? Now we have HT. Nothing can be concluded for sure in this case, so let’s go further:

-If we get tails again, we have HTT before HHT, which isn’t what we want. So if we flip two tails in a row after flipping our first head, we’ll have a 0 chance of winning (i.e. we lose).

-On the other hand, if we get heads again, we have HTH. The HT part is now essentially useless because it doesn’t make any progress towards either HHT or HTT (the lone H before the T can’t contribute to the two consecutive heads HHT needs, and the H right after the HT prevents HTT from occurring). This is kind of like when we kept flipping tails in the beginning: we ignored the tails until we got our first head. Therefore, obtaining HTH is just like flipping our first head, or P(E|HTH)=P(E|H)=P(E).

Therefore, P(E|HT)=\frac{1}{2}P(E|HTT)+\frac{1}{2}P(E|HTH)=\frac{1}{2}(0)+\frac{1}{2}P(E)

Putting this into our starred equation above:

P(E)=P(E|H)=\frac{1}{2}(1)+\frac{1}{2}\left(\frac{1}{2}P(E)\right)=\frac{1}{2}+\frac{1}{4}P(E)
Solving gives us P(E)=\boxed{\frac{2}{3}}.

We have a greater chance of getting HHT first. Surprising, huh? This is because once we ever get HH, we win automatically. There is no such scenario for HTT; no two-flip sequence guarantees a HTT.
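A Monte Carlo sanity check of the 2/3 answer (a Python sketch; the function name is mine):

```python
import random

def first_pattern(a, b):
    """Flip a fair coin until pattern a or pattern b appears; return the winner."""
    k = max(len(a), len(b))
    recent = ""
    while True:
        recent = (recent + random.choice("HT"))[-k:]  # keep only the last k flips
        if recent.endswith(a):
            return a
        if recent.endswith(b):
            return b
```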


HHH before HTT warrants a similar analysis; let P(E) be the probability of getting HHH before HTT. If we get initial tails, effectively nothing happens.

Assume we flip heads. If another head comes up, giving HH, we can’t conclude anything just yet (no automatic wins like the HH case last time), so we have to break it up into further cases.

– Yet another head (HHH) will let us automatically win.

– Flipping a tail (HHT) gives a situation equivalent to having just HT (we’ll analyze HT next); the first head is now useless, because the T that follows makes HHH impossible for the time being, while the extra head makes no progress towards HTT (we only need one head for HTT, so getting two heads in a row doesn’t help!).

So P(E|HH)=\frac{1}{2}P(E|HHH)+\frac{1}{2}P(E|HHT)= \frac{1}{2}+\frac{1}{2}P(E|HT).

Meanwhile, if we get HT, we also have to look further:

– Yet another tail (HTT) will cause an immediate loss; P(E|HTT)=0

– Another head (HTH) will revert us back to getting the first head; the HT creates no progress towards either HHH or HTT.

So P(E|HT)=\frac{1}{2}P(E|HTT)+\frac{1}{2}P(E|HTH)=\frac{1}{2}P(E|H)=\frac{1}{2}P(E).

Note that P(E|H)=P(E) for the same reasons as we saw earlier. Now we’re ready to tackle the big equation:

P(E)=P(E|H)=\frac{1}{2}P(E|HH)+\frac{1}{2}P(E|HT)=\frac{1}{4}+\frac{3}{4}P(E|HT)=\frac{1}{4}+\frac{3}{8}P(E)

where we have used the fact that P(E|HT)=\frac{1}{2}P(E). Solving gives P(E)=\boxed{\frac{2}{5}}.

This would be impossible to predict at first, and requires this kind of case-by-case analysis.


Lastly, consider HHT before TTT. Now flipping a tail in the beginning actually matters. For now, let’s just set the probability of winning given tails first equal to P(E|T) and consider flipping a head first. If we do, then flipping another head (HH) would guarantee a win. If we flip a tail (HT), we’re basically back to the case of flipping a tail first. So P(E|HT)=P(E|T).

Therefore, P(E|H)=\frac{1}{2}P(E|HH)+\frac{1}{2}P(E|HT)=\frac{1}{2}+\frac{1}{2}P(E|T).

Now to find P(E|T): to save myself from being redundant, you can convince yourself that P(E|TH)=P(E|H) and P(E|TTH)=P(E|H). Of course, P(E|TTT)=0. Therefore,

P(E|TT)=\frac{1}{2}P(E|TTH)+\frac{1}{2}P(E|TTT)=\frac{1}{2}P(E|H)

And P(E|T)=\frac{1}{2}P(E|TH)+\frac{1}{2}P(E|TT)=\frac{1}{2}P(E|H)+\frac{1}{2}\left(\frac{1}{2}P(E|H)\right)=\frac{3}{4}P(E|H)
Plugging this into our equation for P(E|H):

P(E|H)= \frac{1}{2}+\frac{1}{2}\left(\frac{3}{4}P(E|H)\right).

Solving, we get P(E|H)=\frac{4}{5} and thus P(E|T)=\frac{3}{4}\left(\frac{4}{5}\right)=\frac{3}{5}.

Now, for the answer:

P(E)= \frac{1}{2}P(E|H)+ \frac{1}{2}P(E|T)=\frac{2}{5}+\frac{3}{10}=\boxed{\frac{7}{10}}.

Whew! Quite a strange answer, considering we only have 8 possibilities after 3 flips of a coin.

You can do the same for HHH before TTT, HHT before THT, etc. Notice that the chance of getting HTT before HHT is 1 minus the probability of getting HHT before HTT, the latter of which we already know. Also notice that the probability of flipping TTH before THH is the same as the probability of flipping HHT before HTT, because we can just swap the heads and tails in our argument and get the same answer.

It may be interesting and fun to create a table of the probabilities of getting one pattern before another. Since there are 2^3=8 possible patterns, there can be 8\times7=56 possible comparisons. However, because of the symmetry of heads and tails described above (for example, HHT before HHH is the same as TTH before TTT), this number can be immediately cut in half to give 28 comparisons. Using the “1 minus” rule for certain combinations makes counting even simpler.
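For building such a table without solving 28 systems of equations by hand, one known shortcut (not used in this post) is Conway’s leading-numbers algorithm; here is a Python sketch (function names mine) that reproduces all three answers above exactly:

```python
from fractions import Fraction

def correlation(x, y):
    """Conway's leading number: sum of 2^(k-1) over every k for which
    the last k flips of x match the first k flips of y."""
    return sum(2 ** (k - 1) for k in range(1, len(x) + 1) if x[-k:] == y[:k])

def prob_first(a, b):
    """Exact probability that pattern a appears before pattern b
    in a sequence of fair coin flips."""
    odds_a = correlation(b, b) - correlation(b, a)
    odds_b = correlation(a, a) - correlation(a, b)
    return Fraction(odds_a, odds_a + odds_b)
```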

In fact, at some point, I might post a table of these probabilities; I haven’t done so yet, but since I’ve already figured out three of the harder cases, the others should follow from problem solving momentum.

Posted in coins, other probability

Dipping cubes into paint

A little break from the complexity of dice sums to focus on painting cubes:

You submerge an unpainted 3x3x3 cube, made up of 27 1x1x1 cubes, into red paint and break it apart into the small individual cubes. Then you pick a small cube at random and roll it. What is the probability that you’ll end up with a red painted side on top?


This is a classic example of a problem that really only requires 30 seconds to solve, but at first sight seems much more complex. Most systematically-minded people will take the longer route:

The small 1x1x1 cubes will either have no sides painted (the one in the middle of the 3x3x3 cube, aka the interior piece), one side painted (the center of each face, aka a face piece), two sides painted (the middle cube of each edge, aka an edge piece), or three sides painted (the corners). There are 27 cubes total: 1 interior piece, 6 face pieces, 12 edge pieces, and 8 corner pieces. The probability of selecting the interior piece is thus \frac{1}{27}, and \frac{6}{27} for selecting a face piece, and so on. With n sides painted, the probability of rolling a painted side up is n/6, because a cube has 6 sides. Therefore our total probability of rolling a painted face up is:

\displaystyle \underbrace{\frac{1}{27}\left(\frac{0}{6}\right)}_\text{interior piece}+\underbrace{\frac{6}{27}\left(\frac{1}{6}\right)}_\text{face piece}+\underbrace{\frac{12}{27}\left(\frac{2}{6}\right)}_\text{edge piece}+\underbrace{\frac{8}{27}\left(\frac{3}{6}\right)}_\text{corner}=\frac{6+24+24}{162}=\boxed{\frac{1}{3}}.

When you have an answer that simple, you know that there’s a better way to do it. The shorter way would be to realize that on the big cube, there are 3×3=9 small cubes showing on each of the 6 faces, so 6*9=54 ‘small’ faces are painted (this is the same thing as the surface area of the large cube). When you break up the cube, there are 27*6=162 ‘small’ faces in total, 6 faces for each of the 27 smaller cubes. So the probability of getting a painted face when you roll a random small cube is just 54/162=\boxed{1/3}, like above.


You can ask the same problem for basically any 3D object made of 1x1x1 cubes, and with the short method, the answer is simple to find: you take the number of small faces that are painted (that is, the surface area of the larger object), and divide it by the total number of faces present after breaking the object apart; there are 6 faces per small cube, so the total number of faces present is 6*(number of smaller cubes).

The number of 1x1x1 cubes is just the volume of the object. So the probability of rolling a painted side after selecting a small cube at random is:

\displaystyle \frac{\text{surface area}}{6\cdot\text{volume}}.

For example, say we have a 3x4x5 rectangular box made up of 60 1x1x1 cubes, and we paint the outside. If we break it apart and roll a cube at random, the probability that a painted side will show up is:

\displaystyle \frac{2(3(4)+4(5)+3(5))}{6\cdot3(4)(5)}=\frac{47}{180}.

You would get the same answer if you went through the analysis of interior, face, edge, and corner pieces: there are (3-2)\times(4-2)\times(5-2)=6 interior pieces (subtract two from each dimension, because you don’t want the two outside layers), 24 edge pieces (for an edge of length n, there are n-2 edge pieces; there are 4 edges of each length, so we have 4((3-2)+(4-2)+(5-2))=24), 8 corners (as always), and 22 face pieces (what’s remaining of the 60 cubes must be the face pieces!), for a probability of

\displaystyle \underbrace{\frac{6}{60}\left(\frac{0}{6}\right)}_\text{interior piece}+\underbrace{\frac{22}{60}\left(\frac{1}{6}\right)}_\text{face piece}+\underbrace{\frac{24}{60}\left(\frac{2}{6}\right)}_\text{edge piece}+\underbrace{\frac{8}{60}\left(\frac{3}{6}\right)}_\text{corner}=\frac{22+48+24}{360}=\frac{47}{180}.
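Both routes are easy to check by brute force: count every painted unit face directly and compare against surface area over 6 times volume. A Python sketch (function name mine):

```python
from fractions import Fraction
from itertools import product

def painted_face_prob(a, b, c):
    """Probability of rolling a painted face up for an a x b x c box.

    Counts painted unit-cube faces directly: a unit cube's face is
    painted exactly when it lies on the surface of the box.
    """
    painted = 0
    for x, y, z in product(range(a), range(b), range(c)):
        painted += (x == 0) + (x == a - 1)
        painted += (y == 0) + (y == b - 1)
        painted += (z == 0) + (z == c - 1)
    return Fraction(painted, 6 * a * b * c)
```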

This is quite complicated, even though we made life a little easier by counting the face pieces last; they are hardest to count directly, so we counted the other pieces first and took the remainder to be face pieces.

For a cube with side length n, the surface area is 6n^2 and the volume is n^3. So the probability of rolling a painted side would be \frac{1}{n}. This confirms our answer of 1/3 from above for n=3. Notice that for larger cubes, the probability of getting a painted piece approaches zero. This makes sense, because there are so many interior cubes that have no paint on them at all; while there is only 1 interior cube in a 3x3x3, or 1/27 of the cubes, there are 8^3=512 interior cubes in a 10x10x10 – over half the cubes are interior!

Posted in dice, other probability

Most likely sum of two and three dice (part 2)

Part 1 | Part 2 | Part 3

As reference, below are the tables for two and three dice possibilities, adapted from the previous post.

\begin{array}{ccc}x&d_2(x)&d_3(x)\\\hline  2&1&0\\3&2&1\\4&3&3\\5&4&6\\6&5&10\\7&6&15\\8&5&21\\9&4&25\\10&3&27\\11&2&27\\12&1&25\\13&&21\\14&&15\\15&&10\\16&&6\\17&&3\\18&&1\end{array}

Before you stare at the top row of the table cluelessly, let me mention that the ds are just a convenient shorthand that I made up 20 seconds ago: d_3(4) means the number of ways to get a sum of 4 with three dice, and d_2(x-1) is the number of ways to get a sum of x-1 with two dice. It’s truly a relief to compress a 13-word phrase into a simple symbol.


Just like the two dice case, the number of possibilities in the three-dice case is symmetric. This seems amazing, but the reason is not difficult to see. If you pair the sums that have equal number of possibilities: (3,18), (4,17), etc., you’ll notice that each pair adds up to 21. If you take any possibility with a certain sum, say a,b,c, and replace each item by 7 minus that item: 7-a,7-b,7-c, the result will add up to 21-(a+b+c), or 21 minus the original sum. In other words, if you take a sum, take all the possibilities with that sum, and replace the items in each possibility with 7 minus the items, you’ll get possibilities that add up to the other element in the sum pair: 21 minus your original sum. Using our d notation, we can write d_3(x)=d_3(21-x).

That may have been confusing, so let’s look at a more concrete case. For example, take the possibility 1,1,2, which adds up to 4, and change it to 6,6,5, which adds up to 17. The sums (4,17) are indeed a pair that adds up to 21.

Since for every possibility of a given sum, you can form a (unique) possibility that gives the other element in the sum pair, the two elements in the sum pair must correspond to equal numbers of possibilities.

This is true for any number of dice: if you replace each of the items x_i in a possibility x_1,x_2,\ldots,x_n (which gives a sum of x_1+x_2+\cdots+x_n) with 7-x_i, you will get a possibility with sum 7n-(x_1+x_2+\cdots+x_n), which is the other element of the sum pair. For a given sum, this can be done with each possibility giving the sum, so the elements of the sum pair give equal numbers of possibilities. Using our shorthand, d_n(x)=d_n(7n-x).
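Both the symmetry d_n(x)=d_n(7n-x) and the tables above are easy to verify by building the counts one die at a time; a Python sketch (function name mine):

```python
def dice_counts(n):
    """d_n(x): number of ways to roll a sum of x with n six-sided dice.

    Convolves one die at a time; returns a dict mapping sum -> count.
    """
    counts = {0: 1}
    for _ in range(n):
        new = {}
        for s, c in counts.items():
            for face in range(1, 7):
                new[s + face] = new.get(s + face, 0) + c
        counts = new
    return counts
```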

Now our task is to figure out whether the number of possibilities for each sum forms a symmetrical ‘pyramid’, because it seems to for 2 and 3 dice. In other words, the number of possibilities increases, then decreases, and the change from increase to decrease happens at the halfway point. If it does, then we’ve found our answer: the most likely sum is halfway between the minimum and maximum possible sums.

Mathematically, the minimum possible sum is n (roll all ones) and the maximum is 6n (roll all sixes), so the halfway point is their average: 3.5n. If n is even, this is a whole number: great, we’re done. If n is odd, we’ll get something with a .5 at the end, like 10.5 for n=3. In this case, the numbers directly below and above the halfway point will be the sums with the most possibilities; for three dice, those are 10 and 11.

So the symmetric ‘pyramid’ idea can be more formally written as d_n(x+1)\ge d_n(x) for all x<3.5n and d_n(x+1)\le d_n(x) for all x\ge3.5n.

OK. Enough mathematical digression. Let’s first take a look at how we can count the number of possibilities for each sum more efficiently (instead of listing out the possibilities). How can we make 9, for example, with three dice? We can roll a 1 and then the remaining two dice have to sum up to 8. We can roll a 2 and then the remaining dice sum up to 7, and so on. However, we know the number of ways to get sums with 2 dice! So for a sum of 9, we can have:

\begin{array}{ccc}\text{First number}&\text{Remaining two dice}&\text{Possibilities} \\ \hline  1&8&5\\2&7&6\\3&6&5\\4&5&4\\  5&4&3\\6&3&2\\\hline&d_3(9)=&25  \end{array}

where d_2(8)=5, d_2(7)=6, etc. Notice that the number of possibilities is purely dependent on the number in the second column, which is already known from our earlier analysis; if the second column is y, the third column is d_2(y). The answer of 25 for d_3(9) agrees with the table above. How can we use this method of counting to our benefit? Consider what happens when you go from a sum of 9 to a sum of 10:

\begin{array}{ccc}  \text{First number}&\text{Remaining two dice}&\text{Possibilities}\\ \hline  1&9&4\\2&8&5\\3&7&6\\4&6&5\\  5&5&4\\6&4&3\\\hline&d_3(10)=&27\end{array}

In the third column, we lost the last row (2 possibilities) and gained the top row (4 possibilities), so our total increases by 2. Notice how the items in the third column get shifted down one position, which is because the second column gets shifted down one position. What if we go to sums of 11 and 12?

\begin{array}{ccc}  \text{First number}&\text{Remaining two dice}&\text{Possibilities} \\ \hline  1&10&3\\2&9&4\\3&8&5\\4&7&6\\  5&6&5\\6&5&4\\\hline&d_3(11)=&27  \end{array}

\begin{array}{ccc}  \text{First number}&\text{Remaining two dice}&\text{Possibilities} \\ \hline  1&11&2\\2&10&3\\3&9&4\\4&8&5\\  5&7&6\\6&6&5\\\hline&d_3(12)=&25  \end{array}

From 10 to 11, the numbers in the third column stay the same (just in a different order), so the sum doesn’t change either. From 11 to 12, shifting the third column down pushes the 4 (bottom row) out, and a 2 comes up on the top, so the sum drops by 2.
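In code, this column-by-column count is just the recursion d_3(x) = d_2(x-1) + \cdots + d_2(x-6). A minimal sketch, using a closed form for the two-dice pyramid:

```python
# Two-dice counts: the pyramid 1,2,...,6,...,2,1 for sums 2 through 12.
def d2(s):
    return 6 - abs(s - 7) if 2 <= s <= 12 else 0

def d3(x):
    # First die shows k; the remaining two dice must sum to x - k.
    return sum(d2(x - k) for k in range(1, 7))

print([d3(x) for x in (9, 10, 11, 12)])  # [25, 27, 27, 25], as in the tables
```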

OK. We’re definitely going somewhere here. Let’s make our ideas more rigorous. For a sum of x, we must have:

\begin{array}{ccc}  \text{First number}&\text{Remaining two dice}&\text{Possibilities} \\ \hline  1&x-1&d_2(x-1)\\2&x-2&d_2(x-2)\\3&x-3&d_2(x-3)\\4&x-4&d_2(x-4)\\  5&x-5&d_2(x-5)\\6&x-6&d_2(x-6)\\\hline&\text{(sum=}x)&d_3(x)  \end{array}

If we go from a sum of x to a sum of x+1:

\begin{array}{ccc}  \text{First number}&\text{Remaining two dice}&\text{Possibilities} \\ \hline  1&x&d_2(x)\\2&x-1&d_2(x-1)\\3&x-2&d_2(x-2)\\4&x-3&d_2(x-3)\\5&x-4&d_2(x-4)\\6&x-5&d_2(x-5)\\\hline&\text{(sum=}x+1)&d_3(x+1)  \end{array}

All the numbers in the second and third columns get pushed down one position (the first column stays put), so the last row of each column drops off the bottom and a new row appears at the top.

As you can see, the third column stays the same except that d_2(x-6) is replaced by d_2(x). The change, d_3(x+1)-d_3(x), is equal to the difference d_2(x)-d_2(x-6).

When is this difference positive and when is it negative? Let us see:

\begin{array}{l|c|c|c}  x&d_2(x)&d_2(x-6)&d_2(x)-d_2(x-6)\\\hline 2&1&0&1\\3&2&0&2\\4&3&0&3\\5&4&0&4\\6&5&0&5\\7&6&0&6\\8&5&1&4\\9&4&2&2\\10&3&3&0\\11&2&4&-2\\12&1&5&-4\\13&0&6&-6\\14&0&5&-5\\15&0&4&-4\\16&0&3&-3\\17&0&2&-2\end{array}

Indeed, the difference is always positive until you get to x=10, where it drops to 0 and then goes negative for larger x. You can thus see that

d_3(x+1)-d_3(x)=d_2(x)-d_2(x-6)\begin{cases}>0&\text{for }x<10\\=0&\text{for }x=10\\<0&\text{for }x>10\end{cases}
And therefore, it’s reasonable to conclude that sums of 10 and 11 have the greatest number of possibilities, since the changes in d_3(x) become negative afterwards.

Why exactly does the difference go from positive to negative at x=10? Consider the shape of the 2-dice sum distribution: d_2(x) increases for x\le 7 and decreases thereafter, and the distribution is symmetric. For x=10, we know from symmetry that d_2(10)=d_2(4), so d_3(11)-d_3(10)=0 as we saw. For larger x, d_2(x) gets smaller, while d_2(x-6) gets larger, so their difference becomes more negative. The opposite is true for smaller x.
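The difference identity d_3(x+1)-d_3(x)=d_2(x)-d_2(x-6), along with its sign change at x=10, can be verified mechanically. A brute-force sketch:

```python
from itertools import product
from collections import Counter

# Brute-force counts for two and three dice (Counter returns 0 for missing sums).
d2 = Counter(a + b for a, b in product(range(1, 7), repeat=2))
d3 = Counter(sum(r) for r in product(range(1, 7), repeat=3))

for x in range(3, 18):
    diff = d3[x + 1] - d3[x]
    assert diff == d2[x] - d2[x - 6]   # the identity from the tables
    assert (diff > 0) == (x < 10)      # positive before x = 10
    assert (diff < 0) == (x > 10)      # negative after x = 10
```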

We are now convinced that the most likely sum for two and three dice occurs in the “middle” of the distribution: the average of the highest and lowest possible sums. Soon we will see that this is true for any number of 6-sided dice:

\text{most likely sum}=\arg\max_x d_n(x)=\begin{cases}3.5n&\text{if }n\text{ is even}\\3.5n\pm0.5&\text{if }n\text{ is odd}\end{cases}

The stuff on the left of the equals sign means “the value of x (i.e. the argument) that gives the maximum value of d_n(x)“. Recall that the largest possible sum for n dice is 6n, the smallest is n, so their average is 3.5n. However, this is a decimal if n is odd, so as in the 3-dice case, there are two most probable sums. In the next post, we will prove this generalization.
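As a preview of that generalization, here is a brute-force check for a few small n (a sketch with a hypothetical helper name, not the proof itself):

```python
from itertools import product
from collections import Counter

def most_likely_sums(n):
    """All sums attaining the maximum count for n 6-sided dice."""
    counts = Counter(sum(r) for r in product(range(1, 7), repeat=n))
    best = max(counts.values())
    return sorted(s for s, c in counts.items() if c == best)

# Even n: a single peak at 3.5n.  Odd n: two peaks at 3.5n - 0.5 and 3.5n + 0.5.
for n in range(2, 6):
    mid = 3.5 * n
    if n % 2 == 0:
        assert most_likely_sums(n) == [int(mid)]
    else:
        assert most_likely_sums(n) == [int(mid - 0.5), int(mid + 0.5)]
```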


Most likely sum of two and three dice

Part 1 | Part 2 | Part 3

What is the most likely value of the sum of two dice? How about three dice?


For two dice, it’s quite easy and straightforward, and you could just count. Let S be the sum, and in the possibilities column, each tuple represents the dice values:

\begin{array}{lcl|l}  &&&\text{Possibilities}\\\hline  P(S=2)&=&\frac{1}{36}&(1,1)\\  P(S=3)&=&\frac{2}{36}&(1,2),(2,1)\\  P(S=4)&=&\frac{3}{36}&(1,3),(3,1),(2,2)\\  P(S=5)&=&\frac{4}{36}&(1,4),(4,1),(2,3),(3,2)\\  P(S=6)&=&\frac{5}{36}&(1,5),(5,1),(2,4),(4,2),(3,3)\\  P(S=7)&=&\frac{6}{36}&(1,6),(6,1),(2,5),(5,2),(3,4),(4,3)\\  P(S=8)&=&\frac{5}{36}&(2,6),(6,2),(3,5),(5,3),(4,4)\\  P(S=9)&=&\frac{4}{36}&(3,6),(6,3),(4,5),(5,4)\\  P(S=10)&=&\frac{3}{36}&(4,6),(6,4),(5,5)\\  P(S=11)&=&\frac{2}{36}&(5,6),(6,5)\\  P(S=12)&=&\frac{1}{36}&(6,6)\\  \end{array}

Notice this pyramid-like shape of the possibilities list – it’s an easy way to remember the probability distribution of the sum of two dice: start with \frac{1}{36} for S=2 or 12, go to \frac{2}{36} for S=3 or 11, and work your way up.
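If you’d rather let a machine do the tallying, a few lines of Python over all 36 ordered pairs reproduce the same pyramid:

```python
from collections import Counter

# Tally all 36 ordered (first die, second die) outcomes by their sum.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))

for s in range(2, 13):
    print(f"{s:2d} {'X' * counts[s]}")  # one X per possibility: the pyramid
```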

For three dice, it’s not so simple. The number of possibilities goes

\begin{array}{l|l}  \text{sum}&\text{possibilities}\\\hline  3&1\\4&3\\5&6\\6&10\\7&15\\8&21\\9&25\\10&27\\11&27\\12&25\\13&21\\14&15\\15&10\\16&6\\17&3\\18&1\end{array}

So the answer is 10 or 11, but this took a lot of counting work (not to mention the potential for error). Worse yet, what if we had to repeat this problem for four or five dice? In the next post, we’ll gain some insight into how these numbers of possibilities work.
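For those who’d rather not count by hand, the table above can be reproduced by enumerating all 6^3 = 216 ordered rolls. A quick sketch:

```python
from itertools import product
from collections import Counter

# Enumerate every ordered roll of three dice and tally the sums.
counts = Counter(sum(roll) for roll in product(range(1, 7), repeat=3))

for s in range(3, 19):
    print(s, counts[s])  # sums 10 and 11 each occur 27 times, the maximum
```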
