Can You Beat the Heats?
There are 25 racers, but only 10 can race in any given heat. Without a stopwatch, how many heats would you need to identify the fastest, second-fastest, and third-fastest racers?
Welcome to Fiddler on the Proof! The Fiddler is the spiritual successor to FiveThirtyEight’s The Riddler column, which ran for eight years under the stewardship of Ollie Roeder and me.
Each week, I present mathematical puzzles intended to both challenge and delight you. Beyond these, I also hope to share occasional writings about the broader mathematical and puzzle communities.
Puzzles come out Friday mornings (8 a.m. Eastern time). Most can be solved with careful thought, pencil and paper, and the aid of a calculator. Many include “extra credit,” where the analysis gets particularly hairy or where you might turn to a computer for assistance.
I’ll also give a shoutout to 🎻 one lucky winner 🎻 of the previous week’s puzzle, chosen randomly from among those who submit their solution before 11:59 p.m. the Monday after that puzzle was released. I’ll do my best to read through all the submissions and give additional shoutouts to creative approaches or awesome visualizations, the latter of which could receive 🎬 Best Picture Awards 🎬.
This Week’s Fiddler
From Steven Stern comes a puzzle about speedy runners that you can mull over at a more leisurely pace:
There are 25 sprinters at a meet, and there is a well-defined order from fastest to slowest. That is, the fastest sprinter will always beat the second-fastest, who will in turn always beat the third-fastest, and so on. However, this order is not known to you in advance.
To reveal this order, you can choose any 10 sprinters at a time to run in a heat. For each heat, you learn only the ordinal results: which runner finishes first, second, third, and so on, up to 10th. You do not know the specific finishing time for any runner, making it somewhat difficult to compare performances across heats.
Your goal is to determine with absolute certainty which of the 25 sprinters is fastest, which is second-fastest, and which is third-fastest. What is the minimum number of heats needed to guarantee you can identify these three sprinters?
Extra Credit
At a different meet, suppose there are six sprinters that can race head-to-head. (In other words, there are only two sprinters per heat.) Again, they finish races in a consistent order that is not known to you in advance.
This time, your goal is to determine the entire order, from the fastest to the slowest and everywhere in between. What is the minimum number of head-to-head races needed to guarantee you can identify this ordering?
Making the Rounds
There’s so much more puzzling goodness out there, I’d be remiss if I didn’t share some of it here. This week, I’m sharing a 1958 Putnam exam that somehow went viral. I’m pretty sure there’s at least one problem I’d label Physics and another I’d label Finance. But there’s definitely one I’d label a Fiddler, which I’ve copied below:
Real numbers are chosen at random from the interval 0 ≤ x ≤ 1. If after choosing the nth number the sum of the numbers so chosen first exceeds 1, show that the expected or average value for n is e.
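If you’d like to watch that surprising e emerge for yourself before attempting a proof, here’s a quick simulation sketch (my own Python illustration, not part of the Putnam):

```python
import math
import random

def count_to_exceed_one(rng: random.Random) -> int:
    """Draw uniform numbers from [0, 1] until the running sum
    exceeds 1; return how many draws that took."""
    total, n = 0.0, 0
    while total <= 1.0:
        total += rng.random()
        n += 1
    return n

rng = random.Random(1958)  # seeded for reproducibility
trials = 200_000
avg = sum(count_to_exceed_one(rng) for _ in range(trials)) / trials
print(avg)  # the average should hover near e ≈ 2.71828
```

With a couple hundred thousand trials, the sample mean reliably lands within a hundredth of e.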
If this were truly a Fiddler, you would have been asked to determine the expected value for n. But for the Putnam, you were given the computational result and “merely” asked to prove it.
Want to Submit a Puzzle Idea?
Then do it! Your puzzle could be the highlight of everyone’s weekend. If you have a puzzle idea, shoot me an email. I love it when ideas also come with solutions, but that’s not a requirement.
Last Week’s Fiddler
Congratulations to the (randomly selected) winner from last week: 🎻 Laurent Lessard 🎻 from Toronto, Canada. I received 40 timely submissions, of which 29 were correct—good for a 72.5 percent solve rate.
Last week, you had to tackle a continuous version of Showcase Showdown from the game show, “The Price Is Right.”
There were two players: A and B. Player A was the first to spin a giant wheel, which spat out a real number chosen randomly and uniformly between 0 and 1. All spins were independent of each other. After spinning, A could either stick with the number they just got or spin the wheel one more time. If they spun again, their assigned number was the sum of the two spins, as long as that sum was less than or equal to 1. If the sum exceeded 1, A was immediately declared the loser.
After A was done spinning (whether once or twice), B stepped up to the wheel. Like A, they could choose to spin once or twice. If they spun twice and the sum exceeded 1, they were similarly declared the loser.
After both players were done, whoever had the greater value (that did not exceed 1) was declared the winner.
Assuming both players played the game optimally, what were Player A’s chances of winning?
This was a particularly great puzzle. And as is often the case with great puzzles, it has been posed previously. The idea of a continuous Showcase Showdown was raised at the Teacher Professional Continuum by reader Bowen Kerins back in 2009. Meanwhile, reader Dan Swenson worked through the problem in great detail in a 2015 article, which I’ll circle back to in the Extra Credit.
But enough about the history of this puzzle—let’s get to solving it!
Most solvers started with Player B, whose strategy was relatively straightforward. If A had already “busted” by exceeding 1, then B automatically won. Otherwise, B had to exceed A’s value at all costs—if B’s first spin wasn’t high enough, then B had to spin again, even if it meant risking a bust.
Suppose A’s score was a, meaning that was the value B had to exceed. What were B’s chances of winning at this point? Well, B could get lucky and exceed a on a single spin (with probability 1−a). But if B was unlucky on the first spin—something that happened with probability a—then B needed to spin again and hope that the sum of the two spins was between a and 1. Player B’s victory could be represented geometrically by the blue region in the unit square below. The area of this region, which was precisely B’s chances of winning, was the area of the rectangle on the right plus the area of the parallelogram on the left, or 1−a+a(1−a), which simplified to 1−a². (Note that the rectangular region on the right didn’t actually involve a second spin, but its area still represented a victory for B.)
So B’s strategy was entirely predicated on Player A’s results. If A had a score of a, then B’s probability of winning was 1−a², which meant A’s probability of winning was a².
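That 1−a² formula is easy to sanity-check with a quick Monte Carlo sketch (my own Python illustration, using a = 0.6 as an arbitrary test value):

```python
import random

def b_wins(a: float, rng: random.Random) -> bool:
    """B's forced strategy against A's score a: spin once, and spin
    again only if the first spin doesn't already beat a."""
    first = rng.random()
    if first > a:
        return True               # one spin suffices
    total = first + rng.random()  # must spin again...
    return a < total <= 1.0       # ...and hope to land in (a, 1]

rng = random.Random(42)
a = 0.6
trials = 100_000
p = sum(b_wins(a, rng) for _ in range(trials)) / trials
print(p, 1 - a**2)  # simulated vs. exact 1 − a² = 0.64
```

The simulated frequency matches 1−a² to within ordinary sampling noise.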
Meanwhile, Player A had an actual decision to make. There had to be some critical value, k, below which A would opt to spin again, and above which A would “hold.” The challenge of this puzzle was essentially to determine which value of k maximized A’s chances of winning.
A direct approach was to compute A’s chances of winning for any given value of k. To do this, you first needed to figure out the probability distribution for A’s score, which we’ve been calling a, given a particular value of k. Let’s call this probability pk(a).
When A’s first spin was greater than k, A held. This corresponded to a uniform probability distribution between k and 1. And when A’s first spin was less than k, A spun again. This time, you needed the sum of two random variables (i.e., the convolution of their probability distributions), the first of which was uniformly distributed between 0 and k, and the second of which was uniformly distributed between 0 and 1. The output of this convolution was a trapezoidal distribution whose flat peak spanned the range k < a < 1.
Stacking these two distributions (a > k and a < k) atop each other gave you the resulting distribution for pk(a):
It kind of looks like Devils Tower, right? Anyway, you could check your work at this point by confirming the area under the distribution was 1, regardless of k.
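If you don’t trust your eyes, you can do that area check numerically. Here’s a sketch (my own Python) of the piecewise density described above, including the “bust” region 1 < a ≤ 1+k, integrated with a simple midpoint rule:

```python
def density(a: float, k: float) -> float:
    """Piecewise density of A's final score with re-spin threshold k.
    Scores between 1 and 1+k correspond to A busting."""
    if 0.0 <= a < k:
        return a             # rising edge: small two-spin sums
    if k <= a <= 1.0:
        return 1.0 + k       # plateau: held first spins plus two-spin sums
    if 1.0 < a <= 1.0 + k:
        return 1.0 + k - a   # falling edge: two-spin sums that bust
    return 0.0

def total_area(k: float, steps: int = 200_000) -> float:
    """Midpoint-rule integral of the density over its full support."""
    width = (1.0 + k) / steps
    return sum(density((i + 0.5) * width, k) for i in range(steps)) * width

for k in (0.2, 0.5321, 0.8):
    print(k, total_area(k))  # each area should be very close to 1
```

Sure enough, the total probability comes out to 1 no matter which threshold k you pick.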
To determine A’s chances of winning for a given value of k, you had to integrate the product of pk(a), which was the probability of A having a score of a, and a2, which was A’s probability of winning given that score of a. Of course, this only applied to values of a less than 1, i.e., when A didn’t bust.
Splitting the probability distribution into two pieces (a triangle and a rectangle), the integral expression became:
The first integral was k⁴/4, while the second was (1+k)(1−k³)/3. Adding these together gave you 1/3 + k/3 − k³/3 − k⁴/12. Here’s what a graph of this function looked like, courtesy of Michael Schubmehl (who used the variable a to represent the stopping threshold, rather than k):
This probability was maximized when the first derivative with respect to k was zero, which looked to be somewhere between 0.5 and 0.6 according to the graph. That derivative was 1/3 − k² − k³/3, and setting it equal to zero gave you the cubic equation k³ + 3k² − 1 = 0. You could solve this equation for k, and then plug that back into the expression 1/3 + k/3 − k³/3 − k⁴/12 to figure out A’s chances of winning.
Numerically, the unique solution of k³ + 3k² − 1 = 0 with 0 < k < 1 was approximately 0.5321. This was quite close to the cutoff from the discrete version of this puzzle, which had turned out to be 0.55.
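If you’d rather not reach for a cubic formula, a few lines of bisection (my own Python sketch) pin down that root:

```python
def f(k: float) -> float:
    """Optimality condition for A's stopping threshold."""
    return k**3 + 3*k**2 - 1

lo, hi = 0.0, 1.0  # f(0) = -1 and f(1) = 3, so a root lies in between
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
k = (lo + hi) / 2
print(round(k, 4))  # → 0.5321
```

Sixty halvings of the unit interval leave the root accurate to machine precision.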
But a few solvers, including Vince Vatter, Q P Liu, Eric Widdison, and Michael Schubmehl, went the extra mile and solved the cubic equation analytically. Vince did it with some clever trigonometry:
First, he eliminated the quadratic term by defining a new variable y = k+1. Plugging in y−1 for k gave the equation y³ − 3y + 1 = 0. Then, he defined another new variable 𝜃 such that y = 2cos𝜃. That made the cubic equation 8cos³𝜃 − 6cos𝜃 + 1 = 0. Why was this helpful? Because of a particular triple-angle trig identity: 4cos³𝜃 − 3cos𝜃 = cos(3𝜃). That simplified the equation to cos(3𝜃) = -1/2, which meant that 3𝜃 was equal to 2𝜋/3 radians, and that 𝜃 was equal to 2𝜋/9 radians. This in turn meant the exact value of k was 2cos(2𝜋/9)−1, which, consistent with what we previously found numerically, was approximately 0.5321.
Plugging this value for k back into the expression for A’s chances of winning, which we had said was 1/3 + k/3 − k³/3 − k⁴/12, revealed an exact answer of 3/4 − cos(2𝝅/9)/2 + sin(𝝅/18)/2, or about 0.4538.
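You can confirm that the trigonometric closed form agrees with the polynomial expression with a quick check (my own Python, not from any solver’s write-up):

```python
import math

k = 2*math.cos(2*math.pi/9) - 1          # exact threshold from the trig solution
poly = 1/3 + k/3 - k**3/3 - k**4/12      # A's win probability at that threshold
exact = 3/4 - math.cos(2*math.pi/9)/2 + math.sin(math.pi/18)/2
print(k, poly, exact)  # k ≈ 0.5321; both probabilities ≈ 0.4538
```

The two expressions agree to the last floating-point digit.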
By going second, B had the advantage of seeing A’s result and only spinning the wheel twice when it was absolutely necessary. Meanwhile, A’s chances were less than 50 percent, and the best A could hope to do was win about 45.38 percent of the time.
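And if you’d rather simulate the whole game than integrate anything, here’s a Monte Carlo sketch (my own Python, with both players following the strategies described above):

```python
import random

K = 0.5321  # A's (approximately) optimal stopping threshold, 2cos(2π/9) − 1

def a_wins_one_game(rng: random.Random) -> bool:
    # Player A: spin again whenever the first spin falls below K.
    a = rng.random()
    if a < K:
        a += rng.random()
        if a > 1.0:
            return False          # A busts and loses outright
    # Player B: spin again only when the first spin doesn't beat a.
    b = rng.random()
    if b <= a:
        b += rng.random()
    return not (a < b <= 1.0)     # A wins unless B lands in (a, 1]

rng = random.Random(7)
trials = 500_000
p = sum(a_wins_one_game(rng) for _ in range(trials)) / trials
print(p)  # should land near 0.4538
```

Half a million simulated games put A’s winning frequency comfortably within a fraction of a percentage point of the exact 45.38 percent.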
Last Week’s Extra Credit
Congratulations to the (randomly selected) winner from last week: 🎻 Sanandan Swaminathan 🎻 from San Jose, California. I received 17 timely submissions, of which 10 were correct—good for a 59 percent solve rate.
Instead of two players for the continuous Showcase Showdown, now you had three: A, B, and C. Players spun the same giant wheel, once or twice. As before, spins generated numbers that were randomly, uniformly, and independently chosen between 0 and 1.
If a player spun once, they were assigned the resulting number. If they spun twice, they were assigned the sum of the spins. But if that sum exceeded 1, that player was immediately declared a loser.
Player A went first, then Player B, and finally Player C. After all three players were done, whoever had the greatest value (that did not exceed 1) was declared the winner.
Assuming all three players played the game optimally, what were Player A’s chances of winning?