Mathematics of Doubling in Match-Play

In this post I wish to discuss the mathematics of doubling in some detail.

Let us assume Hero and Villain are playing a “match-to-three”. That is, first player to win three VP is the overall match winner. Each individual game is worth 1 VP. The actual game being played is irrelevant: it could be Chess, Agricola, Snakes and Ladders, the royal game (with Hero moving the cards and Villain trash-talking at every sub-optimal play), or a well-known poker variant where both players start with three items of clothing. To make things interesting, assume the winning probability of an individual game is slightly less than 50%. To be specific, I will set the win rate to 40%. What are the chances of Hero winning the overall match?

This kind of question is most easily solved with dynamic programming. The boundary condition says that if someone already has three wins then the overall winning chances are either 0% or 100%. Next, we can compute the winning chances when the score is 2-2. We can then work our way backwards, eventually arriving at 31.7% winning chances for Hero at 0-0.
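For the programmatically inclined, the dynamic programming takes only a few lines. Here is a sketch in Python (function and variable names are my own):

```python
from functools import lru_cache

TARGET = 3    # first to 3 VP wins the match
P_GAME = 0.4  # Hero's chance of winning a single game

@lru_cache(maxsize=None)
def win_prob(hero, villain):
    """Hero's chance of winning the match from the score hero-villain."""
    if hero >= TARGET:
        return 1.0
    if villain >= TARGET:
        return 0.0
    # Hero wins the next game with probability P_GAME, else Villain scores
    return (P_GAME * win_prob(hero + 1, villain)
            + (1 - P_GAME) * win_prob(hero, villain + 1))

print(round(win_prob(2, 2), 4))  # 0.4 -- the next game decides
print(round(win_prob(0, 0), 4))  # 0.3174
```

Memoisation (`lru_cache`) is what makes this "dynamic programming" rather than brute force: each match score is evaluated only once.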

You will notice that if the match score is X-X then Villain prefers smaller values of X. Intuitively, a smaller value of X means it is less likely that Hero will find enough “random noise” to overcome Villain’s advantage in the long run. The following diagram summarises the probability of Hero winning the match at every possible match score:

I should mention that Bart has already done his homework, and he knows how to compute the probabilities corresponding to each match score. If the winning chances of an individual game are 0.25 then he correctly computes the following:

  • 0.049 chance of winning 5-point match, no doubling
  • 0.104 chance of winning 3-point match (equivalent to 5-point match with 2VP per game)

I will leave this as an exercise for the reader.
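For readers who would rather check Bart’s numbers than derive them, here is one way in Python, summing over the number of games the loser wins before the match ends (the helper name is mine):

```python
from math import comb

def match_win_prob(p, n):
    """Probability of winning a first-to-n match when each game is won
    independently with probability p."""
    # The match ends on game n+k (for k = 0..n-1): the winner takes the last
    # game plus n-1 of the preceding n-1+k games.
    return sum(comb(n - 1 + k, k) * p**n * (1 - p)**k for k in range(n))

print(round(match_win_prob(0.25, 5), 3))  # 0.049
print(round(match_win_prob(0.25, 3), 3))  # 0.104
```

The same dynamic programming approach from earlier would give identical answers; the closed-form sum is just more compact.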

Now let us give Hero the following handicap: Before each game, Hero can demand the game be played for 1 VP or 2 VP. Moreover, there is no bonus for winning the match with 4 VP instead of 3 VP.

This means, for instance, if Villain has 2 points then Hero will always play for 2 VP. Again, we can use dynamic programming to compute the winning chances. If you use Excel to perform the dynamic programming then you will need the function max(FOO,BAR) somewhere in your calculations. You will notice several things:

  • The boundary conditions now include either player having 3 or 4 VP.
  • The table only shows winning percentages, but not whether Hero should play for 1 VP or 2 VP.
  • The numbers look weird: the winning chances on the main diagonal are no longer strictly increasing, and the winning chances at 1-2 are the same as at 2-2.

The latter is easily explained. Since Hero can choose to play for 2 VP, Villain gets no advantage from being 1-2 instead of 2-2. Slightly more interesting is a match-score of 1-1. If Hero plays for 2 VP, then the next game decides the match. If Hero plays for 1 VP then the worst case scenario is the match score becomes 1-2, in which case Hero can play for 2 VP and the next game decides the match. Therefore, it must be correct to play for 1 VP. It turns out Hero is close to breaking even thanks to his handicap.
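The handicap calculation is a small extension of the earlier dynamic programming. A Python sketch (my own names; the `max` over the two stakes is the max(FOO,BAR) mentioned above):

```python
from functools import lru_cache

TARGET = 3    # first to 3+ VP wins; no bonus for overshooting
P_GAME = 0.4  # Hero's chance of winning a single game

@lru_cache(maxsize=None)
def win_prob(hero, villain):
    """Hero's match-winning chances when Hero chooses the stake (1 or 2 VP)
    before each game."""
    if hero >= TARGET:
        return 1.0
    if villain >= TARGET:
        return 0.0
    # Hero picks whichever stake maximises his winning chances
    return max(
        P_GAME * win_prob(min(hero + s, TARGET), villain)
        + (1 - P_GAME) * win_prob(hero, min(villain + s, TARGET))
        for s in (1, 2)
    )

print(round(win_prob(1, 2), 3))  # 0.4 -- same as at 2-2
print(round(win_prob(1, 1), 3))  # 0.496 -- close to breaking even
print(round(win_prob(0, 0), 3))  # 0.433
```

Note the main diagonal (0.433, 0.496, 0.4) is indeed not strictly increasing, matching the “weird” observation above.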

It is not hard to get an Excel spreadsheet to crunch the numbers for different parameter values. We can, for instance, figure out what happens in a match to 13 points with Hero’s winning chances down to 30% for an individual game.

When we talk about longer matches, it is generally more convenient to think in terms of number of VP remaining instead of number of VP already scored. For instance, 19-18 in a 21-point match is equivalent to 5-4 in a 7-point match and it makes sense to describe both of them as “2-away 3-away”.

Redoubles

Now what happens if either side has the right to double the stakes, but there are no redoubles? This is a trivial case: it never makes sense to refrain from doubling. If one side declines to double, he cannot prevent the opponent from doubling anyway. Therefore, it must be correct for either side to double.

But what if redoubles were allowed? This means e.g. if Hero proposes to play for 2 VP then Villain can propose to play for 4 VP before the game starts. Then things may get weird. I will leave the analysis as an exercise for the reader.

Doubling During the Middle of a Game

Hitherto, we have assumed that doubling could only occur before the start of every game. In this case, each individual game can be thought of as a black box – the only relevant parameter is the winning chances of a single game. Moreover, it never makes sense to pass a double since the loser gets the same “pre-starting position” and the winner gets a free point.

But if doubling were allowed during a game then the “structure” of the game tree becomes important. For instance, two different games could have the same winning chances of 25% but a different structure. If Hero doubles judiciously then he can leverage the structure to improve his overall chances of winning the match (of course he can’t leverage the structure to improve his chances of winning a particular game – if Villain were hell-bent on winning the next game at all costs, then he would never pass a double).

To illustrate the concept of structure, assume we are interested in maximising the expected number of VP for a single game instead of a “match-to-N-points”. Consider the following “random-walk-game”: A happy star randomly moves to one of the twelve coloured leaf nodes, with each node occurring with probability 1/12. If the colour is Green (Red) then Hero wins (loses) one VP. If no doubling cube is used then basic math says the expected gain per game is 1/3 = 0.333 VP for both structures depicted in the left and right halves of the diagram below.

Now suppose that Hero has the right to double before the game and Villain can double when the happy star reaches one of the three “intermediate” nodes. On the left diagram, assume that Hero doubles (since there are more greens than reds). Villain should accept and then Hero can expect to win 0.667 VP. But on the right diagram Hero is only winning 0.5 VP if Villain uses correct doubling strategy. This is left as an exercise for the reader.

Therefore, structure is important: without the cube Hero wins the same expected VP in both games, but with the cube Hero prefers the first game. Another lesson is that ownership of the cube (i.e. exclusive right to make the next double) is worth some equity. As a general rule, if all other things are equal then whoever owns the cube prefers game states that require many moves before one side has a decisive advantage.

Hopefully this example should make it clear why a simple mathematical analysis breaks down when we consider real games with doubling decisions occurring during the game. Similar considerations obviously apply when computing optimal match strategy rather than expected VP in a single game.

Summary

In this post I show that optimal use of the doubling cube is a lot more complex than “always double (take) if our winning chances are at least 80% (20%)”. There are three caveats:

  • The elephant in the room is that nobody knows how to reliably estimate the winning chances of a specific game state. Not even Spider GM can do this, unless there are very few cards unseen.
  • The parameters 80% and 20% are only optimal if we assume the winning chances change continuously rather than in sudden jumps (think Brownian motion instead of quantum leaps!).
  • The parameters 80% and 20% also assume we are playing for money (e.g. $1 = 1 VP, aim to maximise expected winnings). They don’t work in a match-to-N-points, especially when we reach the pointy end with both sides close to victory.

If we can solve the elephant in the room, then this doubling strategy should be a good starting point – coupled with a few “common-sense tweaks” near the end of the match. For instance, you never double when you are 1 point away from winning the match etc. But one could argue it is precisely the elephant in the room that makes Spider Solitaire such a great game 😊

Fun Fact

If Hero has the exclusive right to make the next double then it is possible to construct a pathological game tree where one can change some green nodes to red while also changing Hero’s correct doubling decision from No-Double to Double. A well-known Backgammon example is the Jacoby Paradox.

Will 2022 Be Year of the Ninja Monkey?

“Twenty Twenty Two Gonna Be Great Year!” shrieks Ninja Monkey.

“How so?” asks the Wise Snail.

“Twenty Twenty Two – My Year! Year of Monkey!”

“You don’t have to jump up and down all the time,” gripes the Sand Griper. “It’s annoying.”

“Besides,” roars the Tiger, “you have no evidence to back up your claim. According to the Chinese Zodiac, it’s MY year. The formula for Tiger years is 12x + 6 for any whole number x. Substitute x=168 and you get 2022. Quod Erat Demonstrandum.”

“Tiger is right,” says the Elephant. “I may not be the sharpest tool in the box when it comes to Phil Hellmuth’s menagerie of poker animal types, but I can remember every sign of the Zodiac and which year corresponds to which animal.”

“Not so fast,” says the Ox. “Chinese New Year starts in February, not January, so I still get to enjoy approximately 30 more days of fame.”

“To add insult to injury,” adds the rot13(fzneg nff), “the formula for Monkey years is 12x for any whole number x. That means you are the maximum possible distance in either direction from one of your good years – therefore 2022 is the worst possible year for the monkey.”

“Monkey don’t care, Monkey don’t care! Monkey invent his own Zodiac!”

“But you can’t invent something out of the blue just because a few unpleasant facts got in the way of a really good story” says the Eagle.

Ninja Monkey presents his own version of events: there are exactly 337 animals in today’s meeting. Each animal represents a different species – if we conveniently ignore the fact Ninja Monkey brought his GF along. Ninja Monkey assigned himself the year 0, his GF the year 1 and the rest of the animals different years in no particular order. How convenient it was that 337 happened to be a prime divisor of 2022. Quod Erat Demonstrandum.

“The monkey raises a valid point,” says the Wise Snail. “If the Zodiac caters for only 12 animals, then the vast majority of us miss out altogether. Monkey’s suggestion is much fairer even if it is not based on accepted tradition”.

With nobody able to rebut the Wise Snail’s statement, everybody stews in awkward silence for a few minutes. Finally Bad Idea Bear #1 comes up with a resolution: The Eagle would deal ten random hands and Ninja Monkey would have to win at least one game without rot13(haqb) using his Improved Random Move Algorithm. If the Monkey could achieve this then the new Zodiac is in. Otherwise, the Monkey would have to reluctantly accept the Tiger’s version of events and wait another six years.

Nobody else has anything better to offer, and for once a suggestion from a Bad Idea Bear gets unanimous agreement. At least there is no BIB #2 around to come up with something even worse. So, without further ado Let The Games Begin!

Game 1

Game 2

Game 3

“I thought things had started well,” sighs the Wise Snail, who is Ninja Monkey’s best friend.

Game 4

“Some promising signs there,” sneers the Tiger. “Too bad in the end!”

<<Several hands later>>

“Okay fine”, says the Monkey as he slams the cards onto the ground after conceding the last hand. “Year of the Tiger it is!”

THE END

One for the Math Geeks

After the world’s most intense game of Spider SOLITAIRE played by Spider GM, two International Masters and three coloured blobs from Among Us, followed by some rather detailed post-mortem analysis, I think now is a good time for some light relief with some rot13(zngurzngvpny znfgheongvba). More specifically, we answer the following question: if cards are dealt one at a time from a perfectly shuffled deck, how long do we have to wait until a full suit appears?

The relevance to Spider Solitaire players should be pretty clear: we have absolutely no control over which cards turn up. If, for example, there are no Queens in the first 20 cards then the chances of the next turnover being a Queen are exactly 8 in 84 – no amount of skilled or unskilled play can fight the basic laws of probability. All you can do is hope to mitigate the effects of not getting any Queens in the first 20 cards with skilful play. Similar considerations apply if you’re in the middle-game desperately waiting on the Three of Diamonds to complete a full suit.

If you insist on shifting the odds, we can commit the cardinal sin of Spider Solitaire by using rot13(haqb). Alternatively, we can somehow fail to obtain any turnovers in the tableau for the remainder of the game – in either case, the mere thought is too horrible to contemplate.

There is one minor complication: when dealing from the stock 10 cards appear simultaneously (which I mentioned more than once during the post-mortem of the Among-Us game). Here we will assume they appear sequentially. In the example below, a row of 10 cards has just been dealt. There are 61 cards visible, and by the time the Three of Diamonds is dealt we see every card in Diamonds. In other words, we needed 60 cards to obtain a full suit.

Clearly, the minimum number of cards required is 13 and the maximum is 97 (if, for example, all eight Kings are among the last eight cards, then the first King – and hence the first full suit – appears on card 97). During the Among Us game I thought we were fairly lucky to see a full suit after 60 cards. From experience I would expect we need more cards visible before the chances of seeing a full suit are 50%. However, I never bothered to quantify this. Fortunately, it is relatively straightforward to run a computer simulation. We might simulate X hands and find e.g. we need on average Y cards instead of 60 before there is an even chance of finding a complete suit. If Y cards corresponds to the Zth percentile then we would know where we stand in terms of how lucky we were.

In this case, I ran 5000 iterations and found 60 or fewer cards were needed to achieve a complete suit 1176 times. This corresponds to the 22nd or 23rd percentile. The mean and median are about 66.5 and 67.0 respectively.
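If you want to reproduce the experiment, a simulation along these lines should do the trick (the deck encoding and random seed are my own choices, so your exact counts will differ slightly):

```python
import random

def cards_until_full_suit(rng):
    """Deal a shuffled two-deck (104-card) pack one card at a time; return how
    many cards have been dealt when some suit first shows all 13 ranks."""
    deck = [(rank, suit) for rank in range(13) for suit in range(4)] * 2
    rng.shuffle(deck)
    seen = [set() for _ in range(4)]  # ranks seen so far, per suit
    for i, (rank, suit) in enumerate(deck, start=1):
        seen[suit].add(rank)
        if len(seen[suit]) == 13:
            return i
    # unreachable: the full pack always contains every suit

rng = random.Random(2022)
results = [cards_until_full_suit(rng) for _ in range(5000)]
print(round(sum(results) / len(results), 1))         # sample mean
print(sum(r <= 60 for r in results) / len(results))  # fraction needing 60 or fewer
```

Every run is bounded between 13 and 97 cards, for the reasons given above.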

As I suspected, we did get lucky with the Diamond suit, but nowhere near enough to justify improvising a rap song with the phrase “statistically significant” appearing once every ten seconds. To put this in Dungeons & Dragons terms, if this game of Spider Solitaire were a character, then we would have rolled better-than-average initial stats for complete-suits, but nobody in their right mind would accuse the dice of being rigged. Presumably we would have lousy initial stats for other abilities such as turnovers or shortages of particular ranks etc (I did have some misgivings about our winning chances during this hand therefore something had to be lousy), but unfortunately there is only so far one can go with the Dungeons & Dragons analogy.

Without the ability to remove Diamonds, I expect we would have been in more trouble than Ian Nepomniachtchi’s Bishop getting harassed by both Black rooks after capturing a poisoned pawn in Game 9 of the World Chess Championship since no other suit is close to completion. I’m not sure what’s the best way to test this conjecture via simulation, so if you have any clever ideas, please leave a comment 😊

Intermezzo

In Microsoft Windows’ Spider Solitaire I found an unusual feature/bug: if a player completes a suit he receives a 100-point bonus but is not penalised 1 point for the move. For instance, suppose we have K-Q-J-T-9 of Hearts in one column and 8-7-6-5-4-3-2-A of Hearts in another and our score is 458. If we clear Hearts our score becomes 558, not 557. Another example: suppose we have no completed suits, a score of 400 and our target is to win with 1000+ points. The maximum number of moves we have is 208, not 200.
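A quick sanity check of the arithmetic (assuming my reading of the scoring rule is right):

```python
def score_after_clear(score):
    """Completing a suit: +100 bonus and, per the feature/bug, no -1 move penalty."""
    return score + 100

print(score_after_clear(458))  # 558, not 557

# Maximum moves when starting at 400 and aiming for 1000+ with all 8 suit bonuses:
start, target, suits = 400, 1000, 8
penalised_moves = start + suits * 100 - target  # ordinary moves cost 1 point each
total_moves = penalised_moves + suits           # plus 8 "free" suit-clearing moves
print(total_moves)  # 208
```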

Note that clearing a suit is automatic, so we don’t have the option of completing the suit but refusing to move it do the foundations. In rare circumstances this option may be desirable, but giving an example is left as the proverbial exercise for the reader.

Here’s another fun question: what is the theoretical minimum number of moves required to beat Spider Solitaire if the cards fall perfectly?

To avoid accidentally revealing spoilers, I will insert the lyrics for one of my favourite anti-smoking songs.

Once a stupid smoker camped by a billabong

Under the influence of L.S.D.

And he sang and he smoked while his mates were drinking alcohol

You’ll come a smoking will kill ya with me

Smoking will kill ya smoking will kill ya

You’ll come a smoking will kill ya with me

And he sang and he smoked while his mates were drinking alcohol

You’ll come a smoking will kill ya with me

Up rode the smoker mounted on his thoroughbred

Down came the troopers one two three

What’s that illegal drug you’ve got in your tucker bag?

You’ll come a smoking will kill ya with me

Smoking will kill ya smoking will kill ya

You’ll come a smoking will kill ya with me

What’s that illegal drug you’ve got in your tucker bag?

You’ll come a smoking will kill ya with me

Rot13(shpx) said the smoker he committed suicide

You’ll never catch me alive said he

And his ghost may be heard as you pass by the riverside

You’ll come a smoking will kill ya with me

Smoking will kill ya smoking will kill ya

You’ll come a smoking will kill ya with me

And his ghost may be heard as you pass by the riverside

You’ll come a smoking will kill ya with me

The simplest viewpoint is to consider in-suit builds because the game is finished if and only if we have exactly 96 in-suit builds.

Each move gains at most one in-suit build (e.g. 6 of Hearts onto the 7 of Hearts). Dealing a row of 10 cards can gain at most 10 in-suit builds. We start with zero in-suit builds and can deal from the stock five times. To obtain 96 in-suit builds we require a minimum of 5 deals (worth 50 in-suit builds) plus 46 moves to pull off the Holy Grail of Four-Suit Spider Solitaire. That’s 51 moves in total. Well done if you answered 51 moves.

Assuming we start with 500 points, our final score is 500-51+800 = 1249 points … plus an 8-point rebate for the feature/bug described above. That’s a grand total of 1257.
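The move-counting argument above can be checked in a few lines of Python:

```python
IN_SUIT_BUILDS_NEEDED = 8 * 12  # 96: twelve in-suit builds per suit, eight suits
DEALS = 5                       # each deal of ten cards gains at most ten in-suit builds

builds_from_deals = DEALS * 10                            # 50
single_moves = IN_SUIT_BUILDS_NEEDED - builds_from_deals  # 46, one build per move
total_moves = DEALS + single_moves                        # 51

final_score = 500 - total_moves + 8 * 100 + 8  # 8-point rebate from the feature/bug
print(total_moves, final_score)  # 51 1257
```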

And what happens to our score if we deal a row of ten cards and automatically remove two or more suits? Even I don’t know the answer to that one 😊

Exercise for the interested reader: get two physical decks of playing cards and deal a hand of Spider Solitaire. Ignore the identity of face-up cards (so if you see an Eight of Hearts, you can pretend it is the Four of Spades or any other card of your choosing), so you are effectively distinguishing only between face-up cards and face-down cards. Remember that you cannot deal a row of ten cards if you have at least one empty column. Play out the hand and verify that victory can indeed be attained in exactly 51 moves.

Tower of Hanoi

The Tower of Hanoi is a simple mathematical problem or puzzle. You are given three rods and a number of discs of different sizes. The puzzle starts with all discs on a single rod. Your aim is to move all of them to a different rod according to various rules:

  • Only one disc can be moved at a time
  • No disc can sit atop a smaller disc.

It is not hard to show that with N discs, we can achieve the goal in 2^N − 1 moves. The simplest proof is to observe that with N discs we need to perform the following three steps: (i) shift the top N−1 discs to an empty rod (ii) shift the bottom disc to the other empty rod, (iii) shift the top N−1 discs onto the bottom disc. By mathematical induction one easily establishes the formula 2^N − 1. Note that we are essentially reducing the problem with N discs to a problem with N−1 discs.
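The three-step recursion translates directly into code. A minimal Python sketch (rod labels are arbitrary):

```python
def hanoi(n, source, spare, target, moves):
    """Move n discs from source to target, recording each move."""
    if n == 0:
        return
    hanoi(n - 1, source, target, spare, moves)  # (i) top n-1 discs onto the spare rod
    moves.append((source, target))              # (ii) bottom disc to the target rod
    hanoi(n - 1, spare, source, target, moves)  # (iii) top n-1 discs onto the bottom disc

for n in range(1, 8):
    moves = []
    hanoi(n, 'A', 'B', 'C', moves)
    assert len(moves) == 2**n - 1
print("2^N - 1 moves confirmed for N = 1..7")
```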

With similar reasoning one can show that any random position of discs can be obtained (as long as no disc covers a smaller disc). The proof is left as an exercise for the reader.

The Tower of Hanoi is an example of shifting a large pile of items with limited resources. If you are not familiar with this puzzle, you will probably be surprised by the fact that only three rods are required no matter how many discs you start with. Avid readers of this blog may have come across terms like “Tower-Of-Hanoi manoeuvres” from previous posts, so if you were unsure what the fuss was all about, then now you know 😊.

In Spider Solitaire we are often confronted with the problem of shifting large sequences of cards with limited resources. A simple example is shown below: A complete suit of Spades is visible but can we actually clear the suit with only one empty column?

The answer is yes. We can shift the Eight of Diamonds onto the Nine of Diamonds in column six, build the J-T-9 of Spades onto the K-Q in column two, move the 8-7-6-5 of Spades from column five onto the 9 of Spades, swap the 4H and 4S on top of both the Spade Fives and finally add the Ace of Spades from column three to complete the suit.

Going back to the Hanoi puzzle, with a small number of rods a monkey could probably luck his way into a solution by making random moves, but once you get a decent size pile of discs the random move strategy doesn’t work so well! Also, with random moves it is difficult to prove that e.g. 30 moves or less is impossible given five discs. Similar considerations apply to Spider Solitaire. Since the above example is relatively simple, a monkey could probably complete a suit of Spades by repeated trial and error, assuming he only makes moves that are “reversible”. But with a more complex problem, the monkey won’t do so well.

If you want more practice with “Tower-of-Hanoi manoeuvres” I recommend the following exercise: set up the diagram above, ignoring any face-down cards or cards not in sequence (for instance in column two you keep only the K-Q of Spades). Then try to minimise the number of in-suit builds using only reversible moves (you should be able to get pretty close to zero). From this new position, pretend you’ve just realised your mistake and try to clear the Spades using only reversible moves. This exercise should give you an idea of why empty columns are so valuable.

Note that all this carries the assumption of no 1-point penalty per move (commonly used in many implementations of Spider Solitaire). If there was such a penalty then we would have to think twice about performing an extra 50 moves just for the sake of one more in-suit build. But for now we’ll keep things simple.

A closer look at Choose Your Difficulty

In Microsoft Spider Solitaire a player can choose 1/2/4 suits and a difficulty level. A player can gain Experience Points by winning games of Spider Solitaire, and after gaining enough XP he can level up. After sufficient levelling up, the player might even win some percentage of Microsoft shares or dot com stock options … uhhh just kidding 😉

The XP gained is shown in the table below.

Experience Points Table

The first thing to notice is not all combinations of “suit count” and difficulty are legit. For instance there is no Grandmaster hand at the 1-suit level and the minimum difficulty for 4-suit level is Expert. A random deal presumably means the deck is properly shuffled (in math terms all 104! hands occur with equal probability if we ignore the equivalence of cards of the same rank and suit), and the player is explicitly warned that such deals may be unsolvable. Any deal other than random is guaranteed to be solvable with sufficient luck or the use of boop.

Obviously it is difficult to measure how hard a game really is. For instance, if we play 1-suit, should beating an Easy or Medium hand be worth the same as beating a Hard one? At least increasing the number of suits or the difficulty level results in increased XP, which is what we expect. So far so good.

However, I noted the XP gained for a random deal is equal to the XP gained for the lowest permissible difficulty level for the same number of suits, and this makes little sense.

For the sake of argument let us assume we have 400 hands at the four-suit level. 300 of these are solvable and are arranged in order of increasing difficulty from left to right. The remaining 100 are unsolvable and occupy the right-most 100 slots in random order. An Expert deal would be chosen at random from the left-most 100, but a Random deal would be chosen at random from all 400 hands. Clearly it should be easier to beat an Expert deal than a Random deal, and therefore the latter should be worth more XP than the former.

In practice, the overwhelming majority of games are winnable, even at the four-suit level (although I know that many folk will dispute that claim!) so the above example should really have e.g. 301 = 100+100+100+1 hands instead of 400. Essentially, Random is equivalent to “Random but guaranteed winnable”. Therefore the same reasoning says a Random hand should be worth less than a Grandmaster hand. Perhaps a Random deal should be worth the same as a Master deal, maybe a little more or a little less. But it certainly should be worth more than Expert. Of course, similar considerations apply to the 1-suit and 2-suit levels.

Perhaps some dude who is much, much smarter than I am can write a Ph. D. on the true worth of XP for a given difficulty level and number of suits. Another Doctor of Spider Solitaire anyone??? 😊

My friend is a Doctor of Spider Solitaire :)

It’s official – I have awarded my Scrabble friend a Doctor of Spider Solitaire. His first actual attempt was

  • Philosophy -> Peter Thiel -> Forbes 400 -> Bill Gates -> Microsoft Windows -> Windows 3.0 -> Microsoft Solitaire -> Spider (solitaire).

Unfortunately Peter Thiel does not link to Bill Gates in 1 step, and there were a few false leads with some Windows versions (e.g. XP) not having a Microsoft Solitaire link.

For those who prefer visuals – here is a screen dump showing multiple routes from philosophy to Spider (solitaire):

My friend says he likes paths that go through Creed Bratton/The Office (visible on the left if you look closely). I have nothing much to add here 🙂

From Spider Solitaire to Philosophy – and Back Again

And now for something completely different:

Let us try the following experiment. We start with the Wikipedia page on Spider Solitaire and then do the following:

  • click the first link of the “main text” (ignoring anything in parentheses).
  • Rinse
  • Repeat

From the screenshot below, step 1 says we should click on the word “patience”.


After a few iterations we reach a closed loop of the form Philosophy > Existence > Ontological > Philosophy.

The interesting phenomenon is that the starting point is almost always irrelevant: if you pick a random page then it is heavy odds-on that you reach the same closed loop involving “philosophy”. Not surprisingly, Wikipedia itself has a page on this phenomenon and it is estimated (as of February 2016) that 97% of all articles in Wikipedia lead to Philosophy. The remaining articles either lead to “sinks” (no outgoing wikilinks), non-existent pages, or closed loops other than Philosophy. This phenomenon was pointed out to me by someone from Adelaide University on the 4th of March.

Just for the record, here is the chain that starts with Spider Solitaire. I will not discuss this chain in detail – the reader is invited to draw his or her own conclusions:

  • Spider (solitaire)
  • Patience
  • Card games
  • Game
  • Play
  • Intrinsically motivated
  • Desire
  • Emotion
  • Biological
  • Natural science
  • Branch of science
  • Sciences
  • Knowledge
  • Facts
  • Reality
  • Imaginary
  • Object
  • Modern Philosophy
  • Philosophy  > existence > ontological > philosophy

Being a self-proclaimed Grand Master of Spider Solitaire, I am more interested in the reverse process. Starting from the Wikipedia page on Philosophy, is it possible for me to choose any outgoing links of my choice (not necessarily the first) and eventually land on the Spider Solitaire page? I don’t have a definitive answer. All I know is the random link algorithm proposed by my good friend Ninja Monkey doesn’t work so well!

If anybody can find a path from Philosophy to Spider Solitaire I will be happy to grant said person the title of Great Grand Master of Spider Solitaire. Challenge accepted anyone?

Who moved my Phone?

“Where is my damn phone?” I yell.

One of these days I’m gonna have to get rid of this bad habit. I’m pretty sure I left it under the tree like three minutes ago … right next to where Ninja Monkey is sitting … OH FOR 70,85,67,75,83 SAKE!!!!!!!!

“This is weird”, says Ninja Monkey.

“Ninja Monkey,” I say sternly. “We need to talk.”

Ninja Monkey shows me my phone. Somehow he has reached level 742 in Jewels Magic. Given his fascination with random move algorithms I’m pleasantly surprised to find he hasn’t made any in-app purchases yet.

“This game is rigged,” says Ninja Monkey.

I suddenly remember that Monkey and I published a paper about a certain Spider Solitaire game being rigged some time ago. Maybe the Ninja Monkey is onto something after all.

“Why is level 742 of Jewels Magic rigged?” I ask.

“I realised random move algorithms ain’t always what they’re cracked up to be,” says Ninja Monkey. “I’m not very good with these abstract strategy games – so I asked my friend Wise Snail for insights.”

“As you know,” says Wise Snail, “being the World’s slowest Spider Solitaire player I like to analyse the current game state to the Nth degree before making a move.”

***Sigh***

Why couldn’t Ninja Monkey at least ask one of my better students for advice?

“<sarcasm> What fascinating insight did you come up with this time? </sarcasm>” I ask.

“I soon realised if I wait for three seconds then the game will highlight 3 or more jewels of the same color,” replies the Wise Snail.

“So your new strategy is just wait for three seconds and then play whatever move the app suggests?”

“I know I’m not the best player, but my strategy has one important advantage: If you’re trying to prove a game is rigged then nobody can accuse you of deliberately playing sub-optimal moves to promote your desired hypothesis, null or otherwise.”

“True,” I respond. “Very true.”

 “We start with 26 moves,” says Ninja Monkey. “The goal says we need to collect 50 red, 50 blue and 50 orange jewels. If I use the suggested-move algorithm instead of random-move-algorithm then I always collect plenty of red and orange jewels but very few blue jewels.”

“That is weird,” I reply. “There is no logical reason why one colour should be favoured over another. That’s like you-know … racism or something like that.”

“I ran the following test,” says the Wise Snail. “I played 10 games on level 742, stopping whenever one of the jewel counts reaches zero or I run out of moves. I got the following results:”

Red   Blue   Orange
 0     29      8
 0     29      0
 0     22      7
 2     30      0
 8     37      0
 0     31      9
 0     31     12
 0     38      7
 3     39      0
 0     39      8

“So that means the blue number is always largest, and by a country mile,” I say.

“Of course that doesn’t tell us why it behaves that way.”

“But that’s all I need to know,” I reply. “Q.E.D. The game is rigged. Maybe I should write an angry-gram  to the developer of this game.”

“I agree,” says the Snail. Unfortunately he takes a minute just to type the word “Dear” on my phone.

“Let me have a go,” says Monkey. He can literally type at one million words per minute but unfortunately he can only produce gibberish of the highest quality.

Fine. I have to type the angry-gram myself. It takes three minutes, and I finally press Send. Whoosh!

Hmmm … perhaps it’s time for another collaboration with Ninja Monkey and the Wise Snail. For now, they’re back in the good books again. But if I catch them playing with my phone once more without my permission then I might reconsider …

THE END

Rank Imbalances

In this lesson we will examine the issues of rank imbalances. The current diagram shows the state of play after we dealt a second row of cards from the stock.

One thing you may have noticed is we seem to have a lot of Jacks but not many Tens. To be more precise we have only one Ten but six Jacks. That’s a delta of 5. If we try to construct the entire “histogram” for all card ranks we will also find several other discrepancies between other pairs of adjacent ranks such as seven Fours but not as many Threes or Fives.

Of course all players know that such imbalances are less than convenient, but how best to deal with them?

One thing to note is that we can’t affect the probability of turning over specific cards. For instance, there are two Jacks remaining and 104-49=55 cards unseen. The probability that the next exposed card is a Jack is always 2/55 no matter how well or badly we play (if we deal from the stock we can always pretend 10 cards appear sequentially instead of simultaneously). But we can mitigate the effects to some extent. For example if two Jacks are buried under a King then the effects of too many Jacks will be attenuated, but if the only Ten was buried under two Kings then that is obviously much worse.

It is beyond the scope of this post to discuss in detail how to deal with rank imbalances, but a general principle is that the more flexibility you have, the better your chances will be – and the best way to retain flexibility is to procrastinate whenever possible.

Consider the following questions:

  • Can we get back our empty column? (a good question to ask whenever at least one column has no face-down cards!)
  • Can we increase the number of in-suit builds?
  • How many guaranteed turnovers do we have? (Note that an empty column usually equates to one more turnover, but not always).
  • What would be your next play?

As an aside, here’s a question for the math geeks among you. Do you think we would be justified in complaining about our bad luck seeing that out of 49 exposed cards there are six Jacks but only one Ten?

One might try to compute the probability that out of 49 cards we will get at least six Jacks and at most one Ten. Computer simulation says the chances are 0.61 per cent.

Not so fast. Note that I specified “Jacks versus Tens” after seeing the current game state, which is clearly unfair. Either we have to guess a pair of adjacent ranks (e.g. Fours vs Threes) or alternatively include all ranks. In the latter case we might ask “what is the probability that out of 49 cards we will get at least six repeats of X and at most one Y for some pair of adjacent ranks X and Y?” Remember that X can either be Y+1 or Y-1. In this case the probability is about 12.5%.
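Instead of simulating, the specific-pair probability can be computed exactly with the hypergeometric distribution; the result should agree with the simulated 0.61 per cent (the function name and defaults are mine):

```python
from math import comb

def p_specific_pair(seen=49, total=104, copies=8, min_x=6, max_y=1):
    """Exact probability that, among `seen` cards drawn from `total`, a named
    rank X appears at least `min_x` times while a named rank Y appears at most
    `max_y` times (each rank has `copies` cards in the two-deck pack)."""
    denom = comb(total, seen)
    prob = 0.0
    for x in range(min_x, copies + 1):
        for y in range(max_y + 1):
            others = seen - x - y  # cards of the remaining eleven ranks
            prob += comb(copies, x) * comb(copies, y) * comb(total - 2 * copies, others) / denom
    return prob

print(round(100 * p_specific_pair(), 2), "per cent")  # should match the simulated 0.61
```

The “some adjacent pair” version (the 12.5% figure) is messier because the 26 ordered pairs of adjacent ranks are correlated, so simulation is the easier route there.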

If you’re really nit-picky you might also ask “why 6X versus 1Y? Why not 7X vs 2Y etc”, but you get the gist.

There is also the issue of selective memory. We might have played eight games and we only remember the one game with way too many Jacks and only a solitary Ten. And by some strange coincidence, 12.5% happens to equal the fraction 1/8.

This is probably too much mathematical detail for the average Joe Bloggs, but the point I wish to make is: don’t complain that the game is rigged unless you really know your statistics better than your alphabet.

 Now, going back to the lesson