Now that the year is drawing to a close and the holidays are upon us, I finally have time to revisit the things that really matter in life, including the Royal Game.
Some time ago I uploaded my Ninja Monkey code to GitHub (my username is hilarioushappystar) and now I wish to discuss this in more detail.
To generate a “result” I need two seeds, which I call the “game seed” and the “monkey seed” for lack of better alternatives. Let us assume a pair of seeds is written as (g, m), where g and m represent the game and monkey seeds respectively.
The game seed determines the initial position. The monkey seed determines the monkey’s behaviour. For instance, if the seeds are (27, 1) then the monkey would always start with the move “ca”, but if the seeds were (27, 2) then the monkey would prefer the move “cb” in the same starting position.
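The two-seed idea can be sketched in a few lines of Python. This is an illustrative sketch, not my actual implementation: the function names, the 104-card deck representation, and the list of legal moves are all assumptions made up for this example.

```python
import random

def deal_tableau(game_seed):
    # The game seed alone fixes the initial position:
    # the same seed always produces the same shuffled deck.
    deck = list(range(104))  # Spider Solitaire uses two standard decks
    random.Random(game_seed).shuffle(deck)
    return deck

def monkey_first_choice(monkey_seed, legal_moves):
    # The monkey seed alone fixes the monkey's behaviour:
    # the same seed always picks the same move from the same options.
    return random.Random(monkey_seed).choice(legal_moves)

# Changing only the monkey seed changes the move, not the deal.
moves = ["ca", "cb", "cc", "da"]  # hypothetical legal first moves
assert deal_tableau(27) == deal_tableau(27)
print(monkey_first_choice(1, moves), monkey_first_choice(2, moves))
```

Because each seed drives its own random number generator, the pair (g, m) pins down the entire game, which is what makes the results below reproducible.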
The screenshot below shows a game seed of 27 and a monkey seed of 1. The monkey loses with 24 face-down cards remaining. I do not claim this to be a paragon of virtue from a software engineering perspective.

Of course, the result is pseudo-random in the sense that with game seed = 27 and monkey seed = 1 the monkey always loses with exactly 24 face-down cards. At least that’s the result I get on my machine. Your machine may yield different results, but if your monkey loses with e.g. 16 face-down cards the first time, then it should lose with 16 face-down cards every time.
Here are my results. Note that the game seeds are not simply indexed from 1 to 20. By using different numbers for the game and monkey seeds, I reduce the chance of confusion when trying to reproduce these results.
Note that these numbers also encode the result of the game: if at least one face-down card remains, the monkey must have lost. I assume the monkey always wins if it manages to expose every card in the tableau (otherwise I could always put an asterisk next to a zero if the unthinkable happens).
| Game seed | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| 27 | 24 | 27 | 27 | 19 | 10 | 17 | 29 | 26 | 22 | 14 |
| 82 | 17 | 24 | 18 | 20 | 9 | 25 | 16 | 18 | 20 | 22 |
| 41 | 14 | 21 | 10 | 11 | 27 | 27 | 15 | 19 | 20 | 17 |
| 124 | 26 | 9 | 33 | 18 | 21 | 26 | 26 | 30 | 23 | 30 |
| 62 | 27 | 23 | 27 | 23 | 21 | 26 | 21 | 22 | 21 | 25 |
| 31 | 18 | 24 | 0 | 24 | 14 | 25 | 16 | 28 | 26 | 20 |
| 94 | 16 | 17 | 29 | 25 | 15 | 26 | 24 | 27 | 26 | 20 |
| 47 | 18 | 18 | 20 | 21 | 24 | 28 | 18 | 22 | 26 | 23 |
| 142 | 14 | 11 | 21 | 16 | 23 | 23 | 9 | 15 | 19 | 9 |
| 71 | 4 | 27 | 13 | 17 | 21 | 22 | 12 | 21 | 8 | 8 |
| 214 | 23 | 33 | 32 | 22 | 22 | 30 | 22 | 32 | 20 | 33 |
| 107 | 17 | 14 | 17 | 21 | 18 | 18 | 16 | 18 | 0 | 15 |
| 322 | 27 | 26 | 24 | 22 | 26 | 22 | 32 | 25 | 22 | 30 |
| 161 | 20 | 13 | 15 | 14 | 19 | 11 | 16 | 12 | 12 | 12 |
| 484 | 14 | 14 | 14 | 14 | 11 | 9 | 8 | 11 | 0 | 11 |
| 242 | 25 | 15 | 0 | 5 | 25 | 14 | 23 | 25 | 19 | 23 |
| 121 | 17 | 20 | 0 | 27 | 26 | 17 | 21 | 27 | 18 | 21 |
| 364 | 26 | 28 | 19 | 24 | 22 | 20 | 18 | 25 | 19 | 24 |
| 182 | 25 | 23 | 25 | 28 | 26 | 22 | 18 | 31 | 17 | 23 |
| 91 | 9 | 15 | 10 | 13 | 12 | 17 | 7 | 7 | 17 | 14 |

(Rows are game seeds; columns 1–10 are monkey seeds. Each entry is the number of face-down cards remaining at the end of the game.)
We can make a few observations:
- Out of 20 hands, there are 5 in which the Monkey scores exactly one victory. In the other 15 hands the Monkey never wins.
- These five wins appear in only two columns (monkey seeds 3 and 9). This is a statistical glitch: there is no logical reason why two games with the same monkey seed but different game seeds should be correlated. I blame the small sample size 😊 (about the only utility of the columns is to assist in reproducing the raw results).
- Some hands look really bad. For instance, in game 214 the Monkey always has at least 20 face-down cards at the end of the game.
- Other hands look promising; for instance, in game 484 the Monkey has at most 14 cards remaining.
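These observations can be checked mechanically. Below is a short Python snippet containing the raw numbers from the table; the variable names are mine, not anything from the Ninja Monkey code.

```python
# Rows are game seeds; each list holds the face-down counts for monkey seeds 1-10.
results = {
    27:  [24, 27, 27, 19, 10, 17, 29, 26, 22, 14],
    82:  [17, 24, 18, 20, 9, 25, 16, 18, 20, 22],
    41:  [14, 21, 10, 11, 27, 27, 15, 19, 20, 17],
    124: [26, 9, 33, 18, 21, 26, 26, 30, 23, 30],
    62:  [27, 23, 27, 23, 21, 26, 21, 22, 21, 25],
    31:  [18, 24, 0, 24, 14, 25, 16, 28, 26, 20],
    94:  [16, 17, 29, 25, 15, 26, 24, 27, 26, 20],
    47:  [18, 18, 20, 21, 24, 28, 18, 22, 26, 23],
    142: [14, 11, 21, 16, 23, 23, 9, 15, 19, 9],
    71:  [4, 27, 13, 17, 21, 22, 12, 21, 8, 8],
    214: [23, 33, 32, 22, 22, 30, 22, 32, 20, 33],
    107: [17, 14, 17, 21, 18, 18, 16, 18, 0, 15],
    322: [27, 26, 24, 22, 26, 22, 32, 25, 22, 30],
    161: [20, 13, 15, 14, 19, 11, 16, 12, 12, 12],
    484: [14, 14, 14, 14, 11, 9, 8, 11, 0, 11],
    242: [25, 15, 0, 5, 25, 14, 23, 25, 19, 23],
    121: [17, 20, 0, 27, 26, 17, 21, 27, 18, 21],
    364: [26, 28, 19, 24, 22, 20, 18, 25, 19, 24],
    182: [25, 23, 25, 28, 26, 22, 18, 31, 17, 23],
    91:  [9, 15, 10, 13, 12, 17, 7, 7, 17, 14],
}

# A zero means the Monkey exposed every card, i.e. a win.
wins = {g: row.count(0) for g, row in results.items() if 0 in row}
win_columns = {i + 1 for row in results.values() for i, v in enumerate(row) if v == 0}
print(wins)         # five hands, each with exactly one win
print(win_columns)  # the wins fall in only two columns
print(min(results[214]), max(results[484]))  # worst and most promising hands
```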
This is an example of an “exploratory analysis” (as opposed to an explanatory analysis). I’m trying to get familiar with the data, and I don’t have a specific hypothesis that I’m trying to prove. Of course, the more data you collect, the better your chances of finding something interesting. For instance, I could have chosen 15 monkey seeds instead of 10, or 50 game seeds instead of 20.
Once you have completed your exploratory analysis, you might be able to form a concrete hypothesis about a spider program which you suspect to be dodgy. For instance, suppose that Shay Dee Games releases a new version of the Royal Game and we find that in every hand, either the Monkey consistently gets 10-or-less cards face-down or consistently gets 20-or-more cards face-down at the end of the game. We would suspect something is amiss, even if Shay Dee has the “correct” average win rate computed over all games. Of course, all this assumes we are able to determine the identity of every face-down card.
When testing a concrete hypothesis, things start to get technical. ’Twas brillig slithy toves gyre gimble wabe blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah Kolmogorov-Smirnov test blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah – or if you want the less technical version, yes we have formally proved that Shay Dee Games is indeed rigged.
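For readers who want slightly more than Jabberwocky-ese: the two-sample Kolmogorov-Smirnov statistic is just the largest gap between the empirical distribution functions of two samples, and it can be computed in pure Python. The sketch below is illustrative only; the two samples are invented numbers, and in practice you would compare the statistic against the standard KS critical value for your sample sizes (or simply call `scipy.stats.ks_2samp`).

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic:
    the largest gap between the two empirical CDFs."""
    def ecdf(sample, x):
        # Fraction of the sample less than or equal to x.
        return sum(1 for v in sample if v <= x) / len(sample)
    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in points)

# Hypothetical face-down counts, made up for illustration:
old_version = [24, 17, 14, 26, 27, 18, 16, 18, 14, 4]  # one broad hump
new_version = [2, 3, 28, 1, 29, 30, 2, 27, 1, 28]      # suspiciously bimodal
print(ks_statistic(old_version, new_version))
```

A large statistic says the two samples are unlikely to come from the same distribution, which is exactly the kind of evidence you would want before accusing Shay Dee Games of anything.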
Note also that I have omitted certain stats. For instance, I could have recorded the maximum number of empty columns obtained at any stage of the game, the number of suits removed, or the number of levels in Toy Blast I manage to beat before the monkey finishes the game. I have also omitted discussion of specific programs that I suspect to be biased. The important point is that the reader has something to go on if he wishes to investigate the veracity of a specific Spider Solitaire program.
Until next time, happy Spider Solitaire playing!