Yet Another Digression

When I first started this blog, I was under the impression that Spider Monkeys are intelligent creatures, able to recognise legal moves in a game of Spider Solitaire and play at lightning-fast speed – albeit with less than optimal strategy. But Wikipedia says I wasn’t even close. In fact, I wasn’t even aware that spider monkeys are a thing, and I only found out by complete accident while trying to do a literature review on a subject I know very little about but that my supervisor wants me to look at.

This cartoon is based on a well-known Russian chess joke.

“Toward Agent-Based Models for Investment” is the title of a paper by J. Doyne Farmer in the 2001 AIMR Conference Proceedings. The important bit appears near the end of page 2. It says:

“The same principles apply in blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah spider monkeys. Blah blah blah monkeys blah blah blah blah blah”

At least I have learnt that spider monkeys are New World monkeys belonging to the genus Ateles, part of the subfamily Atelinae, family Atelidae. Reading the rest of the Wikipedia article and summarising the important bits in a paragraph or two is left as an exercise for the reader.

Anyways, that’s enough digressing for now. Back to the game …

Monkey Algorithm User Interface Goes Graphical!

I have continued to achieve awesomeness in all things Spider Solitaire.

Hitherto my Spider Monkey algorithm program was text-based, but I have now converted it to a graphical user interface and uploaded the new version to GitHub.

The screenshot below shows an example position from a Spider Solitaire server which I suspect may be biased. But since I am not too confident of my suspicions (unlike the other one, where even a 1-suit algorithm was enough to expose the bias), I have redacted the name of the file, which would give away the software company in question.

The graphical interface allows the user to either initialise a start position from a text file or generate a random position. The user can then get the AI to step through the hand one moveblock at a time (repeatedly clicking “single moveblock”) or fast-forward through the entire game (clicking “all moveblocks”). A moveblock is essentially a sequence of moves that exposes one or more cards, either through turning over cards in the tableau (woohoo!) or dealing from the stock (gasp!).
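
For the programmatically inclined, here is a minimal sketch of how a moveblock might be represented. The class and field names are my own invention, not necessarily what the code on GitHub uses.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Moveblock:
    """Sketch of a moveblock: a sequence of (source, destination)
    column moves that ends by exposing new information."""
    moves: List[Tuple[int, int]] = field(default_factory=list)
    deals_from_stock: bool = False   # ended with a deal (gasp!)
    cards_turned_over: int = 0       # tableau cards flipped (woohoo!)
```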

You may have noticed the game started with “ja” instead of the superior “cg” or “ij”. The reasons for this have been discussed in a previous post. The evaluation function is 10 points for a turn-over, 1 point for an in-suit build, and 100 points for removing a complete suit (note that removing a complete suit loses the point gained for each of its 12 in-suit builds, so the effective bonus is only +88 instead of +100).
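
In code, the evaluation function is simple enough. This is a sketch rather than the exact bookkeeping my program uses:

```python
def evaluate(turnovers: int, insuit_builds: int, suits_removed: int) -> int:
    """10 points per turn-over, 1 per in-suit build currently on the
    table, 100 per removed suit. Removing a suit also takes its 12
    in-suit builds off the table, hence the effective bonus of 88."""
    return 10 * turnovers + insuit_builds + 100 * suits_removed
```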

Note that there is no “equity” assigned to empty columns or columns that are close to becoming empty. This could be a future task, either for myself or someone else who wishes to contribute to my project.

I have used a four-color deck instead of the usual red-black colors found in most card games. I have also used gray for face-down cards and added a minus sign for good measure (to be color-blind friendly). The stock is shown in the bottom half of the window and I have decided to show the cards face up for ease of visibility (they could equally well have been face-down).

https://en.wikipedia.org/wiki/Four-color_deck

Here is another screenshot showing some extremely long moveblocks in action. Sometimes long move sequences are needed to resolve a critical position, but other times a long sequence is just idle moves such as oscillating an Ace between two different columns. If you see a sequence ending in [(5,9),(9,5),(5,9),(9,5),(5,9)] then it’s almost certain the algorithm is waiting to reach the move limit, which I have hard-coded as 30 – and yes, you are most welcome to download my code, get rid of the hard-coded parameters and send me a pull-request 😊

My code was written using tkinter which is a standard GUI library for Python. I only picked this up a few days ago, so I can’t claim my code is the most efficient (in terms of speed or readability), but it seems to get the job done and I am not aware of any mundafental muck-ups yet. So it’s all good 😊

My GitHub username is hilarioushappystar, which by some strange coincidence also happens to be my username on the Cracking the Cryptic discord server 😊

And now we digress …

Fun fact: if I really wanted to design the best possible AI for Spider Solitaire, I believe it would involve neural networks trained over a large number of games, just like the well-known AlphaGo program (or equivalent programs for Chess). Of course I may not have the resources to achieve this. But I am reasonably satisfied with a simple AI given that (AFAIK) there are no existing AI programs that play the Royal Game without rot13(haqb).

Look-Ahead Algorithms

By now you’ve all heard the great news: Monkey has proved it is possible to beat Spider Solitaire at the 4-suit level! All he needs to do is build in-suit at every opportunity and never break a suited connector. The bad news is the win rate is extremely low. Of course if the Monkey wins $3000 from the Eagle thanks to a single victory then we’re not complaining – but can we do even better?

First I wanna differentiate between a random hand and a custom hand. Recall that I discussed a number of YouTube videos where random players show off their awesome skills by beating a hand of 4-suit spider sans 85,78,68,79, but the unmentioned caveat was the hand was ridiculously easy.

If I pick a ridiculously easy hand then Monkey wins 6 games in 500, or 1.2 percent of the time. If I pick a random hand then Monkey wins a <sarcasm> whopping </sarcasm> zero games in 500.

So far so bad.

It turned out my simulated annealing/stochastic hill climbing/stochastic gradient descent/something else thingy was a red herring. It worked the first time, but later experimentation with the random number generator showed that “always Level 3” was the best way to go in the long run. But at least it made for a good story lol 😊

Guaranteed Minimum Evaluation Score

Good players (or avid readers of this blog, or both!) will be aware of the concept of minimum guaranteed turnovers. Basically we can calculate the minimum number of cards we are guaranteed to turn over even if the worst possible cards showed up. Of course we can do something similar for e.g. the number of suited connectors or some other “function” of the position. For instance, we might assign 10 points for flipping a card, 1 point for a suited connector and 100 points for removing a complete suit. And of course experienced players know that different programs will have different scoring systems. The important point is that we can (i) evaluate a given position and (ii) compute a guaranteed minimum score even if the worst possible cards turned up.

This is an example of “looking ahead” since we are considering the consequences of a whole sequence of moves given the available information (i.e. face-up cards), which is obviously better than choosing a single move merely because it’s an in-suit build.

Here is a simple example: suppose the evaluation function is 10 points for exposing a card, 1 point for any suited connector, and no penalty per move. Assuming best play in the start position below, what is your Guaranteed Minimum Evaluation Score (GMES) even if you turned over the worst possible cards?

I hope you answered 54. Note that we are lucky in the sense that we can build in-suit 4 times and off-suit once. Normally it’s the other way around.

A good first move is to build in-suit with the Jack-Ten of diamonds. The Ten is on one of six columns with equal-fewest cards and we also have a spare off-suit Jack. But we can make many other moves and still arrive at the same GMES of 54. You could start with the Three-Two of Clubs or even the strange-looking 9h-8d. But obviously the sensible option (for humans at least!) is Jd-0d.

The position you should be visualising is the one below, where a happy star represents any face-up card. We can easily check at a single glance that there are 5 turnovers and 4 in-suit builds, giving a total evaluation score of 5 × 10 + 4 × 1 = 54.

If we want to train our Ninja Monkey to compute the best GMES we can simply iterate over random sequences of moves from a given position. But we must be careful not to expose any new cards (e.g. if we shift the Ten of diamonds we will not flip the card underneath). Assuming we have enough iterations, Ninja Monkey will deduce that e.g. shifting the Jack of diamonds onto the Club Queen is not a good move since it reduces the GMES from 54 to 53. Even worse would be 0d-9h, which reduces the GMES to 42. We lose one guaranteed turnover and two in-suit builds with one bad move!
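
A sketch of that training procedure appears below. The helpers legal_moves, exposes_new_card, apply_move and evaluate (and the position’s copy method) are assumed interfaces, not actual functions from my repository.

```python
import random

def estimate_gmes(position, iterations=1000, max_moves=30):
    """Estimate the GMES by random rollouts that never flip a
    face-down card, so the score is guaranteed whatever lies hidden."""
    best = evaluate(position)   # doing nothing at all is always an option
    for _ in range(iterations):
        pos = position.copy()
        for _ in range(max_moves):
            safe = [m for m in legal_moves(pos)
                    if not exposes_new_card(pos, m)]
            if not safe:
                break
            pos = apply_move(pos, random.choice(safe))
        best = max(best, evaluate(pos))
    return best
```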

An example of a “good enough sequence” is: Jc-0d, 9d-8d, 3c-2d, 0d-98d, Jd-098d, Qc-Jc. Note that the first move was unnecessary but at least we got the correct GMES of 54.

Now Get Off Yer 65,82,83,69 And Start Procrastinating!

A lazy programmer would be satisfied with the above algorithm but of course it is possible to do even better than that. Note that there is no reason to commit ourselves to every move in the above sequence without examining the newly turned-over cards. After starting with Jc-0d, we can turn over a card and we still have the choice of 9d-8d, 3c-2d, 0d-98d, Jd-098d, Qc-Jc – or the new card may give us an even better option. In effect we are procrastinating. In summary, Ninja Monkey will find a sequence with the best possible GMES and execute it, terminating the sequence prematurely if he turns over at least one new card.

How do we tell if it’s time to deal a new row of cards? Simples, as the Meerkat would say. If Ninja Monkey cannot increase the GMES then it is time to deal a new row of cards. If the stock is empty then either the game is won or the Monkey concedes defeat.
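
Putting the two rules together, the procrastinating game loop might look like the following sketch. Again the helper functions are assumptions about the interface, not the real code.

```python
def play_hand(position):
    """Sketch of the procrastinating strategy: execute the best GMES
    sequence, but re-plan as soon as a new card is turned over."""
    while True:
        sequence = best_gmes_sequence(position)   # e.g. via estimate_gmes
        if gmes_improves(position, sequence):
            for move in sequence:
                position = apply_move(position, move)
                if new_card_exposed(position):
                    break                  # procrastinate: re-plan now
        elif stock_nonempty(position):
            position = deal_row(position)  # no improvement: deal
        else:
            return is_won(position)        # stock empty: win or concede
```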

With the new algorithm, Monkey beats an easy hand around half the time and a random hand 5 percent of the time. Random-hand statistics are shown in the graphic below.

Let us say that a Spider hand is ImportNumbPieAsNP-Hard if Ninja Monkey cannot obtain one victory in 50 tries. In this case roughly half the hands are ImportNumbPieAsNP-Hard. At the other end of the spectrum there was also one very easy hand with 21 wins in 50 attempts. One implication of all this is if a Spider Solitaire server gives 40 ImportNumbPieAsNP-Hard games in a row then there’s a pretty good chance it’s biased.

Unfortunately all this comes at a price. Despite Ninja Monkey’s extremely fast metabolism, this takes a very long time to run.

Improved Monkey Algorithm

Every man, dog and millipede in the animal kingdom knows by now that Monkey’s famous random move algorithm is good enough to beat Spider Solitaire at the one-suit level more than half the time.

However, there are a number of Spider Solitaire servers which seem to stack the cards against good players, and the Monkey’s algorithm is too weak to “expose” them. I have played a number of hands which I consider difficult, yet Monkey kills at the 1-suit level. The “Opaque Spider Solitaire Server” that I referred to some time ago happens to be an exception. To cover my 65,83,83 I am not naming any servers without a p-value that is five percent or worse.

In the Monkey Algorithm, every move can be identified by specifying two columns (source and destination). Let us assume that if the destination column is empty then we move the largest possible number of cards from the source column. Thus if the source column starts with 3-4-5-6-7 in spades followed by an off-suit 8, then we move five cards. In this way, specifying two columns always identifies a unique move (assuming it is legal).

Note that it is possible to break an in-suit build. For instance if the source column has 3-4-5-6-7 in-suit and the destination column has an off-suit 6 then we are breaking in-suit.

One possible idea is to forbid the monkey from ever breaking in-suit. This means e.g. if we have built a J-Q of Spades then those two cards are forever joined together. The only exception is if we separate the Jack onto the other Queen of Spades (recall that Spider is played with two decks of cards). This means the number of suited builds never decreases. For the mathematicians among you, this is an example of “entropy” – a quantity that is guaranteed to never increase (or never decrease) which can serve as some measure of how close we are to a desired goal.

One problem with this rule is the Monkey will often get fewer in-suit builds than he should. Recall that the first Project Manager started a game with 2H/AC, 2D/AS, 5D/4S, thus forfeiting the option of building in-suit with 2H/AH (if a Project Manager can 70,85,67,75 that up, what chance does a mere monkey have?). Therefore we might tell the monkey to always choose a move that increases the number of suited builds, or if no such move exists then choose a random move that does not decrease the number of suited builds.

Too Much of a Good Thing

Sometimes it is better to build off-suit than in-suit for the sake of some other gain (such as turning over a hidden card). In rare cases one might even be justified in breaking an in-suit build (*). We can experiment with a mixed strategy where Monkey will look for an in-suit build, say, 90% of the time and settle for off-suit 10% of the time. This way, Monkey should increase the number of in-suit builds in the long run, but still have the opportunity to take care of situations where off-suit happens to be the better play.

(*) A simple example: Consider the task of moving a sequence 5S-4H-3H-2D onto a Six of any suit, assuming you have one empty column and a “spare” Four of Clubs, and all other columns have picture cards.

In summary we have four different algorithms (which I refer to as levels) where moves are chosen randomly subject to the constraints below; a sketch of the level logic appears after the notes. For the last level we have to specify a parameter determining the probability of choosing Level 3 or Level 2.

NOTE: We ignore other parameters such as number of moves before dealing a new row of 10 cards.

  • Level 1: Monkey can break in-suit builds
  • Level 2: Monkey never breaks in-suit builds
  • Level 3: Monkey will always build in-suit if possible, otherwise it will maintain the status quo (i.e. not break an in-suit build)
  • Level 4: With probability 0<p<1, choose Level 3, else choose Level 2.

NOTE: Moving e.g. the Nine of spades onto the Ten of spades is not building in-suit if the Nine is already on the other Ten of Spades.
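
Here is the promised sketch of the level logic. The helpers legal_moves, increases_insuit and breaks_insuit are placeholders for the obvious board queries, not functions from my actual code.

```python
import random

def choose_move(position, level, p=0.9):
    """Pick a random move subject to the constraint for each level."""
    if level == 4:
        level = 3 if random.random() < p else 2   # Level 4 mixes 3 and 2
    moves = legal_moves(position)
    if level == 3:
        insuit = [m for m in moves if increases_insuit(position, m)]
        if insuit:
            return random.choice(insuit)          # build in-suit if we can
    if level in (2, 3):
        moves = [m for m in moves if not breaks_insuit(position, m)]
    return random.choice(moves) if moves else None  # Level 1: anything goes
```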

Experiment

To test these algorithms, I chose the game [1]. Monkey’s 1-move algorithm says this is a relatively easy hand with an equity of 0.88 if the game is one-suited.

[1] https://www.youtube.com/watch?v=5b9SxWEZpbI

I recorded the number of wins and the average number of suits removed over 200 games per level. The probability value p is some multiple of 0.1.

The important bit is the last row. If we choose level 3 with probability 0.9 then Monkey actually manages to win 1 game in 200. Admittedly this is terrible by Project Manager standards, but it’s not too shabby for a Monkey!

Level         Number of wins   Average suits removed
1             0                0
2             0                0
3             0                0.065
4 (p = 0.1)   0                0.015
4 (p = 0.2)   0                0.01
4 (p = 0.3)   0                0.04
4 (p = 0.4)   0                0.035
4 (p = 0.5)   0                0.04
4 (p = 0.6)   0                0.1
4 (p = 0.7)   0                0.06
4 (p = 0.8)   0                0.065
4 (p = 0.9)   1                0.095

You might ask what makes p = 0.9 work so well (recall that 1 win in 200 is pretty good by Monkey standards). The simplest explanation is by analogy with a ball rolling down a hill. The y-axis represents the number of in-suit builds we still need to complete the game; if we reach 0 in-suit builds required then we always win.

[Image: rolling_ball]

Suppose at any point the ball is only allowed to “look” within a local neighbourhood. You have three options:

  • Move randomly
  • Always take the local minimum
  • Take the local minimum with probability 90%, move randomly with probability 10%

A little thought should convince you that the third option is the best. I will not go into detailed programmatic or mathematical specifics. Doing any programming or mathematical 77,65,83,84,85,82,66,65,84,73,79,78 is left as an exercise for the reader!
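
If you’d rather convince yourself empirically, here is a self-contained toy simulation of the rolling ball. The landscape is made up, but it has the right features: a global slope with small bumps that trap a purely greedy ball.

```python
import random

def roll(policy, steps=1000, p=0.9):
    """Roll a ball on a bumpy valley f(x), seeing only x-1, x, x+1.
    Returns the final height (lower is better)."""
    f = lambda x: abs(x) + 3 * (x % 4 == 0)   # overall slope plus bumps
    x = 50
    for _ in range(steps):
        if policy == "random" or (policy == "mixed" and random.random() > p):
            x += random.choice([-1, 1])         # blind random step
        else:
            x = min((x - 1, x, x + 1), key=f)   # greedy: lowest visible point
    return f(x)

for policy in ("random", "greedy", "mixed"):
    avg = sum(roll(policy) for _ in range(200)) / 200
    print(f"{policy:7s} average final height: {avg:.1f}")
```

The greedy ball parks itself on the first ledge it meets, the purely random ball wanders aimlessly, and the 90/10 mixture rolls essentially all the way to the bottom.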

Artificial Stupidity in Chess

You may remember some time ago I discussed an algorithm for Spider Solitaire that is not very good: it simply outputs random moves. It turns out somebody did a much better job in the game of chess. Some dude designed no fewer than 30 Artificial Stupidities, organised a Tournament of Fools, and published a number of papers in SIGBOVIK. Ideas for weird algorithms include color preference (e.g. White prefers to play pieces onto light squares), random moves, blindfold algorithms (simulating a novice trying to play blindfold), algorithms based on mathematical constants like π and e, single player (pretending the opponent will pass) and linear interpolation between Stockfish and some other lousy algorithm (e.g. choose Stockfish’s best move with probability p, the lousy move with probability 1-p). But my favourite algorithm was the Mechanical 68,79,82,75 that proved a forced win for Black after 1 d2-d4?? a7xd4!! checkmate 🙂

You can watch all the fun in the video below:

I’m not sure if these ideas will be applicable to Spider Solitaire. Color preference is easy since we can prefer to move red cards or black cards, and single-player is even easier given the nature of the game, but I am not aware of any equivalent of Stockfish. Mathematical constants should be easy but probably not very interesting. It may be possible to simulate a blindfold (human) player who struggles to remember every card, but I’m not sure how to do that yet. And I don’t know of a (sensible) variant of Spider Solitaire where all the red cards are replaced with chess pieces. Since Western chess has Black vs White, it may be more appropriate to use Xiangqi, which has Red vs Black pieces. Perhaps something to think about for next time.

Thanks to my good friend Tristrom Cooke for the heads up.

Introduction to the Monkey Algorithm

If you are reading this blog you probably have some familiarity with my friend Ninja Monkey by now. If you are not reading this blog … then that would just be weird.

As a brief reminder, the monkey looks like this:

The monkey plays really fast, but his strategy is not very good because he simply makes random moves. Now let us try to define a random move strategy.

The simplest random move algorithm is:

  • At any game state, enumerate the set of all legal moves
  • Assume each move occurs with equal probability

For instance, suppose we started a game with 44445555JK. There are 16 possible ways to move a Four onto a Five and we also have the option of dealing 10 cards from the stock. That gives 17 moves total, so each move occurs with probability 1/17.

As you might expect, the random move strategy doesn’t win too many games at the 4-suit level. So we will start with the 1-suit level.

We first observe that dealing from the stock is usually undesirable unless “no further improvement is possible”. So we can impose a move limit of, say, 1000 moves and specify the monkey must deal a new row whenever the move counter is a multiple of 1000 (if there are no legal moves then keep passing until the move counter reaches a multiple of 1000).

We all know that when a player has, e.g., 4 suits removed and all cards exposed, the game can pretty much be won on autopilot. However, making random moves can be problematic. Consider the following game state, which is about as easy as it gets for an experienced player:

In this diagram even my Dad can win this game in one move. However, let us count the number of legal moves available.

Assuming we don’t win in one move, the stack of 8-7-6-5-4-3-2-A can be moved to any of eight empty columns. Similarly, the other stack of K-Q-J-0-9 can move onto any of eight empty columns. That brings us to 17 moves including the one we want. Our chance of winning in one move is 1/17.

But wait, there’s more! (terrible cliché, I know). We can also split a sequence, e.g. moving the 4-3-2-A or the J-0-9 onto the left-most column. A little thought shows that any of the 13 cards can move onto eight empty columns, taking whatever is on top of it. Therefore we have 8*13 = 104 legal non-winning moves, and our chance of winning in one move is therefore 1/105.

To help the monkey we add the following rules (a sketch of the resulting move generator follows the list):

  • Ninja Monkey cannot split a sequence onto an empty column. For instance the 5-4-3 from 8-7-6-5-4-3 cannot be moved onto an empty column (but can be moved to another exposed 6). Note that this move is only useful at the 2- or 4-suit level.
  • Ninja Monkey cannot shift the entire contents of a column onto another empty column.
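
As promised, here is a self-contained sketch of the constrained move generator. Columns are lists of (rank, suit, face_up) tuples from bottom to top; this representation is my own choice for illustration, not necessarily the one in my code.

```python
def movable_run(column):
    """Length of the in-suit, descending, face-up run atop a column
    (assumes the top card is face up, as it always is mid-game)."""
    n = 1
    while (n < len(column)
           and column[-n][2] and column[-n - 1][2]       # both face up
           and column[-n][1] == column[-n - 1][1]        # same suit
           and column[-n][0] == column[-n - 1][0] - 1):  # descending
        n += 1
    return n

def legal_moves(tableau):
    """All (source, dest, count) moves under the two extra rules."""
    moves = []
    for s, src in enumerate(tableau):
        if not src:
            continue
        run = movable_run(src)
        for d, dst in enumerate(tableau):
            if d == s:
                continue
            if dst:
                # Any suffix of the run may land on a card one rank higher.
                for k in range(1, run + 1):
                    if dst[-1][0] == src[-k][0] + 1:
                        moves.append((s, d, k))
            elif run < len(src):
                # Empty column: move the whole run, but never the
                # entire column (rules 1 and 2 above).
                moves.append((s, d, run))
    return moves
```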

With these constraints, one can see that there is now only one legal move in the above diagram, so our chance of winning in one move is 100%. It is also worth noting that specifying source and destination columns is enough to identify a unique move (assuming it is legal).

Another technicality I added was that Ninja Monkey can deal another row of 10 cards even if one or more columns are empty. This is mainly to prevent a stalemate if there are fewer than 10 cards in the tableau. It also simplifies the algorithm a little bit. In any case, I don’t see how this rule should seriously affect a player’s win rate.

The final algorithm is as follows (with legal moves described as above):
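
Since the original listing isn’t reproduced here, the following sketch shows the algorithm as I have described it above. The game object and its methods are placeholders; legal_moves is the generator sketched earlier.

```python
import random

def monkey_plays(game, move_limit=1000):
    """Random-move monkey: deal a fresh row whenever the move counter
    hits a multiple of move_limit, otherwise play a random legal move."""
    counter = 0
    while not game.is_won():
        counter += 1
        if counter % move_limit == 0:
            if game.stock_empty():
                return False        # nothing left to deal: concede
            game.deal_row()         # allowed even with empty columns
            continue
        moves = legal_moves(game.tableau)
        if moves:
            game.apply(random.choice(moves))
        # with no legal move, keep "passing" until the counter forces a deal
    return True
```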

Fun fact: with this algorithm, I found that Ninja Monkey indeed beats the 1-suit level approximately 62% of the time. The reader is encouraged to experiment with this algorithm. Perhaps he, she or ze can replicate my results, or find some further tweaks to improve the monkey’s win rate.

Exercise for the interested reader: can you design an algorithm that does better than the random move algorithm? If yes, which animal in the animal kingdom should the algorithms be named after? 😊

Introduction to Artificial Stupidity

In a previous post, we looked at various features of a starting position such as the number of guaranteed turnovers and guaranteed suited turnovers. This means we can study two different starting positions and say that one is perhaps better than the other. However, ultimately we are interested in our chances of winning. If, for instance, my Dad was really awful at Spider and lost every game regardless of how good the start position is, the number of guaranteed turnovers wouldn’t be particularly relevant.

Let us consider the following question: what are the chances of victory given we have N guaranteed turnovers at the start of the game? Obviously we would expect the more turnovers we start with, the greater our chances of winning.

I guess the obvious thing to do would be to play 1 million games of Spider Solitaire on my PC, record the number of guaranteed turnovers at the start, play each game to the best of my ability without 85,78,68,79 and record the result (either win or loss). After all, I’m addicted to Spider Solitaire, my blog is about the Spider Solitaire, the whole Spider Solitaire and nothing but the Spider Solitaire, and I consider myself to be the world’s greatest expert on Spider Solitaire. Unfortunately, playing 1 million games is time-consuming even by my standards. Perhaps I could program an AI to play the games for me. After all, I have a math Ph.D., I have published a number of math-related papers, I have a steady job, and I know a programming language or three. Last year, I created a Flappy Bird cover of the famous Wintergatan Marble Machine using Matlab and Audacity … Uh oh, I’ve just realised that designing an algorithm to play Spider well is not so trivial. So perhaps we could compromise by designing an AS, where S stands for Stupidity.

ENTER MY FRIEND, NINJA MONKEY

Fortunately I have an imaginary friend called Ninja Monkey (not to be confused with Monkey Magic) who is fluent in many languages such as Python-tongue (the language of pythons as well as other magical serpentine creatures), Javanese and C-plus-plus-speak. Thanks to an amazingly fast metabolism, Ninja Monkey is able to play 100 games of Spider Solitaire inside three minutes. On the down-side, his random move strategy is not very good, and he is yet to realise his dream of winning a single game of Spider Solitaire at the Four-Suit level. Nevertheless, he is able to win more than half the time if he pretends each game is played at the one-suit level. Not to mention that he is cute and friendly, and willing to give me a hug when I’m feeling down 😊

If you think about it, this gives us a way to estimate our chances of winning a game given a certain number of guaranteed turnovers. Given a large number of games, Ninja Monkey can record the number of guaranteed turnovers for a starting hand, play random moves pretending all cards are the same suit, and report his result (win or loss). For instance, let us suppose that he plays 1000 games, 100 of which start with three turnovers. If NM wins 37 of those and loses the remaining 63, then we can reason that our winning chances are 37% given 3 turnovers. It’s likely not the most accurate estimate, but I guess we gotta start somewhere, if you’ll excuse the cliché!
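
Tallying Monkey’s results is then a short, self-contained exercise; here is a sketch:

```python
from collections import defaultdict

def win_rate_by_turnovers(results):
    """results: iterable of (guaranteed_turnovers, won) pairs.
    Returns the estimated win probability per turnover count."""
    games, wins = defaultdict(int), defaultdict(int)
    for turnovers, won in results:
        games[turnovers] += 1
        wins[turnovers] += int(won)
    return {t: wins[t] / games[t] for t in sorted(games)}

# e.g. 100 games with 3 turnovers, 37 of them won  ->  {3: 0.37}
print(win_rate_by_turnovers([(3, True)] * 37 + [(3, False)] * 63))
```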

With the help of my Ninja Monkey, I collated the following results:

We immediately notice that the win ratio indeed increases as we increase our guaranteed turnovers. That is, if we ignore the division-by-zero error in column 9. That’s not so surprising when you think about it. After all, 9 turnovers implies a run of ten cards such as 3456789TJQ, and the chances of starting with a run of ten cards are pretty slim. Also, there are very few games with 0 or 8 turnovers, so those stats are extremely unreliable. With 1 or 7 turnovers we don’t have many data points, so these may be doubtful. Around 4 turnovers, the results should be pretty reliable. I could have done more than 1000 iterations, but you get the gist.

ADVANTAGES OF AS OVER AI

You may be justified in asking why anyone would be interested in an algorithm that just outputs random moves and can’t even beat my dad at chess.

It turns out Artificial Stupidity has some important advantages over its better-known sibling, Artificial Intelligence. As I already alluded to earlier, it’s often easier to come up with an AS than an AI. Also, Artificial Stupidities are easier to replicate than Artificial Intelligences, so it is easy for Joe Bloggs to design his own algorithm and confirm my figures are correct, or approximately correct. But we shouldn’t be dismissive of AI either. Beating the top human players at Go, Chess or Backgammon is no small feat. I believe that any artificial entity can be beneficial to humankind, whether it be intelligent or stupid – provided it is used for the right purpose!

Steve N Brown (the author of a Spider Solitaire book I alluded to in an earlier post) attempted to compile his own statistics for win ratio vs guaranteed turnovers. He got the following results:

We immediately notice there are only 306 games instead of 1000, so it is not surprising that division by zero occurs for 8 or 9 guaranteed turnovers. Also, there is a weird drop to a 0.37 win ratio for 3 guaranteed turnovers. This suggests either that 62 data points are insufficient or that guaranteed turnovers are a very poor indicator of the overall chance of winning at the Four-Suit level. After all, Spider Solitaire is not a Scratch-n-win Lotto ticket – you can’t win a prize just because you start with 10 good numbers; you’ve got to earn your prize! It is certainly possible for an expert to recover after a poor start (or conversely a beginner to 70,85,67,75 up after a good one). And of course I can’t blame Steve for not playing 1000 games.

Steve has also compiled similar stats for other features such as Suited Guaranteed Turnovers or multiplicity, but this is outside the scope of this blog post.

SUMMARY

I believe neither Ninja Monkey nor Steve Brown has a definitive answer to the question of the win rate as a function of the number of guaranteed turnovers (assuming expert play sans 85,78,68,79). Playing at the 1-suit level is far too large a handicap for Ninja Monkey’s results to be meaningful, and Steve has too few games to make reliable conclusions. So perhaps I do need to design an Artificial Intelligence that can play well at the Four-Suit level. Or someone else can design one. If the latter, I would be perfectly happy to acknowledge I am not the world’s greatest Spider player (but I would still consider myself a GM 😊 )

Despite the negative result, I wouldn’t call this experiment an exercise in futility. The journey is more important than the destination, and hopefully this is food for thought for the interested reader, if you excuse the numerous clichés. For now, it’s farewell to the Ninja Monkey. Hopefully we might meet him again in better times. Oh, and I’m still looking for a way to get in touch with Steve Brown – so if you know him (or even better, if you ARE him) please leave a comment!

Oh, if your name is Martin Molin and you like my Flappy Bird cover of the Marble Machine, please leave a comment too!