Toy Blast – It’s A Force Delete

It’s official – I was finally forced to delete Toy Blast from my phone. Some time ago I thought this was one of the cool games, but it seems the random number generator is rigged, just like every other game. Of course, I recognise this is only my opinion, which I can’t force onto others. All I can do is turn to page 7374 of the Big Book of Clichés and say “judge for yourself”.

The problem is that if you design a game with a rigged RNG, it's probably too easy for a player to design an experiment that exposes it. The culprit is level 7374, which looks like this:

It is beyond the scope of this document to explain the rules of Toy Blast in detail. The essential points are:

  • To beat this level, one of the requirements is getting at least three slices of bread. This is difficult since there is only one toaster which cannot be accessed with ordinary cubes. Power-ups are a must.
  • The basic power-ups are rotors, TNT and the Rubik's Cube. A Rubik's Cube is always a particular colour (one face is a solid colour; the other faces are random).
  • Power-ups are obtained by matching 5 or more ordinary cubes of the same colour in one move.
  • Combos can be obtained by getting two or more adjacent power-ups. Adjacent means touching horizontally or vertically, but not diagonally.
  • If the player activates a Rubik’s Cube + rotor combo then every ordinary cube with matching colour turns into a rotor that can be oriented horizontally or vertically.

Let us define a "coin-flip" as follows. The player activates a Rubik's Cube + Rotor combo when exactly one of the four cells in the top row is a matching colour. The following four images show examples of coin-flips. For instance, the first diagram has Y-B-G-B in the top row and a Yellow Rubik's Cube next to a Rotor in column 6.

Note that if more than one cell is the matching colour, we do not count it as multiple coin flips. For instance, if the top row was Y-G-G-G and we activated a Green Rubik’s Cube + Rotor combo then I do not count that as three coin-flips. It’s not even worth one coin-flip. It’s ZERO coin-flips – the same number of points you get for playing a phoney in Scrabble.

The term “coin-flip” should be self-explanatory. If (in the first diagram) every yellow cube turns into a rotor then the yellow cube in the top row will collect one bread with 50% probability. Any other yellow cube on the board cannot collect bread, even if the player were allowed to call directions for every individual rotor. As every student of probability knows, all this assumes rotors are indeed horizontal or vertical with 50% probability.

I played level 7374 multiple times with the objective of getting as many coin flips as possible (not necessarily maximising my chances of beating the level). With enough skill and luck, it’s possible to get three coin flips in a single game. After sufficiently many games I got 50 coin flips and every time the rotor was vertical instead of horizontal. I did not hit the toaster once in 50 coin flips. Yes – Every Single Time.
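Assuming rotors really are horizontal or vertical with 50% probability each, the chance of 50 vertical rotors in a row is easy to compute. A minimal sketch in Python (the numbers come straight from the games described above):

```python
from fractions import Fraction

# Probability that a fair "coin" (rotor orientation) lands the same way
# 50 times in a row, assuming each flip is independent with p = 1/2.
p_single = Fraction(1, 2)
n_flips = 50
p_all_vertical = p_single ** n_flips

print(p_all_vertical)          # exact fraction: 1 / 2^50
print(float(p_all_vertical))   # roughly 9e-16
```

In other words, under the fair-rotor assumption this outcome is about a one-in-a-quadrillion event. Judge for yourself.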

On one occasion, I was down to my last move (see images above) and was destined to fail the level for multiple reasons. And the software still saw fit to give me the vertical rotor in the top row.

Just out of interest, I have already beaten level 7374 (I'm up to 7377) and managed to beat all levels without spending any coins (e.g. getting 5 extra moves or strategically changing the colour of a single cube). It's hard to avoid the conclusion that Toy Blast is trying to punish players who are playing a little too well.

It is quite possible that something untoward happened to Peak Games (e.g. acquired by another company) when I wasn’t looking – but I do not intend to discuss this in detail. I recommend the reader google search “Peak Games” and “Zynga”.

And if you happen to be a game designer – don’t even think about it. You are not fooling anybody.

Let me repeat that for the 7374th time:

You. Are. Not. Fooling. Anybody.

It goes without saying the same holds for Spider Solitaire. I believe we have reached the point where, if Joe Bloggs claims the game is rigged, the onus of proof should be on the game developers, not the player.

To be fair, I should point out that Toy Blast survived far longer than some other game apps on my phone. Some really awful games (both match-three and card games) basically “didn’t even try to hide it”.

Math Just Got Weird

TreeCardGames, the rogue Spider Solitaire company that started it all, is resembling the number 3, back in the spotlight and losing its religion. This time the subject is manipulating reviews.

We’ve all seen a number of game companies that were somehow born to suck, could never release a half-decent game, and figured the only way to make a presence is to fudge the reviews and make them look better than they really are.

There are many ways to game the system, terrible pun notwithstanding. You can cherry pick the good reviews and hide the bad ones. You can write a review for your own game by posing as a customer. You can create bots to sing the praises of Hero Wars or whatever your rot13(cvrpr bs fuvg) happens to be called. You could give players the chance to give a 5-happy-star review for 10 extra coins after beating every single level. If you were the CEO of Shay Dee Games, then you could demand your employees do one or more of the above. It's been done before, and googling is left as the proverbial exercise for the reader. The possibilities are endless – you can pretty much do anything, provided it obeys the laws of physics.

Treecardgames has a small number of reviews that can literally be counted on the fingers on a single human hand. We can see the scores are 1,1,1,2,3 and yet the overall score happens to be 2.6.

I’m not sure where the 2.6 comes from. It certainly ain’t the mean or median. I doubt it would be the skew or kurtosis. It might be some fancy formula I’ve never heard of, such as computing the arithmetic and geometric mean, calling them foo and bar, then computing the arithmetic/geometric mean of foo and bar, rinsing and repeating until you converge to the limit. Maybe some of the scores are “doublers”, an idea borrowed from Cracking The Cryptic. Then the math would actually work, if you apply doublers to the right scores.
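A quick sanity check rules out the obvious candidates. The sketch below (Python) computes the mean, median and geometric mean of the five scores, and even iterates the arithmetic/geometric "foo and bar" scheme to its limit; none of these comes anywhere near 2.6:

```python
import statistics

scores = [1, 1, 1, 2, 3]

mean = statistics.mean(scores)              # 1.6
median = statistics.median(scores)          # 1
gmean = statistics.geometric_mean(scores)   # about 1.43

# Iterate the arithmetic/geometric mean ("foo" and "bar") until they
# converge to their common limit (the arithmetic-geometric mean).
foo, bar = mean, gmean
while abs(foo - bar) > 1e-12:
    foo, bar = (foo + bar) / 2, (foo * bar) ** 0.5

print(mean, median, gmean, foo)  # all well below 2.6
```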

Maybe TreeCardGames was thinking that if there were about 500 reviews with an average score of 1.6, then they could pretend the average score was in fact 2.6 and the average reader would be too lazy to verify whether the math was correct. But as you can see, this is not a viable dodge.

Unless I hear back from TreeCardGames explaining how to derive an overall score of 2.6 given individual scores of 1,1,1,2,3 this is one more entry to the list of reasons why rot13(GerrPneqTnzrf fhpxf!!!!!). For what it’s worth, I think the overall score should be -11123 out of 5 and deriving the number -11123 is left as an exercise for the reader.

Have you come across similar examples of hilariously bad calculation of overall scores? Bonus points if the correct overall score is LOWER than the given overall score 😊 Let me know in the comments!

The Things That Really Matter

Now that the year is drawing to a close and holidays are upon us, I have time to revisit the things that really matter in life, including the Royal Game.

Some time ago I uploaded my Ninja Monkey code to GitHub (my username is hilarioushappystar) and now I wish to discuss this in more detail.

To generate a "result" I need two seeds which I call "Game seed" and "Monkey seed" for lack of a better alternative. Let us assume a pair of seeds is written as (g,m), where g and m represent the game and monkey seeds respectively.

The game seed determines the initial position. The monkey seed determines the monkey’s behaviour. For instance, if the seeds are (27, 1) then the monkey would always start with the move “ca”, but if the seeds were (27, 2) then the monkey would prefer the move “cb” in the same starting position.

The screenshot below shows a game seed of 27 and a monkey seed of 1. The monkey loses with 24 face-down cards remaining. I do not claim this to be a paragon of virtue from a software engineering perspective.

Of course, the result is pseudo-random in the sense that with game seed = 27 and monkey seed = 1 the monkey always loses with exactly 24 facedown cards. At least that’s the result I get on my machine. Your machine may yield different results, but if your monkey loses with e.g. 16 face down cards the first time then it should lose with 16 face down cards every time.
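To illustrate the seeding pattern (this is not my actual Ninja Monkey code – just a toy sketch, and monkey_result is a made-up stand-in for a full game simulation):

```python
import random

def monkey_result(game_seed, monkey_seed):
    """Toy stand-in for a seeded monkey run.

    Combines both seeds into one integer so that the same (g, m) pair
    always produces the same pseudo-random stream, hence the same result.
    """
    rng = random.Random(game_seed * 1000 + monkey_seed)
    # Pretend the monkey plays a game and finishes with some number
    # of face-down cards (0 means it exposed everything and won).
    return rng.randint(0, 44)

# Same seed pair -> same result, every time, on the same machine.
assert monkey_result(27, 1) == monkey_result(27, 1)
print(monkey_result(27, 1), monkey_result(27, 2))
```

The exact numbers you see may differ from mine, but the reproducibility property is the whole point: anyone can rerun a seed pair and check the result.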

Here are my results. Note that the game seeds are not indexed from 1 to 20. By using different seeds for monkey and game, I reduce the chances of confusion when trying to reproduce these results.

Note that these results encode the result of the game: if there is at least one face down card remaining the monkey must have lost. I assume the monkey always wins if it manages to expose every card in the tableau (otherwise I could always put an asterisk next to a zero if the unthinkable happens).


We can make a few observations:

  • Out of 20 games, there are 5 hands where the Monkey scores one victory. For the other 15 hands the Monkey never wins
  • These five wins appear in only two columns. This is a statistical glitch and there is no logical reason why two games with same monkey seed and different game seed should be correlated. I blame the small sample size 😊 (about the only utility of columns is to assist in reproducing the raw results).
  • Some hands look really bad. For instance in game 214 the Monkey always has at least 20 face down cards at the end of the game.
  • Other hands look promising, for instance in game 484 the Monkey has at most 14 cards remaining

This is an example of an “exploratory analysis” (as opposed to explanatory analysis). I’m trying to get familiar with the data and I don’t have a specific hypothesis that I’m trying to prove. Of course, the more data you collect, the more chances of finding something interesting. For instance, I could have chosen to have 15 monkey seeds instead of 10, or 50 game seeds instead of 20.

Once you have completed your exploratory analysis, you might be able to form a concrete hypothesis about a spider program which you suspect to be dodgy. For instance, suppose that Shay Dee Games releases a new version of the Royal Game and we find that in every hand, either the Monkey consistently gets 10-or-less cards face-down or consistently gets 20-or-more cards face-down at the end of the game. We would suspect something is amiss, even if Shay Dee has the “correct” average win rate computed over all games. Of course, all this assumes we are able to determine the identity of every face-down card.

When testing a concrete hypothesis, things start to get technical. T’was brillig slithy toves gyre gimble wabe blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah Kolmogorov-Smirnov test blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah – or if you want the less technical version, yes we have formally proved that Shay Dee Games is indeed rigged.
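For the curious, the Kolmogorov-Smirnov test compares two empirical distributions by measuring the biggest gap between their cumulative distribution functions. Here is a bare-bones two-sample KS statistic in Python (scipy.stats.ks_2samp does this properly with p-values; the face-down-card samples below are invented purely for illustration):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the two empirical cumulative distribution functions (ECDFs)."""
    a, b = sorted(sample_a), sorted(sample_b)
    all_points = sorted(set(a) | set(b))

    def ecdf(sorted_sample, x):
        # Fraction of the sample that is <= x
        return sum(1 for v in sorted_sample if v <= x) / len(sorted_sample)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in all_points)

# Hypothetical face-down-card counts from two Spider Solitaire servers.
honest = [14, 20, 25, 8, 17, 22, 11, 19]
suspect = [5, 6, 30, 31, 4, 29, 7, 32]   # suspiciously bimodal
print(ks_statistic(honest, suspect))
```

A large statistic (close to 1) means the two distributions barely overlap, which is exactly the "either 10-or-less or 20-or-more" smoking gun described above.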

Note also I have omitted certain stats. For instance, I could have recorded the maximum number of empty columns obtained at any stage of the game, number of suits removed or the number of levels in Toy Blast I manage to beat before the monkey finishes the game. I have also omitted discussion of specific programs that I suspect to be biased. The important point is the reader has something to go on if he wishes to investigate the veracity of a specific Spider Solitaire program.

Until next time, happy Spider Solitaire playing!

I guess that got pretty pathetic

Some time ago I played with a Spider Solitaire app for iPhone. And I have to say its features are more bewildering than the infamous “At first condemn our very feeble excuse for everything that follows constant negative press (7)” tweet from 2017 US politics.

Earlier this year I could load the program and deal a hand. Unfortunately I couldn’t see any cards!  I eventually decided to update all my apps, but only because Gmail also needed an upgrade for some reason. A few days ago I played my first game and lost narrowly. On the very next game this happened:

(Text version):

Suits removed: Diamonds, Spades

Stock: 0 cards remaining

Column 1: Ks

Column 2:

Column 3: Kh,Qh,Jh,Th,9h,8h,7d,6d,5d,4c,3c,2c,Ad,8d,7c,6c,5c,4c,3c,2c,Ac

Column 4:

Column 5: 6h,5h,4h,3d,2d,As,2s,Ah,7c,6s

Column 6: ?,8c,Qd,Jd,Tc,9d,8s,7s,6c,5s,4s,3s,2h,Ah,9c,Kh,Qh,Js,Td,9s

Column 7: Kc,Qc,Jc,4d,7h

Column 8:

Column 9: ?,Kd,Qc,Jc,Tc,9c,8c,5h,4h,3h,2h,Ac

Column 10: Kc,Qs,Jh,Ts,9h,8h,7h,6h,5c

Ummm … excuse me? Do I not even get a chance to turn over the last card in Column 9? Like I said, this is more bewildering than the infamous “At first condemn our very feeble excuse for everything that follows constant negative press (7)” tweet from 2017 US politics. Don’t say you weren’t warned.

At the risk of insulting the intelligence of Captain Obvious, I have removed two suits and obtained three empty columns. I know from experience this does not automatically guarantee victory (especially if you are playing a certain server). But this game state is not even close to being one of the exceptions. I leave it as an exercise for the reader to verify the game is mathematically won regardless of the permutation of unseen cards.

Not every Spider Solitaire player has an above-average IQ, and game developers must cater to the whole market instead of a single expert player – yes, I get that. But whoever wrote this software doesn't understand the fundamentals of the game. Therefore I am deleting this app.

The good news is Humanity has not (yet) been completely surpassed by its Artificial Intelligence counterpart. While I cannot pretend to understand the detailed game-theoretic specificity of AlphaGo’s Move 37 against Lee Sedol I can still sleep at night knowing AI has some catching up to do when it comes to playing Spider Solitaire.

Out of curiosity I clicked the magic button to see what would happen. The app yanked a full set of clubs from various columns and moved that to the foundations. I’m not sure how this esoteric piece of knowledge will help me in future, but I guess knowing this fact can’t hurt either.

Just In Case You’re Really Bored: can you win the above game with only two empty columns instead of three?

Just In Case You’re Really Really Bored: can you win the above game with only ONE empty column instead of three?

Just In Case You’re Really Really Really Bored: Write a 5000 word essay explaining why move 37 at P10 is more awesome than my attempt to “improve” Grant Woolard’s Classical Music Mashup IV.

Is Joe Bloggs Ltd a legit company?

Following the success of my Spider Solitaire Sudoku puzzle, I think now is a good time to talk about estimating the legitimacy of a game product.

We’ve all been there. We happily downloaded the latest match-3 game. The graphics are slick, the music is polished and – well – the game turns out to be completely rot13(fuvg). Those with good memories may recall the Evony controversy involving some interesting images that had nothing to do with their game play. And the less said about those incessant Hero Wars ads on Facebook, the better.

There are some really shoddy products out there. The worst I’ve seen is a game called “Jewel Swap” by Shanghai New Dragon Restaurant Ltd. Yes, that name is not a typo or a cut-n-paste from the wrong document. A restaurant means what you think it means and it has nothing to do with the Fundamental Theorem of Calculus. One level had “6 purple gems” in the goal section increase to “7 purple gems” for no reason at all – and the player had to earn their purples. They also had a different game with the exact same levels, same music but different graphics. The lesson I learnt was some developers are so egregiously bad they don’t even know how to hide the fact they are cheating.

Here are some indicators of a good or bad game:

Location Location Location:

A company’s location must be easily searchable. If Joe Bloggs Ltd sells happy star widgets but I need to pay an arm, leg and sixteen hours of my life just to find its location then forget it. Similarly, if you were applying for a job at Joe Bloggs Ltd you ought to know where it’s located. Like it or not, we have a thing called “competition” and users can easily find a better product out there.

Check the reviews.

Ideally a review should mention something specific about the game, or at least give some impression the reviewer has actually played the game. Otherwise, it fails the lost-sense-of-smell-due-to-COVID test. In other words, if a review is favourable then ask yourself “is it plausible that Joe Bloggs was bribed to write a good review despite knowing nothing about the game play?” If it’s not plausible then there is a good chance the review is legit. If all the reviews mention nothing specific then the flag is coloured red. Reviews should obviously be independent of the company otherwise Joe Bloggs Ltd can cherry-pick the good ones.

Social Media presence:

A good game will have lots of positive user comments on Facebook or Twitter (or some equivalent). A great game will go the extra mile and find creative ways to engage users, e.g., an informal fan art competition. A good example of a great company is UsTwo (of Monument Valley fame). A bad game will have Joe Bloggs Ltd singing its own praises with very little interaction from users.

Does the game stink after a dozen levels?

This is a double-edged sword since it’s easy enough for poor players to throw around incorrect accusations of cheating. One interesting example is Backgammon NJ for the Android Phone. But if you know your match-threes (*) you can quickly get a sense of when something doesn’t add up. If the other dot points above point in the same direction, then the flag is definitely coloured some strong shade of red. Obviously “dozen levels” doesn’t really apply to Spider Solitaire, but you get the gist.

(*) or substitute suitable game-genre here

Does Joe Bloggs Ltd have form?

If the company has other bad games then that’s a strong indication something is off. Although I didn’t mention this in my paper, the company that developed the “rogue” Spider Solitaire software had an even worse “Mah-Jong Solitaire”. We all know how many words a picture is worth so I will dump this gem below and let the reader judge for himself. Of course, I am assuming the reader has elementary knowledge of Mah-Jong tiles.

Needless to say, the company that developed the Spider Solitaire server failed miserably on all the above dot points.

What are your thoughts about good or bad game products? Are any important indicators missing? Do you have any favourite examples worth sharing? Of course, favourite examples don’t have to be bad!

Game Over – Spider Solitaire wins!!!!

Okay, apparently Spider Solitaire level 18 on iPhone is too hard, even for yours truly.

Trevor seemed to be gaining the ascendancy with four consecutive wins on 7-10 (unfortunately not a horizontal Connect Four), but then it all fell apart on 11, 12, 13, 14.


The game on 12 was brutal. Ninja Monkey’s famous 1-suit random move algorithm estimates a win rate of only 0.04. Game 13 was even worse, with Trevor unable to determine the identity of seven (7) face-down cards even with unlimited undo. Assuming random guessing for these seven cards, Ninja Monkey reported a win rate of 0.06.

Game 14 shows an example where Ninja Monkey badly misjudges the win rate (0.70). Trevor never looked like getting a hole at any stage of the game with a critical shortage of Jacks until the final deal. I will let the reader examine the game shoe and judge for himself.

So the jury is still out. Is the iPhone spider solitaire software rigged? My gut suggests I need a better algorithm that can get a decent win rate at 4-suit solitaire. Perhaps this is an exercise for the reader, if you excuse the terrible cliché!


Okay so what is this Solitaire Cube thingy all about?

Solitaire Cube

It would be nice if the Solitaire Cube combined my talents of playing Spider Solitaire well and solving Rubik’s Cube (and if there is no cool music I can always play piano at the same time) but apparently they have tournaments where you can play for money. We’re not talking small amounts of virtual money plus a small percentage of dot com stock options indexed to inflation but real money.

Solitaire Cube is your regular iPhone app with the usual eye candy, cool music and/or sound effects – and best of all it takes the tedium out of shuffling the cards. It was developed by Tether Studios and powered by Skillz, an eSports platform that manages the $$$$.

Players are matched with opponents with similar skills in real-time and world-wide. You are scored according to certain rules (which will not be discussed in detail), so even if you can’t win you are still rewarded for partial achievements, such as exposing most of the cards. If you score more than your opponent, then you win the $$$$.

There is also a 5-minute timer, so the game ends as soon as you run out of time. Or you can quit early, cut your losses and take the bonus for time remaining. There is a practice mode where you have virtual currency (Z coins, minus the dot com stock options as described above). Once you are comfortable with practice mode then you can go to the Pro League.

There is something similar for Spider Solitaire Cube, but I described Solitaire Cube first because that seems way more popular (Klondike is much better known than Spider). Besides, I would expect the former and the latter to have much in common.

So that’s the theory, but don’t give up your day job just yet

If I got word of mouth from a trusted work colleague then I might seriously consider wanting in on this. But I heard about Solitaire Cube only because I play way too many match-three games on my mobile and can't be bothered getting rid of the ads.

There seems to be a growing scourge of low-quality games that are designed to cheat. For instance, a game might be advertised as free-to-play but in reality it is pay-to-win. Or the gameplay itself is lame. Or there is false advertising (think Evony). And don’t get me started on Hero Wars. Solitaire Cube seems to be no different: a simple search (hint: name the largest subsidiary of Alphabet Inc.) reveals a lot of negative reviews. Without going into detail here is a list of complaints:

  • Player’s score is less than it should be
  • Practice hands are much easier than Real money hands (sound familiar?)
  • Frequently crashes
  • Lousy customer service
  • Don’t know if opponents are humans or bots (or if they are same skill level as you)
  • Can’t review opponent’s video ergo don’t know if he legit won. Don’t even know if they play the same hands.
  • The vigorish is worse than Las Vegas
  • You have to deposit $10 into Paypal account, then they ask you for your location to see if you’re eligible for tournaments (wrong location -> no entry).
  • Fake positive reviews.


I'm not sure how many of these complaints are legit. For example, players are more apt to remember the one time the game crashed while they were doing well than the ten times it crashed while they were doing badly. But there are some indisputable facts. If you are betting 25 cents to win 42 cents then the vigorish is 16%, which is worse than Las Vegas. Nobody can argue with the math. And there are things that don't pass the sniff test, because IMNSHO game developers should not only be doing the right thing but be seen to be doing the right thing. I won't go into exhaustive detail; I will let the reader draw his own conclusions.
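For anyone who wants to check the vigorish arithmetic: two players each stake 25 cents, the winner collects 42 cents (stake included), and the house keeps the rest of the pot.

```python
# Verify the vigorish claim: each of two players stakes 25 cents,
# the winner collects 42 cents, so the house keeps the remainder.
stake = 0.25      # per player, in dollars
payout = 0.42     # to the winner (stake included)

pot = 2 * stake          # 0.50
house_cut = pot - payout # 0.08
vigorish = house_cut / pot

print(f"{vigorish:.0%}")  # 16%
```

Equivalently: your expected return on a 25-cent bet at 50/50 is 21 cents, so you lose 4 cents in 25, i.e. 16%. Nobody can argue with the math.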

Let’s test this software … or let’s not.

If you read this blog regularly, you will know how to test the Random Number Generator. But I believe it is not worth my time to do the same experiment, mainly because I need to set up a PayPal account. There are other issues, but the PayPal issue alone is enough to turn me off. I leave this as the proverbial exercise for the reader 😊