My Spider Solitaire experiment for September is complete

Earlier I promised a friend that I would play the 4-suit daily challenges on my iPhone. For each game I estimated the probability that a monkey playing random moves would win at the 1-suit level. I said an inversion occurred if, for any two games, the later one was harder than the earlier one (i.e. the estimated win rate for the monkey was lower).
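The post doesn't spell out the exact test, but a minimal sketch of the idea, counting inversions and comparing against random orderings via a permutation test, might look like this (the function names and the one-estimate-per-game list are my assumptions, not the author's code):

```python
import random

def count_inversions(win_rates):
    """Count pairs (i, j) with i < j where the later game was harder,
    i.e. its estimated monkey win rate is lower than the earlier game's."""
    return sum(1 for i in range(len(win_rates))
                 for j in range(i + 1, len(win_rates))
                 if win_rates[j] < win_rates[i])

def inversion_p_value(win_rates, trials=10000, seed=0):
    """One-sided permutation test: the fraction of random orderings with
    at least as many inversions as the observed (date-ordered) sequence.
    A small value suggests games really do get harder over time."""
    rng = random.Random(seed)
    observed = count_inversions(win_rates)
    hits = 0
    for _ in range(trials):
        shuffled = win_rates[:]
        rng.shuffle(shuffled)
        if count_inversions(shuffled) >= observed:
            hits += 1
    return hits / trials
```

Under the null hypothesis (no trend), every ordering of the same win rates is equally likely, so the shuffled copies give the reference distribution directly.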

I got a p-value of 0.0571, so the null hypothesis barely stood up. (Nevertheless, I do not regret the experiment: my data for July pretty much forced me to hypothesize the program was biased, because I did get p < 0.05, but only just.)

Due to time constraints, I do not wish to test my iPhone Spider Solitaire any further. I won’t be surprised if the random number generator is rigged, but it’s not worth my time to prove this. (If you are interested, I recommend testing more than one month’s worth of games. Dates are sorted with day as the primary key and month as the secondary key, so for instance March 17 < February 23, even though February occurs before March.)
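That day-first ordering is unusual enough to be worth illustrating. A quick sketch of the described sort key (the year is my placeholder; the app's actual implementation is unknown):

```python
from datetime import date

def server_sort_key(d):
    # Day is the primary key, month the secondary key -
    # NOT normal chronological order, per the behaviour described above.
    return (d.day, d.month)

games = [date(2021, 3, 17), date(2021, 2, 23)]
games.sort(key=server_sort_key)
# March 17 sorts before February 23, because 17 < 23 on the primary key.
```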

In the diagram below the downward trend is not obvious, but I suspect there were too many “near-perfect scores” at the beginning and not enough near the end. It is also interesting how close the result was: “changing one bad result” after the fact would have been enough to push the p-value below 0.05. The decision to accept or reject the null hypothesis was too close to call until the very last day of the month.

Note: for my Spider Solitaire paper in Parabola, I tested a different Spider server and the downward trend was much more obvious.

That’s it for now, till next time 🙂
