
Is an infinite scramble really "random"?

unixpickle (Member · Joined May 25, 2013 · Messages: 12 · Location: US)
If you scramble a Rubik's 3x3x3 with 20 moves (HTM), you are less likely to reach some configurations than others. For example, there are usually many more ways to reach a 6-move configuration than a 20-move one using exactly 20 moves.

I want to know if the probabilities "even out" after an infinite number of turns. Is there a uniform probability that any given configuration will be reached after doing an infinite number of turns? My intuition makes me believe that the probabilities are never uniform, but I do not know how to prove it.

EDIT: I am defining "random" in a very specific way. I consider a method of scrambling "random" if there is a 1/(4.3*10^19) probability that it will produce any given position from a solved state. Furthermore, by "infinite", I am talking about limits. I want to know the steady-state probabilities of the cube if you scramble it in half-turn metric.
 
You can't have an 'infinite' scramble because the cube would never stop scrambling ;)

Then, in your mind, replace "infinite" with "really long." I am really asking about the probability distribution that you approach as you continue to scramble the cube.
 
I'm inclined to say that probabilities will not be uniform.

I'd posit that rolling an uneven die an infinite number of times wouldn't give the same results as an even die. The results would only converge toward the die's actual probabilities as the number of rolls increases.

I don't actually know though. Hopefully someone who can math will answer.
 
maybe you could use a discrete Markov process with 43252003274489856000 states where the transition = do a random move, and try to find the equilibrium probabilities. if it's 1/43252003274489856000 then yay

that would probably be hard though

This is how I would *like* to prove it, but I don't think a 4.3*10^19 x 4.3*10^19 matrix is reasonable...
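A minimal sketch of the idea, assuming Python with NumPy and using the tiny ⟨U⟩ subgroup (4 states, moves U, U2, U') as a stand-in for the full cube, since the full 4.3*10^19 x 4.3*10^19 matrix is out of reach:

```python
# Toy version of the Markov-chain idea: instead of the ~4.3*10^19 cube states,
# use the <U> subgroup, which has only 4 states (U applied 0-3 times).
# Each of the moves U, U2, U' is chosen with probability 1/3.
import numpy as np

NUM_STATES = 4
MOVES = [1, 2, 3]  # U, U2, U' as "add k mod 4"

# Transition matrix: P[i, j] = probability of moving from state i to state j.
P = np.zeros((NUM_STATES, NUM_STATES))
for i in range(NUM_STATES):
    for k in MOVES:
        P[i, (i + k) % NUM_STATES] += 1 / len(MOVES)

# Start from the solved state and repeatedly apply one random move.
dist = np.array([1.0, 0.0, 0.0, 0.0])
for n in range(1, 16):
    dist = dist @ P
    print(n, np.round(dist, 4))  # approaches [0.25, 0.25, 0.25, 0.25]
```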
 
Yes! Assuming I interpret your question right.
So I will formulate the problem in Markov chain terms:
We have a starting distribution over states that is 0 for every state except the solved state, which gets a 1. (This means the cube is solved with probability 1 after scrambling with 0 turns.) Now we have a transition matrix (an array of numbers) whose entries are 1/18 exactly when the "row-state" and the "column-state" are one move apart, and 0 otherwise. (In other words: from any state, the probability of going to each state that is reachable in one move is 1/18, and the probability of going to any other state is 0.) Your question is: does our initial distribution converge (in distribution) to the uniform distribution over all states?

Now the ergodic theorem states: "If some power of the transition matrix has all entries strictly greater than 0, then every starting distribution has the same limit (as defined above), and that limit is an eigenvector of the transition matrix with eigenvalue 1."
The condition reformulated in normal language says: "Every state is reachable from every state in some number of moves". This is clearly true for that number being 20.
The conclusion says: "If we find a distribution of states that remains constant after applying one move, our initial distribution will converge to this".
The first thing we check is (of course) the uniform distribution over all states ("every position has the same probability in the end"). Let A be the number of cube states; then every position gets probability 1/A in the uniform distribution. When applying one move, a state X gives 1/(18A) to each of its neighbour states, but likewise receives 1/(18A) from each of its 18 neighbour states. This sums to 1/A for our state X. As this works for every state X, we have found a distribution that remains invariant under "applying one move", and the ergodic theorem implies that our initial distribution converges to it (the uniform distribution).

Note that we can even change the probabilities for choosing moves (e.g. we can make R moves 10 times as likely) and the result is still the same, as long as we ensure that every state is still reachable (within some fixed number of turns), of course.

EDIT: Oh, I got ninja'd a lot ;) while writing this post. I just wanted to add that it doesn't matter that some states are symmetrical when looking at them "from the outside". The key here is that the cube group itself is symmetrical (every state has 18 'entries' and 18 'exits', which makes every state equally symmetrical as viewed from the "inside"). Maybe it helps if you think about only the U layer with the moves U, U', U2. The state after "U2" has much more symmetry, but getting there from any non-U2 state has probability 1/3, exactly as the U state has probability 1/3 of coming after any non-U state.
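A quick simulation of that U-layer example (a rough sketch, assuming Python's standard library; not a proof): tally the final state after 50 random moves over many trials, once with fair move probabilities and once with U ten times as likely, to illustrate the Note that a biased move choice still gives the uniform limit.

```python
# Simulate the U-layer example: apply 50 random moves from {U, U2, U'} and
# tally the final state over many trials. The second run biases the move
# choice (U ten times as likely); the limit is still uniform as long as
# every state stays reachable.
import random
from collections import Counter

def scramble(num_moves, weights):
    state = 0                              # 0 = solved, k = U applied k times
    for _ in range(num_moves):
        move = random.choices([1, 2, 3], weights=weights)[0]  # U, U2, U'
        state = (state + move) % 4
    return state

for weights in ([1, 1, 1], [10, 1, 1]):
    counts = Counter(scramble(50, weights) for _ in range(100_000))
    print(weights, {s: round(counts[s] / 100_000, 3) for s in range(4)})
    # both runs print roughly 0.25 for each of the 4 states
```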
 
The first thing we check is (of course) the uniform distribution over all states ("every position has the same probability in the end"). Let A be the number of cube states; then every position gets probability 1/A in the uniform distribution.

Certainly you have proved something, but I am not sure it is what I was asking. What I get from your proof is that if I start off with a random position and I perform a scramble on it, I will have a 1/A probability of landing on any given position. This would seem apparent for any group. I still do not see how you proved that a random scramble performed on a solved cube will generate a state with probability 1/A.
 
I was in the middle of creating the thread "God's Second Number?" while you posted this. It took me a while to create because I stopped to eat halfway through.

Yes! Assuming I interpret your question right.

"Every state is reachable from every state in some number of moves". This is clearly true for that number being 20.
To be honest, I could not follow most of your post. I am not saying it is because of how you wrote it; it is just the terminology.

When you say "Yes!", are you saying yes, the OP was correct in his assumption, or yes, the probabilities even out? My intuition also leads me to believe they don't even out.

I am fairly certain that the second part of the above quote is incorrect. I do not think every position is reachable in exactly 20 moves.

Edit:
After reading whauk's edit, I was prepared to argue against it as it seemed too simple. I still don't understand how using just U, U', and U2 can be a good simulation of the whole cube, but I have arrived at the same conclusion in a different way.

If you did an infinite number of moves with the end result only affecting the last layer, you would still be more likely to get a case with 2 adjacent edges oriented correctly and the other two misoriented than you would to get a case with all four edges correctly oriented. There are also more states that have this orientation pattern. If you take the probability of getting any orientation case and divide it among how many states it has to cover, every individual state still has an equal chance to come up.

This logic should apply to the cube as whole.
 
I suppose the Markov analysis argument is correct if you scramble the cube using moves and their inverses. However, I do not think that the same argument holds if you scramble the cube using some other basis, like only R, U, L, D, F, and B moves (but no inverses or squares).
 
I assume you are familiar with the mathematical concept of limits?
I proved that if you start with a solved cube (but it works for any other starting position too) and perform N random turns, you will reach every state with the same probability in the limit as N goes to infinity. Or, sloppily formulated: "After an infinite number of random turns you reach each state with the same probability."
Or again let \( A \) be the number of cube states and \( P_N(X) \) the probability of being in state \( X \) after \( N \) turns (with \( X \) an arbitrary cube state).
Then
\( \lim_{N \to \infty} P_N(X) = \frac{1}{A} \).
I am fairly certain that the second part of the above quote is incorrect. I do not think every position is reachable in exactly 20 moves.
If a position is reachable in exactly 19 moves and the scramble leading there ends with, say, an R', you can replace that R' with R2 R to make it exactly 20 moves. Similarly, for a position reachable in exactly 18 moves, you can use this method to make the scramble 19 moves long and then 20 moves long.
I suppose the Markov analysis argument is correct if you scramble the cube using moves and their inverses. However, I do not think that the same argument holds if you scramble the cube using some other basis, like only R, U, L, D, F, and B moves (but no inverses or squares).
Then you have the problem that only positions without parity are reachable after an even number of turns, and only positions with parity are reachable after an odd number of turns, and we arrive again at the question of whether infinity is even or odd. This seems to be the only exception to the "Note" in my original post.
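A tiny model of that parity problem (a sketch, assuming Python with NumPy, tracking only the permutation parity of the pieces rather than full cube states): every quarter turn flips the parity bit, so the 2-state chain below oscillates forever instead of converging, although its running average still settles at 50/50.

```python
# Track only the permutation parity of the pieces: every quarter turn is an
# odd permutation, so each move flips the parity bit with certainty.
import numpy as np

P = np.array([[0.0, 1.0],     # from even parity, a quarter turn -> odd parity
              [1.0, 0.0]])    # from odd parity, a quarter turn -> even parity

dist = np.array([1.0, 0.0])   # the solved cube has even parity
running_sum = np.zeros(2)
for n in range(1, 11):
    dist = dist @ P
    running_sum += dist
    print(n, dist, np.round(running_sum / n, 3))
# dist flips between [0, 1] and [1, 0] forever; the average tends to [0.5, 0.5].
```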
 
I have edited my post. Also, I think the smallest N such that every state can be reached in exactly N moves is 22, but it could very well be anywhere between 21 and 24.

Replacing R' with R2 R to make it 20 moves doesn't really prove anything because you just did 2 moves on the same face in a row and they should cancel.
 
If you did an infinite number of moves with the end result only affecting the last layer, you would still be more likely to get a case with 2 adjacent edges oriented correctly and the other two misoriented than you would to get a case with all four edges correctly oriented. There are also more states that have this orientation pattern. If you take the probability of getting any orientation case and divide it among how many states it has to cover, every individual state still has an equal chance to come up.

I talk about states whereas you collect many states in an "orientation pattern". I don't see any connection here.
Replacing R' with R2 R to make it 20 moves doesn't really prove anything because you just did 2 moves on the same face in a row and they should cancel.
When we talked about making "random moves" I assumed we choose each one randomly without knowing what the last one was. After applying more moves than there are cube states, you clearly have some trivial identities in there (since some state has to have been traversed twice (is that even grammar?)). Do you wish to remove those as well? It gets really hard to apply "infinitely many moves" then...
 
I am still confused. Were you talking about an orientation pattern with your example of only doing U, U', and/or U2 to the last layer?

I also don't see where it was stated that the moves were random moves, but it was implied (otherwise you could do an infinite number of turns and cycle between just a few states). Would it not also be implied that you don't repeat the same move twice?

Also, you will only begin to traverse states more than once after performing a number of moves greater than the number of states if you are following a devil's algorithm. I fail to see how this applies here, though. How did we go from excluding turning the same face twice in a row to excluding repeated cube states?
 
Moritz has said most of the important things. The key words are "Markov" and "ergodic".

If you search old posts, you'll see me advocating "Markov Random State Scrambles": define a random position on a puzzle using the distribution of states you would get if you kept doing moves. There's no need to avoid "backtracking", to compute canonical sequences, or in any way hack things so that the probability evens out. The theory gets you to a steady state.
To avoid periodicity (e.g. when you're using QTM), use the average:

Let the probability distribution \( D_0 \) at time \( n = 0 \) be 100% on the solved state, and suppose your set of generators is \( G \).
For a distribution \( D \) and puzzle move \( g \), define \( D \circ g \) as the distribution where the probability mass at state \( S \) is moved to \( S \circ g \).

The distribution after \( n+1 \) random moves is:
\(
D_{n+1} = {1 \over |G|} \sum_{g \in G} D_n \circ g
\)

Then take the limiting Markov Random State distribution as:

\(
\lim_{n \to \infty} \mathrm{mean}(D_0, D_1, \ldots, D_n)
\)

Representation theory lets you find limiting distributions by simply raising a matrix to a power.
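A minimal sketch of that recipe (assuming Python with NumPy, and again using the tiny ⟨U⟩ group as the puzzle, this time with only the quarter-turn generators G = {U, U'} so the chain is periodic): each \( D_{n+1} \) is one matrix-vector product, and the averaged distribution is just the running mean of the \( D_n \).

```python
# The recipe above on a toy puzzle: the <U> group with quarter-turn generators
# G = {U, U'} only (so the chain has period 2 and D_n alone never converges).
import numpy as np

G = [1, -1]                                  # U and U' as "+1 / -1 mod 4"
P = np.zeros((4, 4))
for i in range(4):
    for g in G:
        P[i, (i + g) % 4] += 1 / len(G)      # D_{n+1} = (1/|G|) sum_g D_n o g

D = np.array([1.0, 0.0, 0.0, 0.0])           # D_0: 100% on the solved state
total = D.copy()
num_steps = 100
for n in range(num_steps):
    D = D @ P
    total += D
print(np.round(D, 3))                        # oscillates, e.g. [0.5 0. 0.5 0.]
print(np.round(total / (num_steps + 1), 3))  # Cesaro mean -> about 0.25 each
```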

The distribution of Square-1 states in the official scramble program is based on this. There are two "natural" ways to define a Square-1 move, which give you different distributions. The one that I think makes more sense turns out to be the same as if you assembled the cube by slotting pieces randomly (and starting over if it doesn't work).

This line of inquiry also gets you to a fun question: What is the limiting distribution for the 15 puzzle?

I want to know if the probabilities "even out" after an infinite number of turns. Is there a uniform probability that any given configuration will be reached after doing an infinite number of turns? My intuition makes me believe that the probabilities are never uniform, but I do not know how to prove it.
Your answer depends on what exactly you're asking.

Even if you're in HTM, any *finite* number of moves will not get you an exactly even distribution.

In the limit, you will get an even distribution, in a (mathematically) fairly well-understood way.
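One way to get a feel for the finite-N part on a toy example (a sketch, not a proof for the full cube; Python's fractions module assumed): on the ⟨U⟩ group with moves U, U2, U', every probability after N moves is an integer divided by 3^N, which can never equal exactly 1/4.

```python
# Exact-arithmetic check on the tiny <U> example: after N random HTM moves
# (U, U2, U' each with probability 1/3), every state's probability is an
# integer divided by 3^N, so it can never equal exactly 1/4 for any finite N.
from fractions import Fraction

dist = [Fraction(1), Fraction(0), Fraction(0), Fraction(0)]  # solved state
for n in range(1, 7):
    new = [Fraction(0)] * 4
    for i, p in enumerate(dist):
        for move in (1, 2, 3):                               # U, U2, U'
            new[(i + move) % 4] += p / 3
    dist = new
    uniform = all(p == Fraction(1, 4) for p in dist)
    print(n, dist, "exactly uniform" if uniform else "not exactly uniform")
```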
 
Even if you're in HTM, any *finite* number of moves will not get you an exactly even distribution.

So there is no number N where applying exactly N random moves to the cube will result in each state having exactly the same probability? How does that proof go?
 