The Layer by Layer Podcast

Kit Clement

Premium Member
Joined
Aug 25, 2008
Messages
1,631
Location
Aurora, IL
WCA
2008CLEM01
YouTube
Visit Channel
@Kit Clement was the 2x version intentional as a special release or did Andrew mess up?

Definitely an accidental release. We've already pulled it out of our podcast distributor, but your podcast app might have downloaded it locally already. If you like the 2x version, most podcasting apps have features that let you play any episode at whatever speed you desire.
 

ProStar

Member
Joined
Oct 27, 2019
Messages
6,246
Location
An uncolonized sector of the planet Mars
WCA
2020MAHO01
SS Competition Results
Would've been better as a special release, but I have it anyway. And the only reason I like it is because it's a limited edition; it's not as cool if you just play it at 2x yourself
 

ProStar

Member
Joined
Oct 27, 2019
Messages
6,246
Location
An uncolonized sector of the planet Mars
WCA
2020MAHO01
SS Competition Results
Andromedas "Andrew" Colorful Nathenson (formerly Andromedas Colorful Pockets) has been put on trial for releasing an incorrectly edited version of the "Layer By Layer" podcast. If found guilty, he will be sentenced to 5 years of straight podcast editing, with each fatal mistake adding another 6 months to his sentence.
 

xyzzy

Member
Joined
Dec 24, 2015
Messages
2,876
Some comments about episode 32 (wall of text warning).
The hypothetical about half of the scrambles is interesting. Jaap Scherphuis once pointed out that web browsers typically use pseudorandom number generators with only 128 bits of internal state, so for puzzles with huge numbers of states (mainly a concern for 4×4×4), even a scramble generator that would be truly random-state given a true random number source can, in practice, produce only a tiny fraction of all states.
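To put rough numbers on this (a quick Python sketch; the 4×4×4 state count used here is the standard published figure), even *indexing* every 4×4×4 state once takes more than 128 bits, so a 128-bit generator can't reach them all:

```python
# Number of reachable 4x4x4 positions (the standard published count, ~7.4e45).
N_4X4X4 = 7401196841564901869874093974498574336000000000

# Bits needed just to index every state once: ceil(log2(N)).
bits_needed = (N_4X4X4 - 1).bit_length()

# A 128-bit generator has at most 2^128 internal states, so at most
# 2^128 distinct scrambles can ever come out of it.
reachable_fraction = 2**128 / N_4X4X4

print(bits_needed)                  # 153: more than 128 bits are needed
print(f"{reachable_fraction:.1e}")  # ~4.6e-08 of states reachable at best
```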

TNoodle uses Java's SecureRandom, which is guaranteed to be a cryptographically secure PRNG, but this could mean something like 256 or 512 bits of internal state, which is still finite. A 7×7×7 scramble uses about 564 bits of randomness, so even a 512-bit CSPRNG wouldn't be "enough". Then again, random-move big cube scrambles already have significant biases (not necessarily in a meaningful sense, but more in the "we know for sure some things are many orders of magnitude off (but still very rare)" sense), which shadow whatever minor biases come from imperfect random number generation.
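A rough sketch of where a figure like 564 bits comes from (the move-set conventions here are my assumption, not from the post): a 7×7×7 scramble is 100 random moves, and if each move is drawn from roughly 6 faces × 3 block widths × 3 directions = 54 possibilities, that's about 575 bits before the restrictions on redundant consecutive moves shave it down toward the quoted ~564:

```python
import math

# Rough entropy estimate for a 100-move random-move 7x7x7 scramble.
# Assumed move set: 6 faces x 3 block widths x 3 directions = 54 moves
# (before "no redundant moves on the same axis" restrictions).
moves = 100
choices_per_move = 6 * 3 * 3  # 54

bits = moves * math.log2(choices_per_move)
print(round(bits, 1))  # ~575.5 bits before restrictions; the restrictions
                       # pull the true figure down toward ~564
```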

j_sunrise and Ben Whitmore brought up megaminx scrambles in the r/layerbylayer subreddit. The R++/D++ megaminx scrambling method was often used as an example of how you can use a tiny portion of the entire state space and still have an "essentially" uniform distribution. What's important isn't really that the distribution is uniform, but that it's impossible to efficiently distinguish the actual distribution from the uniform distribution—in principle one could go through all 2^70 possible megaminx scrambles to pick out certain biases, but the hope is that being able to reliably tell apart uniform sampling versus sampling among 70-move Pochmann scrambles should be "difficult".
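For a sense of scale (a Python sketch; the megaminx state count here is the commonly cited ~1.0×10^68 figure): the 2^70 possible scramble sequences cover an unimaginably small slice of the state space, yet that alone is not something any feasible statistical test could notice:

```python
# Each of the 70 R++/D++ moves is one of two choices, so there are at most
# 2^70 distinct scramble sequences (ignoring the final U move).
sequences = 2**70

# Commonly cited megaminx state count, ~1.0e68.
megaminx_states = 1.0e68

fraction = sequences / megaminx_states
print(sequences)          # 1180591620717411303424 (~1.2e21 sequences)
print(f"{fraction:.1e}")  # ~1.2e-47: almost no state is reachable, but no
                          # feasible test can detect that directly
```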

In the hypothetical situation where a random half of the scrambles just can't be generated, it would likely take somewhere on the order of billions of scrambles to reliably distinguish this from a true uniform distribution (cf. birthday paradox), which sounds like it should be good enough.
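The birthday-paradox arithmetic behind "billions" can be sketched like this (treating the 2^70 scramble sequences as the sample space; the constants are rough): the expected number of uniform samples before the first collision is about √(πM/2), and halving the space only makes collisions appear √2 times sooner, so reliably telling the two apart takes samples on that same order:

```python
import math

M = 2**70  # possible scramble sequences

# Expected number of uniform samples before the first collision
# (birthday bound): about sqrt(pi * M / 2).
full_space = math.sqrt(math.pi * M / 2)
half_space = math.sqrt(math.pi * (M // 2) / 2)

print(f"{full_space:.3g}")  # ~4.3e10 samples if all sequences occur
print(f"{half_space:.3g}")  # ~3.0e10 if half are missing -- only a factor
                            # of sqrt(2) sooner, so either way it takes
                            # tens of billions of scrambles to tell
```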

(For what it's worth, I'm almost certain that the 70-move scrambles we're using for megaminx have human-noticeable biases that can be teased out within maybe 100 scrambles, certainly much less than the billions mentioned above. I don't really have a solution to this besides "increase scramble length to 100+ moves", but I expect that that will not be received favourably, considering that people already complain about 70 moves being very long.)

Andrew already mentioned in the show itself some reasons why learning algs on mega has less impact than simpler/smaller puzzles like 3×3×3, and the one I agree with the most is the case count explosion. The number of cases typically grows exponentially with the number of pieces you want to solve; CLL on 3×3×3 (and variants like COLL, CMLL) has only 43 cases, but CLL on megaminx has almost 200 cases! (Usefulness of megaminx CLL aside; I'm just bringing this up as an example of combinatorial explosion.) The benefit per alg is much smaller, since each alg is useful less often.

It's also worth keeping in mind that there just isn't user-friendly software to generate algs for megaminx. Or big cubes, for that matter. You need to spend time messing around with ksolve or another solver, which is very unlike how for 3×3×3, you can paint the sides of a cube in Cube Explorer, click a button, and get hundreds of algs relatively quickly. Another thing is that many people treat megaminx (or big cubes) as side events, and are correspondingly less motivated to learn large-ish alg sets even if they're useful.

(I keep saying "or big cubes" because honestly I care more about big cubes than megaminx, but megaminx was what's mentioned in the show, so I'm focusing the discussion on that instead…)

Like Kit, I use comms to finish the last few corners on megaminx LL. I used to know a couple of L4C algs and optimised 3-cycle comms, but have since forgotten them due to disuse.

brododragon

Member
Joined
Dec 9, 2019
Messages
2,274
Location
Here
Great read!