Cuberstache
Member
- Joined
- May 7, 2018
- Messages
- 1,042
- Location
- Washington State, USA
- WCA
- 2016DAVI02
- YouTube
- Visit Channel
Glad to see another 69-minute episode
Power to thepeopleforums! (says the guy with 9 comments)
<3 the 2x speed version
I really love 'em, please keep them coming. I wish you all the best!

As many of you have noticed already, Episode 32 - Hyperdimensional Supercomputer is out!
Woah, it's ColorfulPockets! Get Discord pls and join the ZZ server (I know it has nothing to do with the podcast, but pjk won't mind).

Hello
Sent from my iPhone using Tapatalk
@Kit Clement was the 2x version intentional as a special release or did Andrew mess up?
Definitely an accidental release. We've already pulled it out of our podcast distributor, but your podcast app might have downloaded it locally already. If you like the 2x version, most podcasting apps have features that let you play any episode at whatever speed you desire.
with each fatal mistake costing him another 6 months added to his sentence.

Shouldn't it be a lifetime sentence, since a fatal mistake is a mistake that costs your life?
Great read!

Some comments about episode 32 (wall of text warning).
The hypothetical about half of the scrambles is interesting. Jaap Scherphuis once brought up how web browsers typically use 128-bit pseudorandom number generators, so for puzzles with large numbers of states (mainly a concern for 4×4×4), even if the scramble generator would have been truly random-state when using a true random number generator, the reality is that only a tiny section of the states can be produced at all by the scramble generator.
TNoodle uses Java's SecureRandom, which is guaranteed to be a cryptographically secure PRNG, but this could mean something like 256 or 512 bits of internal state, which is still finite. A 7×7×7 scramble uses about 564 bits of randomness, so even a 512-bit CSPRNG wouldn't be "enough". Then again, random-move big cube scrambles already have significant biases (not necessarily in a meaningful sense, but more in the "we know for sure some things are many orders of magnitude off (but still very rare)" sense), which shadow whatever minor biases come from imperfect random number generation.
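To put rough numbers on the mismatch between PRNG state and state-space size (a back-of-the-envelope sketch; the position counts are the standard published figures, rounded, and the 128-bit figure is the browser-PRNG example from above):

```python
from math import log2

# Approximate numbers of reachable positions (standard published counts)
STATES_4X4 = 7.4e45    # 4x4x4: ~7.40e45 positions
STATES_MEGA = 1.01e68  # megaminx: ~1.01e68 positions
PRNG_BITS = 128        # typical browser PRNG internal state, per the post

for name, n in [("4x4x4", STATES_4X4), ("megaminx", STATES_MEGA)]:
    need = log2(n)                   # entropy a truly uniform scramble needs
    log_fraction = PRNG_BITS - need  # log2 of the fraction of states reachable
    print(f"{name}: needs {need:.1f} bits; a {PRNG_BITS}-bit PRNG reaches "
          f"at most 2^{log_fraction:.1f} of the state space")
```

A 4×4×4 alone needs about 152 bits, so a 128-bit generator can reach at most about 2^-24 of the states, which is the "only a tiny section" point above made quantitative.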
j_sunrise and Ben Whitmore brought up megaminx scrambles in the r/layerbylayer subreddit. The R++/D++ megaminx scrambling method was often used as an example of how you can use a tiny portion of the entire state space and still have an "essentially" uniform distribution. What's important isn't really that the distribution is uniform, but that it's impossible to efficiently distinguish the actual distribution from the uniform distribution—in principle one could go through all 2^70 possible megaminx scrambles to pick out certain biases, but the hope is that being able to reliably tell apart uniform sampling versus sampling among 70-move Pochmann scrambles should be "difficult".
In the hypothetical situation where a random half of the scrambles just can't be generated, it would likely take somewhere on the order of billions of scrambles to reliably distinguish this from a true uniform distribution (cf. birthday paradox), which sounds like it should be good enough.
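The birthday-paradox estimate can be sketched numerically: k uniform draws from N states produce about k(k-1)/2N expected collisions, and halving the reachable states doubles that, so you need enough draws for the gap to stand out from statistical noise. (The 2^70 scramble-space size is taken from the megaminx discussion above; the multiplier 3 is an arbitrary illustrative choice.)

```python
from math import sqrt

N = 2 ** 70  # assumed size of the 70-move megaminx scramble space

def expected_collisions(k, n):
    # Birthday-paradox approximation: expected repeats among k uniform
    # draws from n equally likely outcomes.
    return k * (k - 1) / (2 * n)

# Halving the reachable states doubles the expected collision count, but
# the difference only becomes statistically visible once the counts sit
# well above their own noise (~sqrt of the count), i.e. k on the order
# of sqrt(2N) draws or more.
k = 3 * int(sqrt(2 * N))               # roughly 1.5e11 scrambles
full = expected_collisions(k, N)       # uniform over all states
half = expected_collisions(k, N // 2)  # only half the states reachable
print(f"k = {k:.3e} scrambles: ~{full:.0f} vs ~{half:.0f} expected collisions")
```

With k around sqrt(2N), roughly tens of billions of scrambles, you expect on the order of single-digit collision counts versus double that, consistent with "billions of scrambles" being the scale at which the two distributions become tellable apart.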
(For what it's worth, I'm almost certain that the 70-move scrambles we're using for megaminx have human-noticeable biases that can be teased out within maybe 100 scrambles, certainly far fewer than the billions mentioned above. I don't really have a solution to this besides "increase scramble length to 100+ moves", but I expect that will not be received favourably, considering that people already complain about 70 moves being very long.)

Andrew already mentioned in the show itself some reasons why learning algs on mega has less impact than on simpler/smaller puzzles like 3×3×3, and the one I agree with most is the case-count explosion. The number of cases typically grows exponentially with the number of pieces you want to solve; CLL on 3×3×3 (and variants like COLL and CMLL) has only 43 cases, but CLL on megaminx has almost 200 cases! (Usefulness of megaminx CLL aside; I'm just bringing this up as an example of combinatorial explosion.) The benefit per alg is much smaller, since each alg is useful less often.
It's also worth keeping in mind that there just isn't user-friendly software to generate algs for megaminx. Or big cubes, for that matter. You need to spend time messing around with ksolve or another solver, which is very unlike 3×3×3, where you can paint the sides of a cube in Cube Explorer, click a button, and get hundreds of algs relatively quickly. Another thing is that many people treat megaminx (or big cubes) as side events, and are correspondingly less motivated to learn large-ish alg sets even if they're useful.
(I keep saying "or big cubes" because honestly I care more about big cubes than megaminx, but megaminx was what was mentioned in the show, so I'm focusing the discussion on that instead…)
Like Kit, I use comms to finish the last few corners on megaminx LL too. I used to know a couple of L4C algs and optimised 3-cycle comms, but have since forgotten them due to disuse.
OK, so far the bell has been rung for the sail of a sailboat. What could that be?
Andrew was trying to get Kit to say "Sail", but he said "Snail", which still rang the bell.

He then said "Close enough" and ended the podcast, leading me to believe that he meant the sail on a sailboat.