shadowslice e
I'll start with a warning: much of this post will be speculative and purely based on commonalities which appear to exist in all decent methods (in my experience anyway). As such, I will structure it in 3 parts: prerequisite ideas, reasonably concrete and immediate consequences and then blatant and wild speculation. Feel free to stop reading and ignore me if you think I've gone off the deep end halfway through the post.
Firstly, this idea is based on an "inversion" of my top-down vs bottom-up method classification strategies. In summary, the basic idea is that most methods (and as far as I can tell, all good methods) are based around a "key step" which is usually semi-intuitive and solves a large part of the cube. Thus, the classification suggests that methods might be best classified according to this "key step". However, the exact nature of this classification is largely superfluous to this post.
Since making the original post, I have noted a couple of additional features of many key steps:
- many key steps have substeps within them, and so may be treated as a "mini-method" or "subpuzzle" in their own right.
- quite a few of these key steps rely on ideas which are fairly unique to them. In particular, they largely do not make sense in the context of the whole cube.
I'll break down what I think the key steps of the two most common methods, CFOP and Roux, are here.
For CFOP, the key step is F2L. Here, the substeps are obvious: each individual pair. Additionally, we have the idea of corner-edge (CE) pairs themselves. It is completely obvious to anyone who has ever done "proper" CFOP that the idea of building CE pairs is absolutely critical to F2L; if such an idea did not exist, CFOP would simply not be viable.
For Roux, the key step is LSE. As with F2L, the substeps are obvious (if we take the most common LSE method): 4a, 4b and 4c. For the idea, I propose that most good LSE methods are based around "opposite edge pairs" (such as UL/UR, UF/UB, DF/DB; you could also argue it's even possible to use more exotic pairs such as UF/DF or UB/DB). As with F2L, it is possible to solve LSE without appealing to this idea, but it is clear that most of the best LSE methods (particularly if you want anything at all intuitive) rely on it.
So, with the prerequisite ideas out of the way, we come to the question: why does this matter for method analysis/classification/development? As stated above, I'll begin with the clearest and most obvious consequences and get progressively more speculative.
- I propose that anyone interested in method classification or development should consider looking into these "local ideas" for various steps. Personally, I view them as falling between the very "global" ideas (EO, EP, CO, CP, transformation and blockbuilding) and the "most local" ideas, which are what you get when you apply those global ideas to single pieces on the cube. Not only do I suspect that these ideas will allow us to find deeper and less obvious connections between methods, but if such ideas are defined in generality, they could provide a wealth of ideas for optimising key steps (or even the entirety) of methods.
- Similarly, I believe some systematic study of the "tools" of methods would be beneficial to method ideas as a whole. This would be harder to do than the previous idea (insofar as it comes to creating new tools in abstract). However, like before, finding surprising similarities between the tools for key steps of various methods could spawn its own ideas for methods.
- Investigating "subpuzzles" of the cube may be a viable method of creating truly new methods (as opposed to the step-bashing which most attempts appear to rely on). In particular, using the aforementioned local ideas will, hopefully, free method creators from the constraints of the global variables. In essence, I propose a "meta-method": (note that the first 2 steps may require a lot of trial and error until we more concretely establish some basic rules as to what defines a promising subpuzzle)
- Choose a subpuzzle which consists of a set of initial states and a set of finished states. Basically, choose what bits of the cube you wish to preserve in the solving of the subpuzzle and which bits you want to solve.
- Try to find some unique ideas which allow the most efficient solution of the subpuzzle possible (personally, I have a bias towards doing this mostly intuitively using some "tools" such as F/B for orienting edges in ZZ or M' U* M for orienting and moving edges as in roux)
- If you find an elegant solution of the subpuzzle, see if you can design a method around it. The most basic way you'd do this is first by getting to the start state as quickly as possible (again with roux, notice how the steps seem to try to get to the lse state as quickly as possible) and then by getting from the end state to solved as quickly as possible.
- This is sort of an addendum to the previous idea. There are a few methods which appear quite good but do not seem to intrinsically have a "key step", and so do not immediately appear to fall under this model of creation. However, many of them could be found by adding a 4th step to the above process, where the steps are tweaked slightly to improve one or more non-key steps (potentially at the expense of the key step itself). Whether this trade-off is worth it is hard to say, but it should still be considered. A simple (and, in my clearly biased opinion, positive) example of this would be the conversion of Roux into 42, and a simple negative example would be Roux into PCMS. I suspect (without too much proof) that this tweaking will likely be most fruitful in the context of transformation.
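As a toy-sized illustration of the meta-method's first step, here is a minimal sketch in Python. Everything in it (the `Subpuzzle` class, the 4-slot puzzle, the single move "U") is hypothetical and not from any cubing library; the only point is that a subpuzzle is defined by a goal *set* (a predicate over states), not by a single solved state.

```python
from collections import deque

def apply_move(state, perm):
    # perm[i] = slot whose piece moves into slot i
    return tuple(state[perm[i]] for i in range(len(state)))

class Subpuzzle:
    """A subpuzzle: a move set plus a goal *set* given as a predicate."""
    def __init__(self, moves, is_solved):
        self.moves = moves          # dict: move name -> permutation
        self.is_solved = is_solved  # which states count as "finished"

    def solve_bfs(self, start):
        # Breadth-first search for the shortest move sequence
        # from `start` into the goal set.
        seen = {start}
        queue = deque([(start, [])])
        while queue:
            state, path = queue.popleft()
            if self.is_solved(state):
                return path
            for name, perm in self.moves.items():
                nxt = apply_move(state, perm)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [name]))
        return None  # goal set unreachable

# Toy 4-slot puzzle with a single 4-cycle move "U".
moves = {"U": (3, 0, 1, 2)}
# The subpuzzle only asks for slot 0 to be solved -- everything
# else is deliberately ignored, which is what makes it a *sub*puzzle.
sub = Subpuzzle(moves, lambda s: s[0] == 0)
print(sub.solve_bfs((1, 2, 3, 0)))  # → ['U']
```

Because the goal is a predicate, "which bits you want to solve" and "which bits you wish to preserve" are both just choices of what `is_solved` checks; widening the goal set is exactly what makes a subpuzzle cheaper to solve than the full puzzle.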
And so, here is where I potentially go off the deep end.
- The seeds of this idea (aside from the key step idea) actually came to me when I was attempting to devise a method of automating method-space search. In the course of that, my main frustration was that it is completely impractical to apply this approach to anything much larger than 2x2. However, using this meta-method, I believe it may be possible to instead automate the search for good subpuzzles. Without modification, this does have a few drawbacks. For example, while it's relatively easy to find subpuzzles which can be solved efficiently, it is much harder to guarantee anything about ergonomics or the existence of local ideas which enable good human-workable solutions. While you can absolutely get around this with a bit of human legwork, I would love for someone much smarter and more well-versed in ML techniques to take a crack at the problem.
- As I noted in the original post, the "key step" idea appears to only work in the context of 3x3. However, with the context of subpuzzles, it may be possible to apply the idea recursively to larger puzzles. That is, it might be reasonable to consider the "key step of a key step". Applied to reduction on big cubes, the "key step" would be the 3x3 portion of the solve and the "key step of the key step" is whatever the key step of your 3x3 method of choice is. While this example appears very obvious, hopefully it is possible to create interesting non-reduction ideas by only considering very small portions of the cube and building up methods from there. Further, it may be possible to create multiple "2nd order" key steps and combine them all into a single larger method. To be honest, this may well be too much to hope for, but this is the rampant speculation portion of this post.
- Continuing with the idea of multiple key steps in a single method, perhaps a truly revolutionary method could be created by combining multiple key steps into one method. Of course, what distinguishes a method with no key steps from one with multiple key steps is largely a matter of opinion. Indeed, since the idea of a key step was initially conceptualised as a step which defines the form of a method, this may even be a contradiction in terms. Further, this fusion would have to be very intrinsic; something which is in direct opposition to the mashing together of the various CFOP-Roux or Petrus-CFOP or ZZ-Roux hybrids.
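To make the automation idea from the first point above slightly more concrete, here is a hedged brute-force sketch: score every "solve only these slots" subpuzzle of a tiny 4-piece permutation puzzle by its worst-case move count. The two 3-cycle moves and all names are illustrative; on a real 3x3 this exact enumeration is hopeless, which is precisely where smarter search or ML would have to come in.

```python
from collections import deque
from itertools import combinations

# Two 3-cycle moves on a 4-slot toy puzzle (no inverse moves, to keep it small).
MOVES = {
    "A": (1, 2, 0, 3),  # 3-cycle of slots 0, 1, 2
    "B": (0, 2, 3, 1),  # 3-cycle of slots 1, 2, 3
}

def apply_move(state, perm):
    return tuple(state[perm[i]] for i in range(len(state)))

def reachable_states():
    # Enumerate every scramble reachable from solved (here: the 12 even
    # permutations, since 3-cycles generate the alternating group).
    solved = (0, 1, 2, 3)
    seen, queue = {solved}, deque([solved])
    while queue:
        s = queue.popleft()
        for perm in MOVES.values():
            n = apply_move(s, perm)
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return seen

def solve_len(start, targets):
    # Shortest move count to solve just the slots in `targets`.
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        s, d = queue.popleft()
        if all(s[i] == i for i in targets):
            return d
        for perm in MOVES.values():
            n = apply_move(s, perm)
            if n not in seen:
                seen.add(n)
                queue.append((n, d + 1))
    return None

states = reachable_states()
# Rank candidate subpuzzles by worst-case length; low scores flag
# potentially promising key-step candidates for human inspection.
for k in (1, 2, 4):
    for targets in combinations(range(4), k):
        worst = max(solve_len(s, targets) for s in states)
        print(targets, "->", worst, "moves worst case")
```

This only measures efficiency; as noted above, it says nothing about ergonomics or whether a human-friendly "local idea" exists for the subpuzzle, so any hits would still need manual vetting.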
But again, thanks for reading and I hope to hear your thoughts and expansions soon!
e: clarity and spelling