
On Key Steps and Meta-methods

shadowslice e

I'll start with a warning: much of this post is speculative, based purely on commonalities which appear to exist in all decent methods (in my experience, anyway). As such, I will structure it in three parts: prerequisite ideas, reasonably concrete and immediate consequences, and then blatant and wild speculation. Feel free to stop reading and ignore me if you think I've gone off the deep end halfway through the post.

Firstly, this idea is based on an "inversion" of my top-down vs bottom-up method classification strategies. In summary, the basic idea is that most methods (and, as far as I can tell, all good methods) are based around a "key step" which is usually semi-intuitive and solves a large part of the cube. Thus, methods might be best classified according to this "key step". However, the exact nature of this classification is largely superfluous to this post.

Since making the original post, I have noted a couple of additional features of many key steps:
  • many key steps have substeps within them and so may be treated as "mini-methods" or "subpuzzles" in and of themselves.
  • quite a few of these key steps rely on ideas which are fairly unique to them. In particular, they largely do not make sense in the context of the whole cube.
I'll break down what I think are the key steps of the 2 most common methods here: F2L for cfop and LSE for roux.
F2L (cfop): Here, the substeps are obvious: each individual pair. Additionally, we have the idea of corner-edge (ce) pairs themselves. It is completely obvious to anyone who has ever done "proper" cfop that the idea of building ce pairs is absolutely critical to f2l; if such an idea did not exist, cfop would simply not be viable.
LSE (roux): As with F2L, the substeps are obvious (if we take the most common lse method): 4a, 4b and 4c. For the idea, I propose that most good lse methods are based around the idea of "opposite edge pairs" (such as ULUR, UFUB or DFDB; you could also argue it's even possible to use more exotic pairs such as UFDF or UBDB). As with F2L, it is possible to solve LSE without appealing to this idea, but it is clear that most of the best LSE methods (particularly if you want anything at all intuitive) rely on it.

So, with the prerequisite ideas out of the way, we come to the question: why does this matter for method analysis/classification/development? As stated above, I'll begin with the clearest and most obvious consequences and get progressively more speculative.
  1. I propose that anyone interested in method classification or development should consider looking into these "local ideas" for various steps. Personally, I view them as falling between the very "global" ideas (eo, ep, co, cp, transformation and blockbuilding) and the "most local" ideas, which are what happens when you apply the aforementioned global ideas to single pieces on the cube. Not only do I suspect that these ideas will allow us to find deeper and less obvious connections between methods, but if such ideas are defined in generality, then they could provide a wealth of ideas for optimising key steps (or even the entirety) of methods.
  2. Similarly, I believe some systematic study of the "tools" of methods would be beneficial to method ideas as a whole. This would be harder to do than the previous idea (at least insofar as creating new tools in the abstract goes). However, like before, finding surprising similarities between the tools for key steps of various methods could spawn its own ideas for methods.
  3. Investigating "subpuzzles" of the cube may be a viable method of creating truly new methods (as opposed to the step-bashing which most attempts appear to rely on). In particular, using the aforementioned local ideas will, hopefully, free method creators from the constraints of the global variables. In essence, I propose a "meta-method" (note that the first 2 steps may require a lot of trial and error until we more concretely establish some basic rules as to what defines a promising subpuzzle; a sketch of step 1 follows this list):
    1. Choose a subpuzzle which consists of a set of initial states and a set of finished states. Basically, choose what bits of the cube you wish to preserve in the solving of the subpuzzle and which bits you want to solve.
    2. Try to find some unique ideas which allow the most efficient solution of the subpuzzle possible (personally, I have a bias towards doing this mostly intuitively using some "tools" such as F/B for orienting edges in ZZ or M' U* M for orienting and moving edges as in roux)
    3. If you find an elegant solution of the subpuzzle, see if you can design a method around it. The most basic way you'd do this is first by getting to the start state as quickly as possible (again with roux, notice how the steps seem to try to get to the lse state as quickly as possible) and then by getting from the end state to solved as quickly as possible.
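To make step 1 a touch more concrete, here's a minimal sketch (in Python, purely illustrative) of one way a subpuzzle could be written down as data: which moves the step restricts itself to, which pieces it must preserve, and which it must solve. The `Subpuzzle` class and its field names are my own invention for this post, not any existing cubing library's API, with roux's lse as the worked example.

```python
from dataclasses import dataclass

# Illustrative encoding of "step 1" of the meta-method: a subpuzzle is a
# choice of allowed moves, pieces that must stay solved, and pieces to solve.
@dataclass(frozen=True)
class Subpuzzle:
    name: str
    allowed_moves: tuple[str, ...]  # move set the step restricts itself to
    preserved: frozenset[str]       # pieces that must remain solved throughout
    targets: frozenset[str]         # pieces the step is responsible for solving

# Roux's LSE as a subpuzzle: <M, U> moves only; the first two blocks and all
# corners are preserved; the six remaining edges (and centres) are the targets.
lse = Subpuzzle(
    name="LSE",
    allowed_moves=("M", "M'", "M2", "U", "U'", "U2"),
    preserved=frozenset({
        "DL", "FL", "BL", "DR", "FR", "BR",  # F2B edges
        "DFL", "DBL", "DFR", "DBR",          # D-layer corners
        "UFL", "UFR", "UBL", "UBR",          # U-layer corners (post-CMLL)
    }),
    targets=frozenset({"UF", "UB", "UL", "UR", "DF", "DB"}),
)
```

Writing eoline, cross or petrus blocks in the same shape is a decent sanity check that an encoding like this is general enough for step 1.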
  4. This is sort of an addendum to the previous idea. There are a few methods which appear quite good but do not seem to intrinsically have a "key step", and so do not immediately appear to fall under this model of creation. However, many of them could be found by adding a 4th step to the above process where the steps are tweaked slightly to improve one or more non-key steps (potentially at the expense of the key step itself). Whether this trade-off is worth it is hard to say, but it should still be considered. A simple (and, in my clearly biased opinion, positive) example of this would be the conversion of roux into 42, and a simple negative example would be roux into pcms. I suspect (without too much proof) that this tweaking will likely be most fruitful in the context of transformation.
And here is where I deem it prudent to provide a warning that I am about to verge into the extremes of this idea, as it is in my nature as a mathematician to take any given proposition as far as possible. Before then, I'll provide a few closing remarks on the above.

Firstly, while I do propose a framework for designing methods, I by no means believe that it is the best or should be the only such "meta-method". Indeed, I believe quite the opposite; I think it is quite an imperfect system based on conjecture and observations, with lots of room for improvement. However, I do believe that it is the best option we have at the moment, as I believe we are reaching the limits of the bce system, which I believe is the only other existing meta-method. I would also love to see competitor meta-methods be developed.

I will additionally note that this framework does not inherently provide the community anything new; rather, it simply gives a different perspective. In this vein, I would liken it to constructor theory in physics or the use of matrices and quaternions in the context of mechanics.

Finally, I do actually think that many of the well-known method creators have been intuitively using this framework without specifically realising it. In particular, lse for roux and eoline for zz appear to be clear examples of this (indeed, I have a (possibly made up) memory of Gilles saying something similar when it comes to roux). Thus, by making it explicit, I hope that the approach will be opened up to many more people and that they will pick up the basic ideas much more quickly.

And so, here is where I potentially go off the deep end.
  1. The seeds of this idea (aside from the key step idea itself) actually came to me when I was attempting to devise a method of automating methodspace search. In the course of that, my main frustration was that it is completely impractical to apply this approach to anything much larger than 2x2. However, using this meta-method, I believe it may be possible to instead automate the search for good subpuzzles. Without modification, this does have a few drawbacks. For example, while it's relatively easy to find subpuzzles which can be solved efficiently, it is much harder to guarantee anything about ergonomics or the existence of local ideas which enable good human-workable solutions. While you can absolutely get around this using a bit of human legwork, I would love for someone much smarter and more well-versed in ml techniques to take a crack at the problem (a toy sketch of the easy half of this follows this list).
  2. As I noted in the original post, the "key step" idea appears to only work in the context of 3x3. However, with the context of subpuzzles, it may be possible to apply the idea recursively to larger puzzles. That is, it might be reasonable to consider the "key step of a key step". Applied to reduction on big cubes, the "key step" would be the 3x3 portion of the solve and the "key step of the key step" is whatever the key step of your 3x3 method of choice is. While this example appears very obvious, hopefully it is possible to create interesting non-reduction ideas by only considering very small portions of the cube and building up methods from there. Further, it may be possible to create multiple "2nd order" key steps and combine them all into a single larger method. To be honest, this may well be too much to hope for, but this is the rampant speculation portion of this post.
  3. Continuing with the idea of multiple key steps in a single method, perhaps a truly revolutionary method could be created by combining multiple key steps into one method. Of course, what distinguishes a method with no key steps from one with multiple key steps is largely a matter of opinion. Indeed, since the idea of a key step was initially conceptualised as a step which defines the form of a method, this may even be a contradiction in terms. Further, this fusion would have to be very intrinsic; something which is in direct opposition to the mashing together of the various cfop-roux or petrus-cfop or zz-roux hybrids.
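On the first point above, the "easy half" (scoring raw efficiency) is at least straightforward to sketch. The function below is hypothetical and mine alone: given a subpuzzle's goal states and a neighbour function, it breadth-first searches the whole state space outward from the goals and reports the mean and worst-case optimal solution length. It deliberately measures nothing about ergonomics or local ideas, which is exactly the hard part noted above.

```python
from collections import deque
from typing import Callable, Hashable, Iterable

def score_subpuzzle(
    goals: Iterable[Hashable],
    neighbours: Callable[[Hashable], Iterable[Hashable]],
) -> tuple[float, int]:
    """BFS outward from the goal states; returns (mean depth, max depth).

    Valid as a distance-to-goal measure whenever the allowed move set is
    closed under inverses (true for any cube move set), since then the
    neighbour relation is symmetric.
    """
    depth = {g: 0 for g in goals}
    queue = deque(depth)
    while queue:
        state = queue.popleft()
        for nxt in neighbours(state):
            if nxt not in depth:
                depth[nxt] = depth[state] + 1
                queue.append(nxt)
    depths = depth.values()
    return sum(depths) / len(depths), max(depths)
```

For something the size of lse this kind of exhaustive scoring is easily tractable; the impracticality complained about above only bites in the outer loop over candidate subpuzzles, which is where the ml would come in.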
And so, if you got this far in this mammoth post, thanks for reading! I look forward to reading your ideas with regards to strengthening and clarifying the meta-method, and any additions to my observations. Unfortunately, the only reward I can provide is a few final notes. Firstly, I will obviously be attempting to make this idea more concrete. However, my focus will likely be in the context of computer automation. My hope is that this will enable clearer avenues of exploration for humans as well. I may post from time to time, but if you want to talk to me about it, it'd probably be best to message me on discord or strike up a conversation on the rare occasion I'm streaming on twitch. Of course, I can't guarantee anything very quickly since I now need to work around my job amongst other things. Secondly, perhaps someone could come up with a meta-meta-method which allows the development of meta-methods from relatively basic observations such as my key step idea. This is mostly me being silly, of course, but I still think it'd be fun to see. Lastly, as mentioned in point 4, some sort of modification of methods developed by this method may indeed be possible and even widely applicable. If this 4th step of the meta-method can be pinned down, I'd love to hear about it. In fact, I think this is actually the part of the meta-method which many people are already most familiar with, particularly in the case of cfop or zz. Unfortunately, I can't really provide a concrete way of doing this at all.

But again, thanks for reading and I hope to hear your thoughts and expansions soon!


GodCubing

So basically substeps are defined by pieces solved, pieces preserved, and EO/EP/CO/CP preserved and solved. Then how is it determined that said substep is ideal?
 

shadowslice e

> This sounds to me a lot like the design principles behind kirjava's duplex method - getting to a good state as quickly as possible.
I think the general idea is kind of similar but, if my recollection of duplex serves me well, that method is more to do with finding the smallest and fastest combination of algs to solve LL, whereas this is much more general (and much more focused on semi-intuitive ideas, (almost) avoiding algs as much as is reasonable). That said, as I noted in the post, I do believe that many prominent method devs (including kir, by my guess) intuitively use more or less the same system.

> So basically substeps are defined by pieces solved, pieces preserved, and EO/EP/CO/CP preserved and solved. Then how is it determined that said substep is ideal?
For now, the "goodness" of a given subpuzzle is pretty much up to the designer. Personally, my criteria for how good a step is are how easy the lookahead is, how ergonomic the triggers are, and how efficiently the step can be done (which is probably some slightly modified pieces/move sort of thing). I'd lean towards having something which balances all three quite well, but there are certainly others who would prefer to go for something very ergonomic with easy lookahead in order to max out tps.
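As a toy illustration of that balance (entirely made up by me; the weights and the 0-1 scales are arbitrary), the trade-off can be written as an explicit weighted score rather than a gut feeling:

```python
def step_score(
    lookahead: float,        # 0..1: how easy the step is to look ahead through
    ergonomics: float,       # 0..1: how comfortable the triggers are
    pieces_per_move: float,  # raw efficiency of the step
    w_look: float = 1.0,
    w_ergo: float = 1.0,
    w_eff: float = 1.0,
) -> float:
    # Hypothetical scoring sketch: a tps-focused designer would crank
    # w_look and w_ergo; an efficiency purist would crank w_eff.
    return w_look * lookahead + w_ergo * ergonomics + w_eff * pieces_per_move
```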
 