
High level speedsolving, the convergence of methods and method development

PapaSmurf


Preface​

Firstly, it's quite pretentious to include a preface on a forum post, I know. Secondly, all these thoughts have bubbled away for a long time, but this video by Dylan Miller analysing Yiheng's insane 4.25 average has brought it all into focus for me. So here are a lot of thoughts which I hope are helpful/interesting.

Introduction​

If you're somehow unaware, there has been a lot of thinking about speedsolving methods over the years (see: cubinghistory.com), which has tended towards 4 main method types: layer by layer (LBL), blocks, edge orientation (EO), corners first (CF). In reality, any good method is really just a combination of these concepts. Yes, there are also edges first and corner orientation, but these ideas have mostly been insignificant in method development (except maybe for ECE, which is still arguably CF though). My main aim with this post is to show how the big 4 and variations all really depend on these concepts and are themselves combinations of these. Then I'm going to discuss how ideas from other methods can be imported to influence and optimise each one and how they converge to, it seems, 2 "method centres". Finally, I'll make a comment on how this then applies to method debates, our own speedsolving and other seemingly unrelated things.

Section 1 - how concepts combine to create methods​

The method that almost everyone learns first is LBL. You solve the cross, then the corners (and so the first layer), then the second layer edges, then the last layer in manageable chunks. It's easy to see and make progress, and there's nothing too complicated apart from having to learn a few short algorithms. You cube for a few days, get faster, then cube for a few more days and think "what's next?" The next obvious step is to optimise what you already know. You do this by learning "pure" CFOP. You get better at cross, combine the next two steps into one and learn more algorithms. However, in optimising you lose a little bit of LBL, as you have blurred the lines between the first and the second step. (You could consider a method where you instead combine cross and corners, then do the second layer edges. This kind of approach is worse, hence no one uses it. The extreme of isolating the first layer means that, to optimise, you should combine the second and third layers, giving L2L.) In combining these two steps, you have introduced an element of block building. Instead of solving individual pieces, you are solving two at a time in corner-edge blocks we call pairs. So to optimise LBL, we have to relax the rule of working in layers and introduce blocks.

We're now going to change tack and look at blocks. Try solving a cube intuitively using only blocks. You can probably do it, but it won't be quick. But that's because a (read: the) point of blocks is to bring you efficiently to a point where you can do some quick, predetermined moves (called algorithms) and solve the cube. If you think about it, that's kinda what CFOP is - you build a 2x3x3 and solve in two algorithms. There are other ways you can do blocks though. The family of Petrus-like methods (such as Petrus, APB, LEOR etc., which, imo, are actually just the same method) brings you to a state of (EO)2x2x3, and you go from there to build the rest of F2L to leave you with a LL. There are also other ways to do blocks, in a Francisco style for example, but all of these tend towards Petrus anyway. For a very long time, CFOP users have implemented more blocks through their use of extended cross (xcross) and other similar ideas. This doesn't mean that top level CFOP solvers use Petrus, but there are certainly ideas common to both methods that mean they expand beyond the boundaries of strict CFOP or 2x2x3->EOF2L. Another idea that is now quite common in CFOP is to avoid diagonal last 2 pairs, which is equivalent to building a 2x2x3 and a D layer edge. Petrus suffered from certain factors that meant it was never a method at the very top level (including its user base, some missed optimisations and a lack of development), which similar methods normally try to fix; but top level solvers, instead of fixing Petrus, improved their own method by bringing Petrus ideas in, thereby making CFOP slightly less "pure", but faster.

Mr ZZ really wanted a reduction method. You go from <RULFDB> gen to <RUL> after the first step of EOLine, then to <RU> after solving the left block, in a way that would reduce the rest of the cube. When that didn't look feasible, the method vision was adjusted to EO (and middle layer) first, then the rest of F2L by blocks (kinda like the left and right layers, although that's probably not strictly true), then LL. ZZ is really a combination of EO first, LBL and block building. If you solved the left block first, you would essentially have a Petrus solve. You could mix sides. You could really do what you wanted as long as EO wasn't broken, hence the huge number of variants. However, around 2018, people really started to realise that the ergonomics weren't great, so they imported an idea from CFOP (one that had been used before, but never by the vast majority, nor ever endorsed by the ZZ community) - cross. ZZ went from an LBL middle-outwards method to LBL bottom-to-top, whilst still retaining its signature EO. This is another example of a method expanding to improve, rather than sticking to a rigid and formulaic way of doing things.

CF was the first method used to solve the cube, was the first method used to win a worlds, and is dominating the OH scene today. Yes, Roux is a CF method. It has blockbuilding integrated in (in a way that makes it a blurry LBL method - left layer, right layer, middle layer), but for the sake of this, you should think of it as a CF method. There are other CF methods out there (LMCF, Skis, Waterman, PCMS), but whenever you try to optimise any of them, you get back to Roux. It just has the best ending of all of them and also the best way to get to that ending. (You could apply this CF approach to CFOP and end up with CFCE, another way to do LL, but this hasn't taken off as there is about 0 benefit over CFOP and everyone already knows the algs.) Roux is already so good that it will probably take someone with Yiheng levels of potential before it requires further optimisation.

Hopefully you can see that all these methods have a main concept driving them (LBL for CFOP, blocks for Petrus, EO for ZZ and CF for Roux) whether knowingly or not, then extra concepts added on top to provide a proper structure and to optimise. This is really the TL;DR for the next part.


Section 2 - optimising methods​

\[ t = t_{\text{start}} + t_{\text{stop}} + \frac{T}{\left\langle \text{Tps} \right\rangle} \]
where t is time, t_start and t_stop are the times taken to start and stop the timer, T is turns and <Tps> is average Tps. This equation governs speedcubing at a fundamental level. If your Tps tends to infinity, your time will tend to the amount of time it takes you to start and stop the timer legally. Equally, if your solve uses 0 turns, you will get the same result. The whole point of speedsolving is to minimise the time taken, and therefore to minimise this equation. You could get a bit faster by optimising timer starts and stops (the whole sliding debate is really about this part of the equation), but the place where the most progress can be made, for all of us except maybe the top 2 2x2 solvers, is the fraction.
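As a quick worked example (the numbers here are assumed purely for illustration): a 55-turn solve at an average of 10 Tps, with 0.10 s to start and 0.15 s to stop the timer, gives
\[ t = 0.10 + 0.15 + \frac{55}{10} = 5.75\ \text{s}. \]
Lifting <Tps> to 12 (mostly by cutting pauses) saves about 0.9 s, while cutting 5 moves at the same Tps saves 0.5 s; both are attacks on the same fraction.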
Let's probe that fraction from a CFOP point of view, starting with the OP. Some algs are faster than others, maybe due to higher potential Tps or fewer moves, so you would use those (compare L R U2 R' U' R U2 L' U R' to the normal J perm). Another easy thing to do is to drill the algs so that the time it takes you to do them decreases. You can also force better algs, maybe by inserting the last pair with a sledge to change the EO, or by using an alternative but near-equal OLL (R' U2 R U R' U R vs R U R' U R U2 R' - they do different things to the EP) to force a different PLL. Both of these are ways to optimise your LL, but the last one is something you do the step before (influencing) and the first one is what you do during the step. Recognition is a factor that doesn't fit neatly into either camp (you can sometimes recognise the OLL during LS, but sometimes you will have to recognise it after LS); whatever you do, though, reducing the time to recognise will also speed up the alg steps, as you are increasing the average Tps. Another way to improve LL is to do 2 steps in 1 with a 1LLL. A PLL skip is really just the case where you know the 1LLL for that OLL case without realising it. Similarly, an OLL skip is where you know the OLS for that LS case without realising it. If you learn ZBLL, you know all the 1LLL cases for the EO solved OLLs (OCLL). This reduces the time because it reduces the movecount and often the recognition time as well. So there are multiple ways to optimise the alg steps, but they are all ways to decrease that fraction.
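To make the "some algs are faster than others" point concrete, here is a minimal Python sketch of the comparison. The Tps numbers are my own illustrative assumptions (time yourself to get real ones); the algs are the two J perms mentioned above.

```python
# Minimal sketch of the T / <Tps> trade-off between two algs.
# The Tps figures below are assumptions for illustration, not measured values;
# the point is only that the better alg is the one with the smaller fraction,
# not automatically the shorter one.

def exec_time(alg: str, tps: float) -> float:
    """Estimated execution time: movecount divided by turns per second."""
    return len(alg.split()) / tps

jb_standard = "R U R' F' R U R' U' R' F R2 U' R'"  # common 13-move Jb perm
jb_alt = "L R U2 R' U' R U2 L' U R'"               # the 10-move alternative above

for name, alg, tps in [("standard Jb", jb_standard, 11.0),
                       ("alt Jb", jb_alt, 10.0)]:   # hypothetical burst Tps
    print(f"{name}: {len(alg.split())} moves @ {tps} Tps = {exec_time(alg, tps):.2f} s")
# standard Jb: 13 moves @ 11.0 Tps = 1.18 s
# alt Jb: 10 moves @ 10.0 Tps = 1.00 s
```

Swap in your own timed Tps per alg and the same arithmetic tells you which one is worth drilling; it also shows how a longer alg can still win if you can spam it fast enough.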

The CF bit of CFOP can again be improved in 2 different ways: decrease the movecount or increase the turn speed. Xcrosses are good because they are often more efficient than cross->pair and they also mean that you've planned further, which decreases your pausing and in theory means you can turn faster. However, pure blockbuilding is often bad, as you have to think way too much and therefore you sacrifice Tps. So there are certain guidelines that someone using CFOP can follow that will reduce their pauses, keep their movecount low(ish) and their Tps high. I'll go through a few of these now.

The first one is to solve BL first (for right handers, BR for left handers), as it reduces the number of L moves that you need to do with your weak hand, hence increasing the raw Tps, and also fills in the biggest blindspot, which makes lookahead easier for a sacrifice of 0 moves and therefore increases <Tps>. You should also plan this in inspection if you want to actually be good, reducing the mental load during the solve and therefore increasing <Tps>.

The second one is to avoid having diagonal pairs. This is because it makes lookahead worse, any rotation will put a pair into BL, and it requires grip shifts to solve both pairs in most cases. It isn't necessarily a cardinal sin to have this, as you could have a 3 mover into both slots and you'd obviously choose that over a mid solution into non-diagonal slots, but it's definitely something to avoid for the most part. To phrase this in the language of the equation, diagonal pairs will reduce <Tps> on average, but there are special cases where, either because there is no effect on <Tps> or because the movecount reduction is great, you can accept them. This one is Petrus-y, as you are essentially solving a 2x2x3 (as I mentioned above), so you are kinda blockbuilding in a specific way.

The third one is to try to orientate edges, or at least be aware of their orientation. This is beneficial because, if you know which edges are good or bad, you can plan your solution around that to reduce rotations (which, in time, are equivalent to a move or two) or eliminate them entirely. Also, if all edges are good, you can really max out your burst Tps (a bit like driving on a straight in motorsports - you can really push that accelerator). Moreover, with F2L EO solved, diagonal pairs matter less, as you're not going to need to rotate and you can focus on turning quickly (although having adjacent pairs remaining is still preferred). If you know that all your edges are good, you can utilise ZBLL, and you get minor lookahead gains during F2L, as you can filter out the U edges because they will all have the U colour on a certain face. This is highly ZZ-like, but the benefits when you need to shave .1 seconds off a 4.25 second average are huge if you can implement them. All of these will have the effect of increasing <Tps>, although sometimes it may come at the cost of 1 or 2 moves. So it becomes worthwhile if you can really take advantage of that possible increased Tps ceiling.

These three rules (guidelines) get progressively more important as you get faster, but so does learning when to break them. I would suggest that CFOP has not yet reached the point where EOCross (or at least partial EOCross) becomes the norm at the highest levels, but it is surely nearing it. I'm also going to predict that there is another point beyond that where that rule will be allowed to be broken as solutions become more and more optimised, in the same way that people generally try to avoid diagonal pairs, but sometimes break that rule for something faster that may look more primitive.

To summarise this section: all ways to (meaningfully) improve really come down to either increasing the average Tps or decreasing the movecount, or playing about with both of them in a way that ends up reducing that fraction (such as choosing a longer alg that you can do faster). In CFOP specifically, at the highest levels, ideas from ZZ and Petrus are (potentially unknowingly) creeping their way into solves, as they allow for optimisations that raw CFOP requires if times are going to decrease.

Meanwhile, Roux just carries on doing its own thing...

Section 3 - what does any of this mean?​

I started off by pointing you to all the method development that has gone on for the past 50 years. A small amount of that development has had a large impact (namely CFOP), and the more of that development you include, the lower the impact has been. So, in order of impact (on speedsolving), you could have a (non-comprehensive) list that looks like this: CFOP, Roux, ZZ, Petrus, ZB, LEOR, 2 gen reduction, columns, COLL+1, NCP ZBLL recog (this is very cool and you should check it out), etc. The first few you have definitely heard of; the further down the list you go, the less likely it is that you have heard of it or that it's used. However, we are at the point where some good ZBLS algs are being used to force ZBLL (so ZB), maybe we'll see certain solutions that look somewhat like LEOR because there's a solved square on the scramble, or maybe 2gr will even be a used technique in the far flung future. There's no way we can know for sure without a time machine, but my point is that there are lots of techniques that have their own little niche and can all be used to optimise these base layers (of CFOP and Roux in the current paradigm) in certain circumstances.

With this in mind, surely all method debates are pointless beyond a conversation between LBL and CF type methods? If anything beyond these two just gets absorbed, then surely we should really be talking in those terms instead of CFOP vs ZZ or some other combination. CFOP is tending towards ZZ and ZZ towards CFOP, and recognising that changes how we should think about the way we solve. There's also a finer point in here about method development. There are now two ways to think - do we create another method centre (you could probably argue that DR is this) and see how much it can be optimised, and therefore compete with the LBL and CF methods, or do we try to optimise the LBL/CF methods? That's a question for you to think about. There's also a comfort that if you've had a very good idea that does save a move or two, such as non-matching (see here and here), it will probably be implemented eventually. Taking the case of non-matching, pseudoslotting is just that. You make it so the first layer doesn't match the second layer, but with the advantage of saving some moves. I can see this being applied to conjugating L or R eventually, forcing people to learn LL recognition methods that can handle it. But whatever happens, these ideas are now out there for whoever wants to take advantage of them, be it an existing cuber or someone who hasn't been born yet.

So, to apply this to your solves: you have the equation, so go and reduce it. Don't think in terms of methods per se. Rather, think in terms of the paradigm, the method centre. Work through how you solve the cube and optimise each step as much as possible. But when you have reached that point, don't think that's all you can do. Instead, think about how you can blur the lines of your method to help you reduce the time. Maybe you can implement EO ideas into your CFOP solves. Maybe you could learn how to do non-matching blocks in your Roux solves so that you can take advantage of other blocks during SB. Whatever the case may be, if you are a speedsolver the eventual aim is to get that time as close to 0 as possible, so do whatever you can to do that.


Conclusion​

Hopefully I've made sense. Hopefully this has some sort of impact. Hopefully there's a little less looking down on different ideas (like EO or blocks or Roux or whatever). If anything, this has been therapeutic so it has already done most of its job. However, if it generates discussion, even better. Thank you for reading it and have a nice day, whoever you are wherever you are!

I just wrote a 3000 word essay (3063 precisely) on speedcubing stuff. Wow. I've struggled to write that much for any paper or essay I've ever needed to write. And I did it in 2 days. Truly, congrats for reading it all. I hope it actually means something to you. By the way, that's almost 5 pages of A4 on font size 12. I'm seriously impressed with myself and also a little worried that I could do this relatively easily, but I struggled on things like, oh I dunno, my literal dissertation! Anyway, I'll actually stop now. Thanks! :)
 
Meanwhile, Roux just carries on doing its own thing...
lol true

I agree with a lot of what you say. Vanilla CFOP can give quite good times, but there are also many techniques that top solvers use to blend steps or influence later steps.

This is true for Roux as well:
  • Influencing DR edge during first block
  • Controlling second block edges to make sure they are on top & oriented
  • EOLR (this is arguably a part of vanilla Roux at this point with how widespread it is)
  • EOLRb
  • 4c recognition methods
 
These are all good examples, especially the EOLR one. In the same way that no one bats an eyelid when someone does an xcross even though it's not technically CFOP, EOLR is technically not Roux, but obviously no one cares because it's just better. I didn't delve into Roux too much because I'm just not that good at it and I don't know loads about it, but there could be huge potential for transformation techniques. I'm especially thinking about 42, which makes Roux closer to Waterman/LMCF. This is the other kind of blurring - between methods rather than between steps - that we might see more of in the future.

as someone who does write for fun (both creative writing and reportage) this made me smile
writing should be something people enjoy, not dread
awesome stuff!
Thank you! I'm the opposite, but I definitely should do it more. Whenever I have to write something I groan a little inside, but I know it could be fun, as this has shown.
 
Great work.

Looking at convergence on the corners first or corners first + blockbuilding hybrid methods:

The three main related methods I see are Nautilus, Roux, and 42. Nautilus is the algorithmic, TPS focused method and ends in L5E. Roux is the semi-intuitive method and ends in L6E. 42 is very open with more freedom of movement and ends in L7E. Could these eventually converge?

Pseudo use is also a question and these methods are where the majority of the development of pseudo techniques has occurred. There are four main types of pseudo.
  • Non-Matching Blocks: One layer offset by a single turn. For example, a different second block in Roux.
  • Transformation/Conjugation: Intentionally positioning pieces in such a way that it reduces a larger group of pieces to a known set of cases, solving those cases, and ending in a pseudo state. Like the technique used in 42's CCMLL step.
  • Retroactive Solving: The technique used in EG, TCLL, and ACMLL. Groups of pieces are formed without regard to orientation or permutation. They are then solved at a later time while solving another step.
  • Solving to Pseudo: Intentionally reducing from a normal state to a pseudo state, such that the ending pseudo state is advantageous for the succeeding steps. This is the technique used in SL5C.

Nautilus: This method's goal is removing blind spots and automating steps as much as possible. It primarily ends in L5E, but it can also expand to use L5C and even an alternate L6E. So there is openness and a lot of room for advancement. All four of the pseudo techniques can be used in Nautilus to further reduce the move count and improve ergonomics.

Roux: The goal of Roux was to have a completely intuitive method. It comes close, having just one algorithm-based step. Roux sits in a kind of strange in-between. It may not be able to effectively use L5E without morphing into Nautilus, and it can't use pure L5C or end in L7E without becoming 42. So this presents some trouble, method-name-wise, if Roux users want to eventually use L5E or L7E depending on the scramble. It can, however, use two of the pseudo techniques. Non-matching blocks is the first one. Taking advantage of formed or close-to-formed pairs and solving a different second block can reduce the move count. Retroactive solving from ACMLL is the other one. This is similar to using EG or TCLL on 2x2, and all of the same benefits apply: deeper and easier inspection, fewer moves, and improved ergonomics.

42: This method is all about solving more pieces simultaneously. 42 can directly end in L7E, or it can contract in the L7E step to end in L6E or L5E. 42 contains a variety of ways of solving L5C: direct L5C, SL5C, and it can also contract to L4C (CMLL-like) through CCMLL. There is a lot that can be done with this method. Three of the four pseudo techniques apply well to 42. Retroactive solving is the only difficult one, because it may mean a huge increase in the number of algorithms to learn.

People often say that Roux doesn't have any big advancements. That isn't completely true, but it also isn't completely false. The second block of Roux kind of locks it into a single path (versus the openness of Nautilus and 42). I like to imagine a Roux future where CMLL is integrated into the first two blocks. The two candidates here are ACMLL and SBC. Though I have my questions about SBC: its L5C recognition, its 324 algs, the mandatory same second square every solve (unless you want to learn 324 additional algs for the back pair), and other things.

Even with the use of those advancements, Roux's move count is only reduced by a couple or few moves. This is versus getting edges oriented in CFOP and then having a large move count drop from a single-step last layer. Let's say current Roux averages 46-48 moves and CFOP is 55-60, or about a 5-10 move difference depending on the two solvers you compare. Then let's say that ZBLS + ZBLL gets CFOP down to sub-50 moves. That puts it right there with Roux's move count. There is one thing so far that can preserve that gap for the corners first blockbuilding hybrid methods: L5C. In the SL5C thread in the Solving to Pseudo link above, I show that with SL5C and other techniques, 42 can average 37-38 moves. Nautilus can also use L5C and the same plus additional techniques to reach a similar move count. The current issue is that a great L7E method hasn't been developed and Nautilus L6E isn't researched.
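To put those movecounts into time terms (with an assumed, equal <Tps> of 10 for illustration, even though Tps really differs by method and solver), the 10-move gap between a 57-move CFOP solve and a 47-move Roux solve is worth
\[ \Delta t = \frac{57 - 47}{10} = 1.0\ \text{s}, \]
and a 37-38 move 42 solve would be worth roughly another second on top of that, provided the extra techniques don't give that second back in recognition time or lost Tps.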
 
Thanks, I appreciate the effort you put into your response and for adding more for the Roux section.

Could these eventually converge?
They will either converge or be discarded, depending on whether they're actually properly good or not.

So this presents some trouble method name wise if Roux users want to eventually use L5E or L7E depending on the scramble.
This is true to an extent, but one thing that I think has become apparent is that insisting on calling a method by a certain name eventually becomes a bit silly. We call it what we want to call it. However, what might have been considered a method 10 years ago (think ZB) is now really something you use in your solves. So I could imagine the definition of Roux expanding. 42 isn't really a different method, rather a technique that the CF+block methods use. Same with Nautilus or whatever. Dylan Miller called Yiheng the "perfect ZZ solver". You could probably say a very similar thing about SPV for 42 or another Roux-like method. That's why it's better imo to think about methods as techniques that all service a certain method centre rather than as discrete islands.

I agree that L5C will eventually have to be implemented. It just seems the way to squeeze the most out of the method, even though it is incredibly high effort. I might be wrong, but it makes sense to me.
 
Great essay and read. I agree that if you want to be as fast as possible at any cost then you should be doing either Roux or the CFOP/ZZ/Petrus hybrid. Using standalone methods will get you very far, but seeing how advanced cubing is getting, it would be the equivalent of still doing 2 look for some OLL cases: sure, you can do it, but it is objectively better to just learn the OLL alg, just like joining methods here.

Dylan Miller called Yiheng the "perfect ZZ solver".
alright then
 
That's a good way to put it. And you go and change that fact!
 
Finally some discussion on the actual shape of methodspace!

Reading through the post, it seems like this idea of convergence is similar to the optimisation which I outlined in the top-down vs bottom-up method development strategies thread a while back (which hopefully means that both are correct!), although it appears your idea is birthed from a much more macro observation of current trends rather than optimising against some theoretical "optimal method" metric (which may just be the difference between physicists and mathematicians lol). It's also something of a formalisation of the BCE idea which kirjava talked about a while ago (although I don't seem to be able to find the thread anymore :/ )

Because of that more empirical approach, I'm curious as to whether or not you think the paradigms you've outlined above are actually the only paradigms good enough to be used in high-level speedsolving, or if there may exist more esoteric "method shapes" which may be just as competitive, but are not as simple an idea to start off with as "first 2 layers" or "corners first" (or "edges first" to throw in a non-viable example). The vast majority of method ideas with potential do fall into only a few categories, but I wonder if that's because of the influence of f2l/cf/etc methods (and because they're easier to think of) rather than because they are necessarily the best.

As a final question, do you think it is worth trying to deviate from the most successful method shapes? Specifically, is it worth it to try to create a truly new method (I do have a few ideas on how that may be done) or is this mostly a fool's errand while the best methods are sufficient for even world class solvers?

Interesting post and I'm glad to see there are other people writing on the shape of method space as a whole!
 
Finally some discussion on the actual shape of methodspace!
You bet! There's actually more to this than going through all possible steps, who would've thought?

Reading through the post, it seems like this idea of convergence is similar to the optimisation which I outlined in the top-down vs bottom-up method development strategies thread a while back (which hopefully means that both are correct!), although it appears your idea is birthed from a much more macro observation of current trends rather than optimising against some theoretical "optimal method" metric (which may just be the difference between physicists and mathematicians lol). It's also something of a formalisation of the BCE idea which kirjava talked about a while ago (although I don't seem to be able to find the thread anymore :/ )
It's similar in the sense that your top-down roughly corresponds to my CF and your bottom-up to my LBL, and I think yours is a bit more general, but there certainly is a good amount of overlap, and that generality does probably come from the different approaches (go physics). It's interesting that there seems to be very little good for speed outside of this binary (see my reply to your next paragraph); however, I reckon there's much more to be done to explore the sub-puzzles themselves and the overlap of steps and all that to make more optimised methods (such as how often you should do EOCross, when you should just go for blocks etc., all important questions for shaving .1, .2 seconds off at the top). I find it interesting that this is already happening organically at the top, which makes sense from a Darwinian perspective, but I guess one big question is whether we're tending towards a local or a global time minimum. It could be that the CF and LBL methods are both minimised in their respective regions of this methodspace by Roux (or some other variant) and F2L+LL (FLL?) but one is lower than the other, or that they are equal; but there could also be another completely separate region with a lower minimum that just hasn't been probed yet. This leads quite nicely to your next paragraph...


Because of that more empirical approach, I'm curious as to whether or not you think the paradigms you've outlined above are actually the only paradigms good enough to be used in high-level speedsolving, or if there may exist more esoteric "method shapes" which may be just as competitive, but are not as simple an idea to start off with as "first 2 layers" or "corners first" (or "edges first" to throw in a non-viable example). The vast majority of method ideas with potential do fall into only a few categories, but I wonder if that's because of the influence of f2l/cf/etc methods (and because they're easier to think of) rather than because they are necessarily the best.
I think they are currently. DR and maybe BCE (if this isn't just the combination of CF and LBL) are the only other "shapes" that really exist. Unfortunately for DR, despite SSC and other methods, these ideas have only properly flourished in the realm of FMC, which I think is a shame. For BCE, I still have hopes and dreams that 2x2x3->L6C->L7E using <RUSF> has potential, but to take language from your key steps and meta methods post, I haven't found a nice solution to that subpuzzle. I would want a solution that is LSE-like, in the sense that it's fully intuitive and relatively close to optimal, but it seems that all "good" solutions are mostly brute force to set up to an algset (and this is why Roux is probably the best BCE method). As I said before, there may be other shapes out there, so to answer your final question, I don't think we should stop looking for them. There might be one that does indeed work and minimise that time further and if you find it you could change speedsolving forever, so I guess we should try to find it. However, this comes with the health warning that there shouldn't be any expectation to find this. It could be the case that indeed, for human hands, LBL and CF methods are the best, although this doesn't mean that CFOP (FLL) or Roux are the best methods in these method centres.

It would also be good to standardise terminology. Are they method shapes or centres (or even poles if we wanna talk analysis, something I barely understand)? Does a shape have a centre (CF with Roux or similar methods)? Should we really be calling modern LBL type methods "CFOP" or should we switch to FLL or something else? Should we call these Roux-like methods "Roux" or keep 42, Nautilus etc., or should they be more like ZB, which is now really something you use in CFOP? Is there a difference between bottom-up and LBL type methods, and then top-down and CF type? I don't know.

I appreciate the read and the reply and hopefully it's given you things to think about.

And finally, I wish that we could just solve \( \frac{d}{dt}\left[ \text{method}\right] = 0 \) to quantify this all.
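Half-seriously extending that wish (with notation that is entirely mine, not anything standard), the whole thread is in effect searching for
\[ m^{*} = \arg\min_{m \in \mathcal{M}} \; \mathbb{E}\!\left[\, t_{\text{start}} + t_{\text{stop}} + \frac{T(m)}{\left\langle \text{Tps} \right\rangle (m)} \,\right], \]
where \( \mathcal{M} \) is methodspace and the expectation is over scrambles. Importing techniques into an existing method is then a local-descent step: it can only find the bottom of the basin (the method centre) you start in, which is exactly the local-versus-global minimum question above.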
 

Nice detailed description of method convergence. I am excited to see the work done on this. It can bring clarity to all the novel speedsolving 3x3 methods!
 
Considering that inspection skill is improving at a scary rate, with Yiheng's cracked ability to virtually 1 look the solve, the mid-solve option select decision process issue dissolves. Someone teach the kid ZZ. He could probably plan EOCross+3 with no sweat :oops: - EOCross is only ~8 moves, and since all of the edges are oriented he kind of only needs to track corner orientation (which he has proven that he is more than capable of, since the dude plans full F2L...)

I could realistically see a future where multislotting EOL2S (last two slots in ZZ) can be implemented too. Potentially possible to anticipate during inspection too - dude, 3x3 is turning into SQ1 where the whole solve is like 2 look...

Also, insert funny Jayden McNeill quote of ZZ being "boneless" CFOP.
 
Funny you mention it. We have a project going on to document the best L2P solutions for all 2 slot combos for garbage cases.
 
It would also be good to standardise terminology. Are they method shapes or centres (or even poles if we wanna talk analysis, something I barely understand)? Does a shape have a centre (CF with Roux or similar methods)? Should we really be calling modern LBL type methods "CFOP" or should we switch to FLL or something else? Should we call these Roux-like methods "Roux" or keep 42, Nautilus etc., or should they be more like ZB, which is now really something you use in CFOP? Is there a difference between bottom-up and LBL type methods, and then top-down and CF type? I don't know.
If we go in the direction of combining Nautilus, Roux, and 42, it might make more sense to call it Waterman instead of Roux. Waterman is the original BCE method, or at least the one from its era that was fully developed. Roux was influenced by Waterman and can be seen as a variant of Waterman where you solve some edges with the corners. Nautilus, Roux, 42, SSRC, and WaterRoux all have that common Waterman ancestor: build most of a layer at the start, do something with the corners while solving edges where possible, then solve the last x edges. Each of them varies in the number of edges solved after the initial blockbuilding step. Though I think Nautilus is a little different from the others, with its 3D blockbuilding after the first step and the goal of blockbuilding to algorithmic LXE. Not saying I'm opposed to it being grouped with the others, just that the goals feel different.

Or we just call it BCE or another acronym because it wouldn't be fair to group everyone's hard work under another person's name. We developers of the methods in the list have all contributed major steps and techniques that will be used if a convergence occurs.
 