
Are all "destroy-and-repair" algorithms just commutators?

Celeritardum

I was rewatching Jperm's video on how algorithms work the other day when I had a flash of insight. The sexysledge algorithm (R U R' U' R' F R F'), which he previously classified as "destroy-and-repair" because it destroys F2L (R U R' U') and then fixes it in a different way (R' F R F'), is actually just two commutators combined: the r U R' U' r' F R F' corner commutator and the R U R' U' M' U R U' r' edge commutator. How does this cancellation work, and does it apply to other destroy-and-repair algorithms like Sune or the U perm, where you take out two F2L slots (of the form ABCA'BC)? Are all seemingly destroy-and-repair algorithms just some form of commutators (ABA'B'), conjugates (ABA'), or some other type of algorithm?
 
Loads of counterexamples:

R U R' F' L' U' L F
R U2 R' U2 R' F R F' (of course you could argue that this is just Antisune cancel into sexysledge…)
R U R2 F R F2 U F
R U2 R2 U' R2 U' R2 U2 R

(the last two could be contorted into commutators if you allow nonstandard moves like mirroring the cube state)
(edit: The first two are commutators, but not of the typical insert/interchange kind; see post #11 by Christopher.)

You should also be mindful as to how much you're willing to allow combining commutators, because any move sequence where the turns "balance out" can be rewritten as a sequence of commutators. Just to take a silly example, R U R' U' R' F R F' already is a concatenation of two commutators: (R U R' U') is a commutator and so is (R' F R F').

(Related reading: commutator subgroups.)
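That balancing condition can be checked mechanically. Here's a quick stdlib-Python sketch (the tokenizer convention is my own assumption; it only handles outer face turns written like R, U', F2):

```python
# Net signed quarter turns per face ("exponent sum"). In the free group on
# the face turns, a word is a product of commutators exactly when every
# face's net count is zero -- the "turns balance out" condition above.
import re

def net_turns(alg):
    counts = {}
    for face, suffix in re.findall(r"([A-Z])([2'])?", alg):
        counts[face] = counts.get(face, 0) + {"": 1, "'": -1, "2": 2}[suffix]
    return counts

print(net_turns("R U R' U' R' F R F'"))   # {'R': 0, 'U': 0, 'F': 0}
print(net_turns("R U R' U R U2 R'"))      # {'R': 0, 'U': 4}
```

Note the caveat: Sune nets four U turns, so it fails the zero-count test in the free group, but on the actual cube U^4 is the identity, so this is only a necessary-condition sketch for free words, not the full story for the cube group itself.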

---

The actual feature you're noticing is probably more closely related to slicey shenanigans.

R U R' U' R' F R F'
r U R' U' r' F R F' = R M' U R' U' M R' F R F'
These two things differ only by two slice moves, and so the second alg can be viewed as inserting M' U R' U' M U R U' (= [M', U R' U'], a commutator that does an edge 3-cycle) into sexysledge.
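The splice can be verified concretely with a small facelet-permutation model (stdlib Python; the Kociemba-style facelet numbering and move cycles below are my own reconstruction, so treat it as a sketch):

```python
# Minimal facelet model of the 3x3x3: positions 0..53 (U=0-8, R=9-17,
# F=18-26, D=27-35, L=36-44, B=45-53), each move given by its facelet cycles.
CYCLES = {
    "U": [(0,2,8,6),(1,5,7,3),(18,36,45,9),(19,37,46,10),(20,38,47,11)],
    "R": [(9,11,17,15),(10,14,16,12),(20,2,51,29),(23,5,48,32),(26,8,45,35)],
    "F": [(18,20,26,24),(19,23,25,21),(6,9,29,44),(7,12,28,41),(8,15,27,38)],
    "M": [(1,19,28,52),(4,22,31,49),(7,25,34,46)],   # slice, L direction
}

def perm(alg):
    """Overall permutation of an alg; p[i] = final position of sticker i."""
    wide = {"r": "R M'", "r'": "R' M", "r2": "R2 M2"}
    p = list(range(54))
    for move in " ".join(wide.get(m, m) for m in alg.split()).split():
        q = list(range(54))
        for cyc in CYCLES[move[0]]:
            for a, b in zip(cyc, cyc[1:] + cyc[:1]):
                q[a] = b
        for _ in range({"": 1, "2": 2, "'": 3}[move[1:]]):
            p = [q[x] for x in p]
    return p

# Wide sexysledge is literally sexysledge with [M', U R' U'] spliced in after
# the first R -- the commutator's trailing U R U' cancels into the U R' U'.
wide = perm("r U R' U' r' F R F'")
assert wide == perm("R M' U R' U' M R' F R F'")
assert wide == perm("R M' U R' U' M U R U' U R' U' R' F R F'")
assert perm("M' U R' U' M U R U'") != list(range(54))  # the insert does something
print("insertion identity checks out")
```

The same perm() harness applies directly to the Sune / wide Sune pair below.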

This logic applies to any algs that are slice variations of each other. For another example, consider Sune vs wide Sune:
R U R' U R U2 R'
r U R' U R U2 r' = R M' U R' U R U2 M R'
This one is equivalent to inserting [M', U R' U R U2], which looks funny, but turns out to be just an edge 3-cycle. And indeed Sune and wide Sune differ in their effects by an edge 3-cycle.

---

Yet another perspective you can take is that you're really looking at different ways of taking out pairs/blocks and seeing how to combine them; the "difference" between two ways then gives you an alg that preserves everything you want to preserve (while possibly affecting things you don't need to preserve).

Look at just R U R' U' R' and r U R' U' r'; these affect the F2L pieces in exactly the same way, so in a sense they're "interchangeable". With any last layer alg that starts with one of those, you could modify it to start with the other and it'll still be a last layer alg.
Example: widened T perm r U R' U' r' F R2 U' R' U' R U R' F'.

Something else you can do here is to try R U R' U' R' R U R U' R' (which does nothing), then modify the first half to get r U R' U' r' R U R U' R', which is now a last layer alg that does something. This is also equivalent to r U R' U' r' (R U R' U' R')'.
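The "interchangeable on F2L" claim can be spot-checked with a facelet model (stdlib Python; the facelet numbering and move cycles are my own reconstruction, so this is a sketch rather than anything from the post):

```python
# Facelet positions 0..53 (U=0-8, R=9-17, F=18-26, D=27-35, L=36-44, B=45-53).
CYCLES = {
    "U": [(0,2,8,6),(1,5,7,3),(18,36,45,9),(19,37,46,10),(20,38,47,11)],
    "R": [(9,11,17,15),(10,14,16,12),(20,2,51,29),(23,5,48,32),(26,8,45,35)],
    "M": [(1,19,28,52),(4,22,31,49),(7,25,34,46)],   # slice, L direction
}

def perm(alg):
    wide = {"r": "R M'", "r'": "R' M"}
    p = list(range(54))
    for move in " ".join(wide.get(m, m) for m in alg.split()).split():
        q = list(range(54))
        for cyc in CYCLES[move[0]]:
            for a, b in zip(cyc, cyc[1:] + cyc[:1]):
                q[a] = b
        for _ in range({"": 1, "2": 2, "'": 3}[move[1:]]):
            p = [q[x] for x in p]
    return p

# Last-layer facelets: the U face plus the top row of each side face.
LAST_LAYER = set(range(9)) | {9, 10, 11, 18, 19, 20, 36, 37, 38, 45, 46, 47}

p = perm("r U R' U' r' R U R U' R'")
assert all(p[i] == i for i in range(54) if i not in LAST_LAYER)  # F2L untouched
assert p != list(range(54))                                      # not the identity
print("last layer alg confirmed")
```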

Or look at R U R' U versus R U2 R'. Start with Bruno:
R U2 R2 U' R2 U' R2 U2 R = R U2 R' R' U' R2 U' R' R' U2 R
Modify the initial R U2 R' into R U R' U:
R U R' U R' U' R2 U' R' R' U2 R
Modify the final R' U2 R into U R' U R:
R U R' U R' U' R2 U' R' U R' U R

Now you've created a U perm!
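That construction can be checked end to end with a facelet model (stdlib Python, my own reconstruction; in this model the result comes out as a U perm up to a final U2 AUF, which is the usual convention for PLLs anyway):

```python
# Facelet positions 0..53 (U=0-8, R=9-17, F=18-26, D=27-35, L=36-44, B=45-53);
# only R and U are needed for these 2-gen algs.
CYCLES = {
    "U": [(0,2,8,6),(1,5,7,3),(18,36,45,9),(19,37,46,10),(20,38,47,11)],
    "R": [(9,11,17,15),(10,14,16,12),(20,2,51,29),(23,5,48,32),(26,8,45,35)],
}

def perm(alg):
    p = list(range(54))
    for move in alg.split():
        q = list(range(54))
        for cyc in CYCLES[move[0]]:
            for a, b in zip(cyc, cyc[1:] + cyc[:1]):
                q[a] = b
        for _ in range({"": 1, "2": 2, "'": 3}[move[1:]]):
            p = [q[x] for x in p]
    return p

# Splitting Bruno's R2s into R' R' changes nothing.
assert perm("R U2 R2 U' R2 U' R2 U2 R") == perm("R U2 R' R' U' R2 U' R' R' U2 R")

# The modified alg is still a last layer alg...
LAST_LAYER = set(range(9)) | {9, 10, 11, 18, 19, 20, 36, 37, 38, 45, 46, 47}
new = perm("R U R' U R' U' R2 U' R' U R' U R")
assert all(new[i] == i for i in range(54) if i not in LAST_LAYER)

# ...and after one more U2 the only displaced stickers are the six belonging
# to the UB, UL, UR edges: a pure edge 3-cycle, i.e. a U perm.
with_auf = perm("R U R' U R' U' R2 U' R' U R' U R U2")
assert sorted(i for i in range(54) if with_auf[i] != i) == [1, 3, 5, 10, 37, 46]
print("U perm (up to AUF) confirmed")
```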

---

Something else to note is that r U R' U' r' F R F' is a corner commutator, but not when written like this. If you rewrite it without wide moves, the commutator structure becomes extremely apparent: L F R' F' L' F R F' = [L, F R' F'].

Same for R U R' U' R' F R F', which is a block commutator, but again, not when written like this. Rewrite it with wide moves to get: l F R' F' l' F R F' = [l, F R' F'].
 
While possibly not mathematically correct, I do have a fairly solid opinion on this. IMO all algorithms are commutators in a way. They take some unaccomplished task, use moves that take those pieces out of place and then put them back into place in the desired location or orientation.

However, in order for me personally to refer to a series of moves as a commutator, one or both of two things needs to be taking place. It needs to be a pure cycle, one in which only the desired 3, 4, 5, etc. pieces are being affected; the commutators used to fix big cube centers are a great example of pure 3-cycle commutators. And/or I need to be able to visually see and verbally explain what is actually taking place during each step, which comes up more with non-pure commutators, such as flipping edges or fixing pieces early in the solve on more complicated puzzles. For me to call it a commutator, I want to be able to point out which pieces (the 3 in a 3-cycle, say) I'm trying to move, and also point out exactly why they're being changed as I make certain moves.
 
While possibly not mathematically correct
Indeed it isn't. For any two states a and b, the commutator [a,b] = (a^-1)(b^-1)ab has even parity, so if an algorithm brings the cube into a state with odd parity, this state cannot be written as a commutator of two states.
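This parity fact is easy to sanity-check numerically. A stdlib-Python sketch of the standard argument (sign is a homomorphism to {+1, -1}, so sign([a,b]) = sign(a)^-1 · sign(b)^-1 · sign(a) · sign(b) = +1):

```python
# Every commutator of permutations is an even permutation.
import random

def sign(p):                         # parity via inversion count
    return (-1) ** sum(1 for i in range(len(p)) for j in range(i) if p[j] > p[i])

def compose(p, q):                   # apply p, then q
    return [q[x] for x in p]

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return inv

random.seed(42)
for _ in range(200):
    a, b = random.sample(range(8), 8), random.sample(range(8), 8)
    comm = compose(compose(inverse(a), inverse(b)), compose(a, b))
    assert sign(comm) == 1           # never odd, no matter what a and b are
print("all commutators even")
```

(On the 3x3x3 the relevant parity is the corner permutation's sign, which always matches the edge permutation's sign; a single quarter turn makes both odd.)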
 
Indeed it isn't. For any two states a and b, the commutator [a,b] = (a^-1)(b^-1)ab has even parity, so if an algorithm brings the cube into a state with odd parity, this state cannot be written as a commutator of two states.
Still, an interesting question: can every cube state with an even permutation be reached with a single commutator (no matter how complicated it would be)?
 
Still, an interesting question: can every cube state with an even permutation be reached with a single commutator (no matter how complicated it would be)?

Also, why I consider my opinion to be semi valid.
As the 4x4 parities show, parts of the puzzle require an even number of turns to return to a solvable state. Modern algorithms simplify moving and rotating pieces. So yes, if your OLL is an odd number of turns, chances are that your PLL will also be an odd number of turns. Odd plus odd equals even. IMO, these modern algs balance each other out into an even number of turns in order to create what is essentially a commutator. However, not one that I'm able to see or explain.

I guess my original point is that modern algorithms use many simplifying techniques, such as r turns, that lower the number of turns. I still believe that most, if not all algorithms used to return a puzzle to a solved state are in some way a commutator. I just don't like referring to them in that way because of my inability to explain what's happening while I perform the moves.
 
I had a thought experiment a few years ago similar to this. It was a "reaction" to a Jperm video discussing algorithms where he describes them as similar to magic. This is paraphrasing - more specifically, he was saying that we don't really understand the algorithms that computers generate. He sets up, if I remember correctly, 2 types of algorithms: (1) "destroy and repair" and (2) commutators/conjugates.

My main hypothesis was that algorithms, while not understandable from the perspective of how they are created (technically understandable from a comp-sci perspective), can be understood/described/analyzed as a composite of different commutators (some type of structure where you would have Com (1) + Com (2) + Com (3) = some algorithm, where there is some type of cancellation between coms). One analysis that I used was Y-Perm.

F - Setup
R U' R' - "Swap" (a)*
U' - Interchange
R U R' - "Undo Swap" (b)*
F' - Undo Setup
R U R' U' - "Mini Com"
R' F R F' - "Mini Com"

* - (a) and (b) are also "mini coms" - R U R' U' with some type of cancellation.

There are a lot of problems with my hypothesis, very obviously. R U' R' U' R U R' (U) is not a com. It breaks the structure of a commutator by nature, even if it comes in the correct form.

I don't think my hypothesis is correct - or at least it is not nearly as simple as I had proposed. I still think that this framework could be used to help analyze algs (I'm sure there are algs where this idea holds up but I never really expanded on the idea so this is pure speculation). I do still think that algs are far more understandable than Jperm had expressed, or at least that we have the tools to break down, analyze, and understand algorithms (not all, but more than Jperm suggests). To really sum things up - I think that computer algs are magical but not pure magic.
 
Loads of counterexamples:

R U R' F' L' U' L F
R U2 R' U2 R' F R F' (of course you could argue that this is just Antisune cancel into sexysledge…)
R U R2 F R F2 U F
R U2 R2 U' R2 U' R2 U2 R
How did you prove that these aren't commutators? The last two aren't because they change center orientations, but the first two don't, so that method doesn't rule them out. The more general method I know to prove that something isn't a commutator is to find a finite quotient of the algorithm group and check that it's not a commutator there, which you can do with a finite computation provided the group is small enough, but I can't find any good quotients.
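The "finite computation" is easy to illustrate on a toy group (S4 here is just an illustration of the enumeration idea, not an actual quotient of the cube group):

```python
# In a small enough group you can simply enumerate every commutator.
# In S_4 the set {[a, b]} turns out to be exactly A_4, so there the test
# "is it an even permutation?" completely decides commutator-ness.
from itertools import permutations

def compose(p, q):                  # apply p, then q
    return tuple(q[x] for x in p)

def inverse(p):
    inv = [0] * len(p)
    for i, x in enumerate(p):
        inv[x] = i
    return tuple(inv)

def sign(p):                        # parity via inversion count
    return (-1) ** sum(1 for i in range(len(p)) for j in range(i) if p[j] > p[i])

S4 = list(permutations(range(4)))
comms = {compose(compose(inverse(a), inverse(b)), compose(a, b))
         for a in S4 for b in S4}
evens = {p for p in S4 if sign(p) == 1}
print(comms == evens, len(comms))   # True 12
```

The catch, as noted above, is finding a quotient of the cube group that is both small enough to enumerate and fine enough to distinguish the candidate alg.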
 
One analysis that I used was Y-Perm.

F - Setup
R U' R' - "Swap" (a)*
U' - Interchange
R U R' - "Undo Swap" (b)*
F' - Undo Setup
R U R' U' - "Mini Com"
R' F R F' - "Mini Com"

* - (a) and (b) are also "mini coms" - R U R' U' with some type of cancellation.

There are a lot of problems with my hypothesis, very obviously. R U' R' U' R U R' (U) is not a com. It breaks the structure of a commutator by nature, even if it comes in the correct form.
Yeah, this process is called decomposition. It can sometimes lead to understanding, but as you mentioned, it has its problems. For example, when I decomposed all popular (at the time, at least) 2-cycle PLLs that were on the PLL speedsolving wiki page (Thread), it really didn't help to understand the algorithms (for the most part... except to identify where the extra quarter turn is).


I don't think my hypothesis is correct - or at least it is not nearly as simple as I had proposed. I still think that this framework could be used to help analyze algs (I'm sure there are algs where this idea holds up but I never really expanded on the idea so this is pure speculation). I do still think that algs are far more understandable than Jperm had expressed, or at least that we have the tools to break down, analyze, and understand algorithms (not all, but more than Jperm suggests). To really sum things up - I think that computer algs are magical but not pure magic.
Indeed.
Loads of counterexamples:
Just to add one additional "algorithm type", this post.
 
I'm not sure what the date "January 2014" in the bottom margin of that paper means, but I may have been the first to prove it (unless someone else proved it before then and didn't announce it or publish it online somewhere), even if my proof was written in January of 2014 and not published until 8 years after the fact (for some reason or another).

(I provided a 2x2x2, several 3x3x3, and a 4x4x4 example in that thread to provide some evidence that I actually did it. I have a better 3x3x3 example here.)

I also proved it in a way very similar to that of the author of that paper (Timothy Sun), on December 14, 2014.
  • So I proved it in 2 distinctly different ways, because, as you can see, I first mentioned that I proved it in 2013.
  • I actually proved it in 3 distinctly different ways (the part dealing with piece orientation anyway, which was the main thing to prove).
    • A Mathematical proof (Those with a moderate math background can understand.)
    • A Programming proof (Those with a basic math background and a moderate programming background can understand.)
    • A "By-hand" proof (Those with no programming knowledge and a basic math background can understand.)

I plan to publish a book in 2026 which contains both proofs in a form that many more cubers can actually understand. (Busy with other things at the moment.)

I thought my second (shorter) proof that's similar to (but not exactly like) how Tim proved it was a very cheap (and unsatisfactory) way to prove it, because it still leaves the reader in the dark about every possible type of single commutator solution that can exist to solve/generate any nxnxn supercube position in the commutator subgroup (as well as not seeing the "bigger picture").
  • The way I handled presenting my second (cheap) proof is to provide all of the background information before I showed that (second) proof, where I provide internal links to previous parts of the text to make things clear. (People still would need to read most of the content in the book prior, but they can (in theory) claim that the "short summary" presented is "sufficient" proof of the theorem.)
  • For example on page 14 of the paper,
10. CONCLUSION. Our method produces a single commutator for each even position of the Rubik’s Cube, but a few questions remain about the variety of such commutators. For the 3×3×3 cube, our commutators correspond to one of two conjugacy classes. With a little extra work, Proposition 7.3 can be extended to show that only cycle type (10+, 1+, 1) is needed. Which other cycle types are sufficient by themselves?
In my book, I start from the ground up (I assume that the cuber has zero knowledge about anything with cube theory) and build and build until one can "easily" create every possible single commutator cycle type possible (with confidence and exposed to several concrete examples).

When considering every possible single commutator cycle solution, there are 4 "rules" (which I made from honoring the cube laws) which must be followed for corner and middle edge orientations when opting for "lower disjoint cycle" solutions.
  • All of these rules can be conveniently bypassed if we go for the higher cycle (like the 10-cycle as we both did, him in his proof, and me in my second proof).
  • These rules are significantly more complicated in concept than opting for larger cycle solutions.
  • (I can't explain why I always discover the hard way to do something before I discover the easy way, but I guess better it be that I toiled through the hard way first, so that I had the "motivation" to find a shorter and easier proof.)
  • Even though my second proof (and Tim's proof) are similar, ironically I cannot understand his proof entirely (at least at first glance).
    • If I cannot understand it at first glance, I really question how many people who have read (or will read) it will truly understand it and all of its implications.
    • I therefore cannot confirm whether it has errors or not (or whether all of its "implications" follow the cube laws or not), but by the "sound" of it, it could very well be correct.
    • And with all due respect, I don't know if the "peer" who reviewed it has the experience to confirm that or not. (A strong knowledge of cube theory is required, not just a strong mathematical background!)

Yes, I understand that (after emailing Tom Rokicki on January 30, 2015 where he told me):
[...] Of course, a 1000 page document is probably not going to be published by any standard journal or conference [...]

So, even if I cannot understand Tim's paper in full due to its compact length that is "fit" for a mathematical journal, I really doubt that, if I were to give the "abridged" version of my proof in a journal, it would do anything other than sacrifice understandability for most readers. (And my goal is to make it understandable to as many cubers as possible.)

Congrats to Tim for being the first to officially publish a (potential) proof in a mathematical journal though! (I would never have published my proofs in a mathematical journal.) Someone was going to do it before (or after) I got to publishing my book. Better before than after I publish my book (so that I at least know they didn't get "inspired" by my "informal" / unconventional / long proof).

Also for those who are curious, from Tim's paper,
Cycle type (12), the one we had to avoid earlier, is particularly tantalizing: it would be more aesthetically pleasing to have a solution that mirrors that of the corners. Perhaps one can guarantee conjugacy using more deliberate choices of 12-cycles, rather than ones arbitrarily generated by Bertram’s theorem.
No, it's not possible for a 12-cycle solution of middle edges to mirror that of the corners (by the cube law of middle edge orientations).
  • By the second way I proved this theorem (the proof that's similar, but not identical, to Tim's), it's very apparent that this is the case.
  • However I did (as I do for the 8-cycles of corners) create a complete 12-cycle solution set for all middle edge even permutation cases (found by a solver I wrote in Mathematica). So it IS possible to use a two 12-cycle solution for all middle edge even permutation positions (mixed with middle edge orientations), but there is no one "method" which tackles all cases. (It's strictly a case-by-case basis.)
  • Furthermore, I have a complete:
    • Two 6-cycle corner solution set
    • Two 8-cycle corner solution set
    • Two 10-cycle middle edge solution set
    • Two 12-cycle middle edge solution set
  • (All are in PDFs dated January 1st, 2015, but I found them all sometime in 2013 or 2014. 1/1/2015 was just the date I recreated the PDFs to be placed in a folder as supplement material for my book.)
------------------
Not sure whether those reading will take this post as a rant, informative, annoying, an FYI, etc., but I guess I had to comment on Tim's paper sooner or later!

But a TLDR is:
  • I proved this before Tim did (and provided concrete 2x2x2 - 4x4x4 example solves online years ago).
  • Tim's proof may or may not be 100% correct. (It needs more peer reviews to confirm, IMO.)
  • Tim's proof doesn't give a glimpse of the "bigger picture".
  • It certainly cannot be comprehended in full by most cubers (and is therefore as useful to them as my claim that I proved it in the past without actually showing it, to be frank).
  • Some of the "open problems" mentioned at the end of the paper would have been covered in the paper, had his proof been what I would have considered "complete".
  • I am kind of slow to publish my book (and proofs of this theorem), but I would have never been able to publish my proofs in a mathematical journal, because they are too long. (I wanted to allow any cuber who thirsts to learn about cube theory to be able to understand it with confidence.)
 