Kirjava
People have been throwing around a few ideas for LL methods lately. After discounting several ideas and attempts, I've settled on a general system that I think could be good. I've mentioned it to a few people in the past, but I want to finally get it completed.
The general idea is this: for each LL case, a combination of two algs that solves it will be generated from a pool of good/fast algs. For any LL position, one algorithm is performed to bring the last layer to a position that can be solved by a second algorithm from the same pool, so you will never have a bad LL case, since both algs are fast. This is exactly the same technique as the OLLCP system I am using and Petrus' 270 ZBLL system. Unlike those two implementations, though, no overhead is added by using this: 270 ZBLL is worse than ZBLL alone (assuming the system is fully learnt), whereas this (ignoring issues with recognition and actually learning the system) should be the best two-alg last layer system, as it is technically an implementation of 1LLL. Recognition for the second alg should be better than PLL or ELL recognition, as the cases to recognise are much more distinctive. Recognition for the first alg appears at first glance to be the same as 1LLL recognition, but I will go into an improvement on that which will also aid learning. Also, the pool of algorithms will likely end up quite a bit smaller than the total for OLL/PLL, and will essentially be comprised of algorithms you already know. Recognition is the only thing that needs to be learnt.
While the system will initially be two-look, the technique used to solve the last layer in this way is ultimately intended to be one-look, but I am unsure whether that is feasible when applied to this large a number of cases. The ability to one-look cases should come with practice, but may not.
When I learnt OLLCP, I generated combinations to solve any case and stored them in a table. While this is fine for a small case set like OLLCP (lol), the estimated time for learning this system in table form would be about a year.
I have a program that takes a list of algs and produces a list of every LL case together with every possible combination of algs that solves it. This is the method's raw data. It can be used to learn from as-is, but I think it can be made far more learnable.
Here's the current list of algs I'm using;
Please offer revisions (tell me which algs are bad and add good ones).
Code:
[qw(1 3 2 0 0 0 0 0 2 3 0 1 1 0 1 1 Sune)],
[qw(3 0 2 1 0 0 0 0 0 1 2 3 2 1 2 1 DoubleSune)],
[qw(0 1 2 3 0 0 0 0 2 3 0 1 0 1 0 2 TripleSune)],
[qw(0 3 2 1 0 0 0 0 0 2 1 3 0 0 0 0 TPerm)],
[qw(0 3 1 2 0 0 1 1 2 3 0 1 1 0 1 1 FatSune)],
[qw(0 2 3 1 0 1 1 0 0 1 2 3 2 1 2 1 DblFatSune)],
[qw(2 3 0 1 0 0 1 1 2 3 0 1 2 0 2 2 RUR2FRF2UF)], # PureSune
[qw(0 1 2 3 0 1 1 0 0 1 2 3 1 1 1 0 LU2L'U'LU'xU2R'U'RU'r')], # PureFat
[qw(2 3 0 1 1 1 1 1 2 3 0 1 2 0 2 2 FURU'R'F'R'F'U'FUR)], # AllSune
[qw(2 3 0 1 0 0 0 0 0 1 2 3 1 1 0 1 R'U2R2UR2URU'RU'R')], # SuneH
[qw(0 3 1 2 0 0 1 1 0 2 3 1 1 1 0 1 L'U2LU2LF'L'F)], # C2
[qw(0 3 1 2 0 0 1 1 2 1 3 0 1 1 0 1 FR'F'RU2RU2R')], # C3
[qw(1 2 0 3 0 1 1 0 1 0 3 2 0 2 1 0 FRUR'U'F')], # FRURUF
[qw(2 0 1 3 1 0 1 0 0 1 2 3 2 2 1 1 FRUR'U'RUR'U'F')], # DblFRURUF
[qw(0 1 2 3 0 0 0 0 3 1 0 2 2 0 0 1 L'R'D2RU2R'D2RU2L)], # Opp3
[qw(1 2 3 0 0 0 0 0 2 0 3 1 1 1 0 1 Niklas)], # RU'L'UR'U'L
[qw(3 1 0 2 0 0 1 1 1 2 0 3 1 1 1 0 rU'r'U'rUr'F'UF)], # WeirdNiklas
[qw(2 3 0 1 1 0 0 1 1 2 0 3 1 1 1 0 RU'R'U'F'U2'FU2RU2R')], # FatNiklas
[qw(0 1 2 3 0 0 0 0 0 2 3 1 0 0 0 0 RB'RF2R'BRF2R2)], # A Perm
[qw(0 3 1 2 0 0 0 0 0 1 2 3 0 0 0 0 M2U'MU2M'U'M2)], # U Perm
[qw(2 3 0 1 0 0 0 0 0 1 2 3 0 0 0 0 M2UM2U2M2UM2)], # H Perm
[qw(1 2 0 3 0 1 1 0 1 3 2 0 0 2 0 1 SexyHammer)], # RUR'U'R'FRF'
[qw(0 1 2 3 0 0 0 0 1 3 2 0 0 2 0 1 rUR'U'r'FRF')], # FatSexyHammer
[qw(3 1 2 0 0 0 1 1 1 0 2 3 2 0 0 1 R'FRUR'U'F'UR)], # HardP
[qw(0 2 3 1 0 0 1 1 1 0 3 2 1 0 0 2 R'U'R'FRF'UR)], # EasyC
[qw(1 0 3 2 1 1 1 1 0 1 2 3 0 0 0 0 r'UM2etc)], # ZFlip
[qw(2 3 0 1 1 1 1 1 0 1 2 3 0 0 0 0 MURUR'U'M2URU'r'U')], # HFlip
[qw(3 1 2 0 1 0 1 0 0 3 2 1 2 0 0 1 R2'U'RFR'UR2U'R'F'R)], # DiagT
[qw(0 3 2 1 0 0 0 0 2 3 1 0 2 1 0 0 FRUR'U'RU'R'U'RUR'F')], # TastyT
[qw(1 2 0 3 0 1 1 0 0 2 3 1 2 0 1 0 FRU'RDR'U2RD'R2'U'F')], # WeirdT
[qw(3 0 1 2 1 0 1 0 2 0 3 1 2 0 0 1 RUR'UF'L'ULFU'RU'R')], # PowerT
[qw(1 3 2 0 1 0 0 1 1 3 2 0 1 0 0 2 L2F2R'FRF2L2U2LF'L')], # CheckU
[qw(0 3 1 2 0 0 1 1 3 0 2 1 1 2 2 1 rUR'URUL'UR'U'LUM)], # RandomH
[qw(0 1 2 3 0 0 0 0 0 1 2 3 0 2 0 1 F'RD2R'FU2F'RD2R'FU2)], # PureL
[qw(0 1 2 3 0 0 0 0 0 3 1 2 0 2 0 1 R2DR'U2RD'R'U2R')], # L3
[qw(1 2 0 3 0 1 1 0 0 3 1 2 2 1 0 0 FR2DR'URD'R2U'F')], # E2
[qw(3 1 0 2 1 0 1 0 0 3 1 2 2 1 0 0 fR2DR'URD'R2U'f')], # FatE2
[qw(2 1 3 0 1 0 1 0 1 2 0 3 0 1 2 0 F'LFL'U'L'UL)], # D5
[qw(2 0 3 1 0 0 0 0 3 1 2 0 1 2 1 2 RU'L'UR'ULUL'UL)], # G5
[qw(0 1 2 3 0 1 0 1 1 0 3 2 1 1 2 2 R'U'RU'R'UF'UFR)], # G6
[qw(1 2 3 0 0 1 0 1 0 2 1 3 1 0 0 2 FRU'R'U'L'U'LULF'L2UL)], # F4
[qw(1 2 0 3 0 1 1 0 2 0 1 3 2 2 1 1 FRUR'U'RF'rUR'U'r')], # H2Opp
[qw(1 2 0 3 0 1 1 0 0 1 2 3 0 0 0 0 RUR'U'M'URU'r')], # ezell A
[qw(3 0 2 1 1 1 0 0 0 1 2 3 0 0 0 0 M'UMU2M'UM)], # ezell 1
[qw(2 1 3 0 0 0 0 0 2 3 0 1 1 1 2 2 Bruno)], # Bruno
[qw(0 2 1 3 0 0 0 0 0 2 1 3 0 0 0 0 JPerm)]
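To give a rough idea of what the generator does, here is a minimal Python sketch (the actual generator is Perl, and this is only my reading of the table, not its code). It assumes the four groups of four numbers in each row are edge permutation, edge orientation (mod 2), corner permutation and corner orientation (mod 3), read in the "replaced-by" convention, and it ignores AUF between the two algs; the TPerm and HPerm rows are taken from the table above, everything else is illustrative.

```python
from itertools import product

def compose(a, b):
    """State after performing alg `a` then alg `b`.

    Convention (an assumption): p[i] is the slot whose piece ends up in
    slot i, and o[i] is the twist added to the piece arriving in slot i.
    """
    aep, aeo, acp, aco = a
    bep, beo, bcp, bco = b
    ep = tuple(aep[bep[i]] for i in range(4))
    eo = tuple((aeo[bep[i]] + beo[i]) % 2 for i in range(4))
    cp = tuple(acp[bcp[i]] for i in range(4))
    co = tuple((aco[bcp[i]] + bco[i]) % 3 for i in range(4))
    return (ep, eo, cp, co)

SOLVED = ((0, 1, 2, 3), (0, 0, 0, 0), (0, 1, 2, 3), (0, 0, 0, 0))

# Two rows from the table above, split into the four groups of four.
TPERM = ((0, 3, 2, 1), (0, 0, 0, 0), (0, 2, 1, 3), (0, 0, 0, 0))
HPERM = ((2, 3, 0, 1), (0, 0, 0, 0), (0, 1, 2, 3), (0, 0, 0, 0))

# A pair (A, B) solves the LL case that is the inverse of compose(A, B);
# bucketing every pair by its composed state yields the raw data table.
pool = {"TPerm": TPERM, "HPerm": HPERM}
cases = {}
for (na, a), (nb, b) in product(pool.items(), repeat=2):
    cases.setdefault(compose(a, b), []).append((na, nb))
```

With the full pool plugged in (and AUFs accounted for), `cases` would be exactly the case → combinations table described earlier.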
While the combinations for each case may appear to be random at first, reorganisation of the data brings order to this madness.
Instead of learning algs organised by cases, I propose the cases be organised by algs.
So for example, you would click on the Sune page, and see a list of all the cases that are solved with a Sune. You learn all the cases that correspond to each alg.
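As a sketch of that reorganisation (the case labels and pairings below are made-up placeholders; only the idea is from the post), turning the raw case → combinations table into per-alg pages is a simple dictionary inversion:

```python
# Hypothetical raw table: case -> list of (first alg, second alg) pairs.
raw = {
    "case_017": [("Sune", "TPerm"), ("Niklas", "UPerm")],
    "case_112": [("Sune", "HPerm")],
    "case_205": [("Bruno", "JPerm")],
}

# Build a "page" per first alg: every case it starts a solution for,
# together with the finishing alg for that case.
pages = {}
for case, pairs in raw.items():
    for first, second in pairs:
        pages.setdefault(first, []).append((case, second))

# pages["Sune"] is the Sune page described above:
# [("case_017", "TPerm"), ("case_112", "HPerm")]
```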
This still seems difficult, but there are many shortcuts that can be used to improve this.
You don't actually have to learn all the cases to be able to solve them. For example, if a single alg can be used to solve some given CLL case combined with various 3-cycles of edges, then instead of listing all of those cases you can simply list one case with the edge stickers greyed out. This phenomenon isn't rare, and combinations for algs can be chosen in a way that increases the case reduction opportunities.
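A rough sketch of that reduction, with placeholder case names (the real CLL labels and edge cases would come from the generated table): group an alg's listed cases by their corner part, and grey out the edges whenever every edge cycle appears.

```python
# Stand-ins for the edge 3-cycle cases an alg might absorb.
EDGE_CYCLES = {"UP_a", "UP_b", "ZP"}

# Hypothetical page data: the (corner case, edge case) pairs one alg
# (here "Sune") is listed for.
listed = {
    "Sune": {("CLL_3", e) for e in EDGE_CYCLES} | {("CLL_7", "UP_a")},
}

def reduce_cases(cases):
    """Collapse entries whose corner case appears with every edge cycle."""
    by_corner = {}
    for corner, edge in cases:
        by_corner.setdefault(corner, set()).add(edge)
    out = []
    for corner, edges in sorted(by_corner.items()):
        if edges == EDGE_CYCLES:
            out.append((corner, "*"))  # edge stickers greyed out
        else:
            out.extend((corner, e) for e in sorted(edges))
    return out

print(reduce_cases(listed["Sune"]))  # [('CLL_3', '*'), ('CLL_7', 'UP_a')]
```

Here four listed cases collapse to two entries; choosing combinations so that corner classes cover whole sets of edge cycles is what creates these opportunities.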
Each list of cases for an alg could be broken down into smaller groups, but as I haven't done the basic organisation yet (algs -> cases), I don't know what exactly will be the best way to present the data to be easily learnt.
The raw data table of all cases is being generated at the moment, so I thought I'd write this post for advice on the next stage while I waited. Here is an example of the output (ELL), which I will convert to look something like this once every LL position has been solved. Hopefully tomorrow.
Advice and suggestions for improvements would be greatly appreciated, mostly in regard to organising the data in a learnable format.
I was going to post this in the private lounge, but thought that there may be people outside of it that could help.
Please do not reply with endless variations on the idea as is the norm with threads like this. They are probably equally valid ideas, but I would like to proceed without changing the spec unless it is an obvious improvement.