
I think it is great that progress is finally being made on this idea, but I don't see why we try to cover all LL cases. It is ridiculously easy to make sure you finish F2L with the last layer only having 2 unoriented edges every time. It is also easy to make sure you always have at least 1 corner oriented. Phasing is also a really easy option to reduce cases. I know you can't do all of these at once, but any 1 of them isn't hard at all.

I would also like to see this done with ZBLL. Of course you could always orient 1 corner or do phasing here as well.

More than 19 is not a problem at all. When I tried this I used about 30. We want to minimise the number of 'extra' algs as much as possible (they don't look very good).

The problem I see is that by choosing algorithms that are especially easy/quick to perform, you are probably choosing algorithms that are similar to each other in some way that is hard to define.

By way of analogy, if one considered all the paths one might walk from one's house, and ranked them all on how easy they are to walk, and decided to use only the paths deemed especially easy, the result would likely be that you always proceed downhill, and have no way to reach the destinations that are uphill from your house. Within this analogy, my program would suggest uphill paths to you, which you would then find distasteful because they are harder to walk than the ones to which you have become accustomed.

Okay, over the last couple days I have been running scenarios using this list. I had to translate the notation because my program assumes a stationary core, and your wide turns and slice moves therefore make no sense to it, so when you see that some of the first 20 algorithms look different from yours, that is why. They're not actually different, just expressed in a more rigid language.
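For anyone curious, here is a rough sketch of what that translation can look like (this is not my actual code; the identities Rw = x L, M = x' R L', and so on are the standard ones): expand each wide or slice move into a whole-cube rotation plus face turns, then push every rotation off the end of the sequence by relabeling the face turns it passes over. An algorithm is expressible in a stationary-core language only when the net rotation comes out to identity.

```python
# A sketch (not the actual program) of translating wide/slice notation
# into stationary-core face turns. Rotations are folded into an
# accumulated orientation map and every face turn is relabeled through it.

# Where each whole-cube rotation sends each face.
ROT = {
    "x": {"U": "B", "B": "D", "D": "F", "F": "U", "R": "R", "L": "L"},
    "y": {"F": "L", "L": "B", "B": "R", "R": "F", "U": "U", "D": "D"},
    "z": {"U": "R", "R": "D", "D": "L", "L": "U", "F": "F", "B": "B"},
}

# Standard expansions of wide and slice moves into rotation + face turns.
EXPANSIONS = {
    "Rw": ["x", "L"],  "Lw": ["x'", "R"],  "Uw": ["y", "D"],
    "Dw": ["y'", "U"], "Fw": ["z", "B"],   "Bw": ["z'", "F"],
    "M": ["x'", "R", "L'"], "E": ["y'", "U", "D'"], "S": ["z", "F'", "B"],
}

def invert(tok):
    return tok[:-1] if tok.endswith("'") else tok + "'"

def expand(token):
    """Expand one move token into rotations plus face turns."""
    base = token.rstrip("'2")
    suffix = token[len(base):]
    if base not in EXPANSIONS:
        return [token]
    seq = EXPANSIONS[base]
    if suffix == "2":
        return seq + seq
    if suffix == "'":
        return [invert(t) for t in reversed(seq)]
    return list(seq)

def to_fixed_core(alg):
    """Rewrite alg without wide/slice moves; fails if a net rotation remains."""
    acc = {f: f for f in "UDLRFB"}          # accumulated rotation so far
    out = []
    for token in alg.split():
        for t in expand(token):
            base = t.rstrip("'2")
            suf = t[len(base):]
            if base in ROT:                 # fold the rotation into acc
                m = ROT[base]
                if suf == "'":
                    m = {v: k for k, v in m.items()}
                for _ in range(2 if suf == "2" else 1):
                    acc = {f: m[acc[f]] for f in acc}
            else:                           # relabel the face turn through acc
                inv = {v: k for k, v in acc.items()}
                out.append(inv[base] + suf)
    assert all(acc[f] == f for f in acc), "net rotation left over"
    return " ".join(out)
```

For example, `to_fixed_core("Rw U Rw'")` yields `L F L'`, and the M-slice H-perm comes out as nineteen plain face turns with no rotation left over.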

When your list of 20 algorithms is forced on the program, it is clearly unable to do 2LLL using 26 or fewer. It gets tantalizingly close to success with 27, but thus far it has not quite managed it. 28 appears to be easy for it. Here are three sets of 28, each of which begins with your list of 20 and is sufficient for 2LLL:

Spoiler: Set #1

R U R' U R U2 R' (7f)

R U R' U R U' R' U R U2 R' (11f)

L F R' F R F2 L' (7f)

R U R2 F R F2 U F (8f)

R' U2 R2 U R2 U R U' R U' R' (11f)

F R' F' R U2 R U2 R' (8f)

F R U R' U' F' (6f)

F R U R' U' R U R' U' F' (10f)

R U' L' U R' U' L (7f)

R B' R F2 R' B R F2 R2 (9f)

R2 L2 D' R L' F2 R' L D' R2 L2 (11f)

R2 L2 D R2 L2 U2 R2 L2 D R2 L2 (11f)

R U R' U' R' F R F' (8f)

L F R' F' L' F R F' (8f)

R' F R U R' U' F' U R (9f)

R' U' R' F R F' U R (8f)

F' L F L' U' L' U L (8f)

R U R' U' R' L F R F' L' (10f)

R' L F R L' U2 R' L F R L' (11f)

R U2 R2 U' R2 U' R2 U2 R (9f)

U' R U' R F2 U' F R F' R' U2 R U' F2 R2 (15f)

U2 R' U R U' R2 F R2 U R' U' F' R (13f)

U F U2 F' U' R' F R U' R' F' R U (13f)

U2 F' U2 F U2 F2 U F' R' F U' F U F R U2 (16f)

U R2 F2 R2 F U F' R' U' F' U R' F2 R2 U (15f)

F2 U' F2 R U' R2 F' R2 U' R' F R U2 R' U' (15f)

U2 F R' F' U' F R' U' R' U R U R2 U R' U' F' (17f)

R2 U' F' R' U F' U' F U' R U2 F2 R F' U2 R (16f)

Spoiler: Set #2

R U R' U R U2 R' (7f)

R U R' U R U' R' U R U2 R' (11f)

L F R' F R F2 L' (7f)

R U R2 F R F2 U F (8f)

R' U2 R2 U R2 U R U' R U' R' (11f)

F R' F' R U2 R U2 R' (8f)

F R U R' U' F' (6f)

F R U R' U' R U R' U' F' (10f)

R U' L' U R' U' L (7f)

R B' R F2 R' B R F2 R2 (9f)

R2 L2 D' R L' F2 R' L D' R2 L2 (11f)

R2 L2 D R2 L2 U2 R2 L2 D R2 L2 (11f)

R U R' U' R' F R F' (8f)

L F R' F' L' F R F' (8f)

R' F R U R' U' F' U R (9f)

R' U' R' F R F' U R (8f)

F' L F L' U' L' U L (8f)

R U R' U' R' L F R F' L' (10f)

R' L F R L' U2 R' L F R L' (11f)

R U2 R2 U' R2 U' R2 U2 R (9f)

R' F2 R2 U2 R' U F2 U' F2 U' R U' R2 F2 R (15f)

F' U R' F2 U2 F' U2 F' U2 F2 U2 F R U' F (15f)

U2 R2 F2 U R' U2 R F R' F' U F2 R' U R' U2 (16f)

R U R' U F R2 F2 R' U2 R F2 R F' R U (15f)

U2 F' U F U R' U' F' U F2 R F2 U2 F (14f)

F R' F' R U' R' U' F U R U' F' R U2 R' (15f)

F' U' R U2 F2 R2 F' R2 U2 R U R' F' R' F (15f)

F U' F' U2 F U F' R U2 F R' F R F2 U2 R' (16f)

Spoiler: Set #3

R U R' U R U2 R' (7f)

R U R' U R U' R' U R U2 R' (11f)

L F R' F R F2 L' (7f)

R U R2 F R F2 U F (8f)

R' U2 R2 U R2 U R U' R U' R' (11f)

F R' F' R U2 R U2 R' (8f)

F R U R' U' F' (6f)

F R U R' U' R U R' U' F' (10f)

R U' L' U R' U' L (7f)

R B' R F2 R' B R F2 R2 (9f)

R2 L2 D' R L' F2 R' L D' R2 L2 (11f)

R2 L2 D R2 L2 U2 R2 L2 D R2 L2 (11f)

R U R' U' R' F R F' (8f)

L F R' F' L' F R F' (8f)

R' F R U R' U' F' U R (9f)

R' U' R' F R F' U R (8f)

F' L F L' U' L' U L (8f)

R U R' U' R' L F R F' L' (10f)

R' L F R L' U2 R' L F R L' (11f)

R U2 R2 U' R2 U' R2 U2 R (9f)

U' F R' F' R U2 R' U' R2 U' R2 U2 R U' (14f)

U2 F' U R' F R U' F' U' F' U F' R' F' R U (16f)

R' F R2 F' U2 F' U2 F R' U2 R' F' U' F U R (16f)

U R2 F' R U R U2 R' F2 R' F2 R U R' F R2 (16f)

U2 F U F R' F' U' F U F' R F U' F2 (14f)

U' R' F R F2 U R' U' R F2 R' F' U R U' (15f)

F2 R F' U' R' F' U F2 R' F2 R' U2 R F R F' (16f)

U' R' U R U2 R2 U' F' U F R U R U' (14f)

In each case, the first twenty are the ones you provided, in some cases re-translated into a form the program will understand. The fact that it cannot improve on 28 highlights just how thematically similar your algorithms are to each other.

I think it is great that progress is finally being made on this idea, but I don't see why we try to cover all LL cases. It is ridiculously easy to make sure you finish F2L with the last layer only having 2 unoriented edges every time.

I think the reason is that this is the Puzzle Theory subforum, not the Puzzle Practice subforum, and theorists tend to be more interested in cases that seem more conceptually pure and complete. For this reason, an analysis of the last layer is less interesting than an analysis of the cube as a whole, and an analysis of an arbitrary subset of last layer cases is even less interesting than that. I will take the idea under consideration, however.

The problem I see is that by choosing algorithms that are especially easy/quick to perform, you are probably choosing algorithms that are similar to each other in some way that is hard to define.

I hadn't even considered this - I think when I made my initial list I tried to include algs that were 'versatile' but hadn't considered distinctiveness. Thanks for the insight.

Okay, over the last couple days I have been running scenarios using this list. I had to translate the notation because my program assumes a stationary core, and your wide turns and slice moves therefore make no sense to it, so when you see that some of the first 20 algorithms look different from yours, that is why. They're not actually different, just expressed in a more rigid language.

When your list of 20 algorithms is forced on the program, it is clearly unable to do 2LLL using 26 or fewer. It gets tantalizingly close to success with 27, but thus far it has not quite managed it. 28 appears to be easy for it. Here are three sets of 28, each of which begins with your list of 20 and is sufficient for 2LLL:

I hadn't even considered this - I think when I made my initial list I tried to include algs that were 'versatile' but hadn't considered distinctiveness. Thanks for the insight.

You're welcome. For what it's worth, I have identified a significant flaw in your list. Algorithms #13 and #17 are mirror-inverses of each other, so you can remove #17 and have no effect on the coverage. This reduces the number of algorithms needed from 28 to 27.

Today I've been working on an experimental scoring system for redundancy in a forced algorithm list. It scores one point for each way that it can find to express an algorithm from the list in terms of two algorithms from the same list. I tried running it on your list (with #17 removed) and it got a score of 844. For comparison, when I feed it my program's best list of 19 algorithms, the score is just 116.
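In the abstract, the scoring rule can be sketched like this (a simplified stand-in: here each algorithm's effect is modeled as a permutation tuple, whereas the real program compares actual cube states and has to account for AUF):

```python
# Simplified sketch of the redundancy score: one point for each way to
# express a list element as a composition of two list elements.
# Algorithm "effects" are modeled as permutation tuples (index -> image),
# which is an assumption made for this sketch only.

def compose(p, q):
    """Effect of applying permutation p first, then q."""
    return tuple(q[p[i]] for i in range(len(p)))

def redundancy_score(effects):
    """Count ordered pairs (a, b) whose composition is itself in the list."""
    table = set(effects)
    return sum(1 for a in effects for b in effects if compose(a, b) in table)
```

On a toy list containing a 3-cycle and its square, the score is 2, since each element is the square of the other.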

But wait, there's more. After days of processing, and a bit of luck, my program has finally managed to find an n=27 solution for your original list of forced algorithms. Removing the offending #17, this means we now have an n=26 solution which includes your favorite algorithms...

Spoiler: ...and here it is:

R U R' U R U2 R' (7f)

R U R' U R U' R' U R U2 R' (11f)

L F R' F R F2 L' (7f)

R U R2 F R F2 U F (8f)

R' U2 R2 U R2 U R U' R U' R' (11f)

F R' F' R U2 R U2 R' (8f)

F R U R' U' F' (6f)

F R U R' U' R U R' U' F' (10f)

R U' L' U R' U' L (7f)

R B' R F2 R' B R F2 R2 (9f)

R2 L2 D' R L' F2 R' L D' R2 L2 (11f)

R2 L2 D R2 L2 U2 R2 L2 D R2 L2 (11f)

R U R' U' R' F R F' (8f)

L F R' F' L' F R F' (8f)

R' F R U R' U' F' U R (9f)

R' U' R' F R F' U R (8f)

R U R' U' R' L F R F' L' (10f)

R' L F R L' U2 R' L F R L' (11f)

R U2 R2 U' R2 U' R2 U2 R (9f)

U F' U' F2 R' F' R2 U' R' U' R' U' F' U F R (16f)

U F U2 F2 U' R' F2 R F R U' R' U2 F U2 F' (16f)

F U2 R' U' R F' R' U2 F U F' U' R U2 (14f)

U R2 F' R U R U2 R' F2 R' F2 R U R' F R2 U' (17f)

U R2 U' R F2 R2 F' R U2 F U F' U' R F2 R (16f)

F R2 U2 F R F' R2 U' F' U2 F R2 U' R2 U F' (16f)

U' F2 R2 F' U R' U' F' R' U R F2 R2 F2 (14f)

The computer-generated algorithms are the seven listed last.

If you do, you'll probably want to use the n=26 list given above. I'm certain the total algorithm count cannot be lowered beyond this point while still including the 19 distinct algorithms you favor.

*Sigh*...You are not the first person to ask for my code since I began participating in this thread, and you may not be the last, so I will publicly voice my opinion about this issue.

There is a litany of reasons that I do not at this time intend to distribute the program to others, including but not limited to these:

It is built using four different reusable code modules that I also wrote for my own use, and it would be impossible to distribute the program without also distributing those.

I am secretive by nature.

I don't like to show my code to others because it is ugly.

On the rare occasion that I do distribute my code, it is code that was written from the ground up with distribution in mind. This is not that code.

The user interface barely even exists. (When I want to change a setting, I have to recompile it. Yes, it's just that bad.)

I don't need the extra liability.

In the wrong hands, it could be used for evil. (Evil cubing? Well, maybe not...)

It just makes me uncomfortable.

I figure I can work with you (and perhaps others) on this forum, whenever I have the time and inclination. I would prefer to keep said cooperation in public threads, however. I just finished installing a new air conditioner, so I should hopefully be able to carry on processing until the weather gets truly vicious.

Personally, I find it hard to believe that it could be fast in practice. I say this because it seems to me that it would be easier to memorize more algorithms (i.e. full OLL and full PLL) than it would be to deal with the issue of how to learn to rapidly recognize which one of a smaller set of algorithms to apply in a system lacking the orderly distinction between orientation and permutation. But if you want to give it a try, by all means, have at it. If it catches on, you can credit me as your assistant.

If you do, you'll probably want to use the n=26 list given above. I'm certain the total algorithm count cannot be lowered beyond this point while still including the 19 distinct algorithms you favor.

Thanks for the updated list. I need to sink some time into looking deeper into the results.

Joey brought up something interesting at the competition this weekend. Is 19 the minimum number of algorithms required to solve a two look last layer, or is it the minimum number of last layer algorithms to solve a two look last layer?

That is, could algorithms that influence F2L produce a shorter list?

Personally, I find it hard to believe that it could be fast in practice. I say this because it seems to me that it would be easier to memorize more algorithms (i.e. full OLL and full PLL) than it would be to deal with the issue of how to learn to rapidly recognize which one of a smaller set of algorithms to apply in a system lacking the orderly distinction between orientation and permutation. But if you want to give it a try, by all means, have at it. If it catches on, you can credit me as your assistant.

Easier, certainly - but potentially not faster. The only difficulty with a system like this lies in the learning of it. I have ideas and tricks that make it easier, but have never managed to document it to the extent that it is easily learnable. I think it is possible, but again, time will need to be sunk in to actually test that idea.

I think the first thing I need to do is recode my 2LL solver. Added to the todo list :U

I think it is great that progress is finally being made on this idea, but I don't see why we try to cover all LL cases. It is ridiculously easy to make sure you finish F2L with the last layer only having 2 unoriented edges every time.

Okay, I've been working on this problem. I wrote in an option for edge control, and I have managed, after days of processing, to come up with two sets of 17 algorithms, each of which, together with their mirrors, inverses, and mirror-inverses, is sufficient to solve the last layer if and only if partial edge control has been used, such that at least two edges are correctly oriented. Here they are.

It is also easy to make sure you always have at least 1 corner oriented. Phasing is also a really easy option to reduce cases. I know you can't do all of these at once, but any 1 of them isn't hard at all.

I would also like to see this done with ZBLL. Of course you could always orient 1 corner or do phasing here as well.

Sorry, but I'm actually relatively new to the speedcubing community, so I'm not well-versed in all the various solution systems and acronyms of which you speak.

Joey brought up something interesting at the competition this weekend. Is 19 the minimum number of algorithms required to solve a two look last layer, or is it the minimum number of last layer algorithms to solve a two look last layer?

That is, could algorithms that influence F2L produce a shorter list?

No offence to your friend intended, but I think that this is impossible. It's a shame that I can't think of any way to conclusively prove it, but I can certainly think of ways to statistically suggest it.

Let's say you're doing 2LLL using a set of n algorithms and their mirrors/inverses/mirror-inverses, all of which leave the F2L as they found it. In this case, you can apply 4 types of AUF, followed by one algorithm any one of 4 ways, followed by one of 4 types of AUF, and at this point it may already be solved. If it isn't, then you do one more algorithm any one of 4 ways and one more AUF. Taking into account the possibility of a complete LL skip, the total theoretical limit to coverage for that algorithm set is 1024*n^2+64*n+4. (Obviously it won't really work out that way because some combinations of algorithms will inevitably have the same effect as other combinations of algorithms.)

Now let's imagine a set of algorithms, half of which disturb the F2L and half of which do not. In this case, you can never apply one algorithm from each half because the result will always be a messed up F2L. This fact cuts your options in half right away, and there are other limiting factors which should become clear in a moment.

Now let's imagine a set of algorithms that all disturb the F2L by swapping some pieces in the F2L with some pieces in the LL. For maximum versatility and mobility, and to avoid the above pitfall, they would all have to swap the exact same F2L pieces to the exact same positions in the LL. The result of this, however, is that you lose the opportunity to do AUF between the two algorithms, resulting in a 75% cut to versatility and mobility. Furthermore, it would be a complete waste because one can just mentally swap the F2L positions and the LL positions in question, and thereby see that there must exist a set of n LL-only algorithms which have the same effect, but which would allow AUF in between.

Now let's imagine a set of algorithms that all disturb the F2L only by rearranging the F2L within itself, leaving all LL pieces on the LL and all F2L piece in the F2L. In the simplest case, imagine that all the algorithms just flip the FR edge and do nothing else to the F2L. This is better than the above scenarios, because you can use any two algorithms and you can still do AUF between them, but the coverage should still be smaller because it is now impossible to ever solve the LL using only one algorithm. This removes the middle 64*n term of the original polynomial, leaving a result of 1024*n^2+4 instead of 1024*n^2+64*n+4.

In every case there is a reduction in the theoretical mobility and versatility of the set by having it disturb the F2L. The only way I can possibly see it improving *ANYTHING* is if there were some unseen advantage to breaking LL parity between algorithms, but I can't imagine any reason why this would be so.
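The counting above is easy to mechanize. Here is a quick sketch of the bound (the helper name is mine, invented for illustration):

```python
# Theoretical ceiling on distinct LL situations handled by n algorithms
# (plus their mirrors/inverses/mirror-inverses):
#   two-alg paths: 4 AUFs x 4n variants x 4 AUFs x 4n variants x 4 AUFs,
#   one-alg paths: 4 x 4n x 4 = 64n (only if a single alg can finish),
#   plus 4 for an AUF-only skip.

def coverage_bound(n, one_alg_can_finish=True):
    two_look = 1024 * n * n
    one_look = 64 * n if one_alg_can_finish else 0
    return two_look + one_look + 4
```

Incidentally, coverage_bound(7) = 50628 falls short of the 62208 possible cases, while coverage_bound(8) = 66052 clears it, which is where the theoretical lower bound of 8 algorithms comes from.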

Easier, certainly - but potentially not faster. The only difficulty with a system like this lies in the learning of it. I have ideas and tricks that make it easier, but have never managed to document it to the extent that it is easily learnable. I think it is possible, but again, time will need to be sunk in to actually test that idea.

I'm not talking about the difficulty of learning it, but of speedy recognition. I figure that the speed of a solve relies on three things: speed of turning, speed of recognition, and efficiency of the solution system in reducing the number of turns. Some systems are very efficient in terms of turn count, but very slow in terms of recognition. Some are super-fast in terms of recognition, but require significantly more turns. The goal is a system that both reduces turn count AND aids rapid recognition, and I suspect that these efforts to reduce the number of algorithms actually work against that goal, slowing recognition and increasing turn count.

Don't get me wrong. I'm still interested from a theory standpoint, but I don't believe the result could ever be practical for me. I have trouble just finding the next F2L pair.

Perhaps this is a dumb question, but what does it do? "LL Solver" implies that it outputs a solution for a given LL position, but the 2 suggests that it doesn't. Does it perhaps try to solve a LL position using just two algorithms chosen from a provided list?

In every case there is a reduction in the theoretical mobility and versatility of the set by having it disturb the F2L. The only way I can possibly see it improving *ANYTHING* is if there were some unseen advantage to breaking LL parity between algorithms, but I can't imagine any reason why this would be so.

I'm not talking about the difficulty of learning it, but of speedy recognition. I figure that the speed of a solve relies on three things: speed of turning, speed of recognition, and efficiency of the solution system in reducing the number of turns. Some systems are very efficient in terms of turn count, but very slow in terms of recognition. Some are super-fast in terms of recognition, but require significantly more turns. The goal is a system that both reduces turn count AND aids rapid recognition, and I suspect that these efforts to reduce the number of algorithms actually work against that goal, slowing recognition and increasing turn count.

This system would be different from LL systems with large algorithm subsets. I'm inclined to think that the problem they have is algorithm recall, not recognition. The difficulty lies in remembering an alg from a pool of 400. There are more than just those two variables at play, and I think recognition isn't the problem you originally thought. There are also tricks with systems like this that can aid recognition.

You can see more reasoning in my thread.

I think the biggest problem at the moment (aside from having to reorganise everything) is the 'bad' algs. I've considered generating more speed-optimal solutions, but haven't tried it yet.

Don't get me wrong. I'm still interested from a theory standpoint, but I don't believe the result could ever be practical for me. I have trouble just finding the next F2L pair.

Method development is reaching a point where you need to push into using more abstract concepts and trying ambitious weird things. Step concatenation and other standard structures have maxed out their usefulness and we need to do something new and different to improve on what we already have. I believe if I can circumvent problems with techniques like this (in this case with clever case sorting) they can prove to be a viable alternative.

Perhaps this is a dumb question, but what does it do? "LL Solver" implies that it outputs a solution for a given LL position, but the 2 suggests that it doesn't. Does it perhaps try to solve a LL position using just two algorithms chosen from a provided list?

Pretty much. I used an old version to automate the creation of this, but it isn't very useful at this point. I think a rewrite would be beneficial, and there's extra stuff I wanna add to it.

UPDATE: After interminable processing, I have managed to find a single case where 18 algorithms (and their mirrors, inverses, and mirror-inverses) are sufficient to solve any last-layer position in two looks.

Ladies and gentlemen, I give you the smallest complete 2LLL algorithm set in history!

U2 R2 U' R' F U R U' R2 F' R2 F R F' R2 U' (16f)

R U2 R2 U2 R2 U R' F R' F' U R U2 R U R' (16f)

F U R' U' F' U' F U R U' F' R' U2 R (14f)

U2 F U2 F' R F R' U2 R F' R' U2 (12f)

U2 R' U' F U R U' R' F' R U (11f)

U' R' F R U2 F' U' F U' F2 U2 F (12f)

R' U' R F U R2 U' F' R U R' U' R' U (14f)

F U R2 U' F' U R' U' F R' F' R' U R U' R' (16f)

F R2 U R' U' R' U2 F R F' R' U' R U' R' F' (16f)

U2 R U2 R' F R' F' R U' F' U' F U R U2 R' (16f)

U R' U2 R U R' F U R U' R' F' R (13f)

U2 F' U R' U F2 U F2 U' F2 R U2 F (13f)

U' R' U' F U R F U F2 U' F' U' F2 U2 F (15f)

U R' U' R F R' U R U' R' F' R U2 R U2 R' (16f)

R U2 R2 F' U' R' U R U' R' U' R F R U R (16f)

U R2 U' F U R2 U' R' F' R2 U2 F R F' U2 R' U (17f)

UPDATE: After interminable processing, I have managed to find a single case where 18 algorithms (and their mirrors, inverses, and mirror-inverses) are sufficient to solve any last-layer position in two looks.

Ladies and gentlemen, I give you the smallest complete 2LLL algorithm set in history!

Unfortunately, instructions for its use would basically be a list of 15551 distinct unsolved upper layer states, each followed by a number from 1 to 18 and an optional M (for mirror) and/or I (for Inverse), indicating which algorithm to apply in that situation. One would AUF until they could find their LL state in the list, and then apply the algorithm indicated. If it was not already solved, they would repeat the process once, and the LL would be solved once they did a final AUF.

This would be a long document, and too much, I think, for this forum. Especially when I don't know how to concisely describe a last layer state.
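For the record, the control flow the document implies is simple. Here is a sketch in which the state encoding and the helpers (auf, apply_entry, the table) are hypothetical stand-ins; only the procedure itself follows the description above:

```python
# Sketch of the lookup procedure: AUF until the state appears in the
# table, apply the indicated algorithm, repeat once, then do a final AUF.
# All helpers and encodings here are stand-ins, not the real document.

def solve_2lll(state, table, apply_entry, auf, solved):
    """table maps a listed LL state to an entry such as (7, 'MI'),
    meaning: apply the mirror-inverse of algorithm #7."""
    plan = []
    for _ in range(2):                      # at most two looks
        for _ in range(4):                  # AUF until the state is listed
            if state == solved or state in table:
                break
            state = auf(state)
            plan.append("U")
        if state == solved:
            return plan
        entry = table[state]
        plan.append(entry)
        state = apply_entry(state, entry)
    for _ in range(3):                      # final AUF
        if state == solved:
            break
        state = auf(state)
        plan.append("U")
    return plan
```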

Unfortunately, instructions for its use would basically be a list of 15551 distinct unsolved upper layer states, each followed by a number from 1 to 18 and an optional M (for mirror) and/or I (for Inverse), indicating which algorithm to apply in that situation. One would AUF until they could find their LL state in the list, and then apply the algorithm indicated. If it was not already solved, they would repeat the process once, and the LL would be solved once they did a final AUF.

This would be a long document, and too much, I think, for this forum. Especially when I don't know how to concisely describe a last layer state.

Ah, I see, so it's a theoretical / computational set. Still, it is good to know what the lower bound for a human solution can be; it could even be part of an LBL computer-solving method.

If you have other reasons as to why you do not want to post these instructions with your alg set (perhaps you want to publish this result in a journal?) then I understand completely. However, we have no reason to believe that this is true (I personally believe your set does do what you say it does) unless you prove it by giving us instructions on how to use your set of 18 algs to solve every last layer case.

Maybe others on this forum have other ideas about how to abstractly represent a 3x3x3 last layer case, but this is mine.

Spoiler

You can post all last layer cases in a form like the following:

C{1,2,3,4}|E{1,2,3,4}

, where C stands for "corners" and E stands for "edges". Pick a convention in which to number a cube (here's my 3x3x3 numbered cube, for example; note that I gave credit to the creators of CubeTwister for this image, even though I made it, just so you know that I'm not them). I assume you already treat the cube as a numbered cube anyway in order to program this software.

For corner positions, let 1+ represent a corner twisted 120 degrees clockwise from oriented and 1- a corner twisted 120 degrees counterclockwise. For edge positions, let 1+ represent a flipped edge. (If no + or - appears to the right of a number in a list, we assume the piece is correctly oriented.)

For example, the last layer case generated by the first algorithm in your list of 18 generating algorithms can be represented as:
C{1+,2,4,3-}|E{1,4+,3,2+} on my 3x3x3 numbered cube.
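For what it's worth, the notation is easy to parse mechanically. A quick sketch (my throwaway helper, with +/-/no mark mapped to +1/-1/0):

```python
import re

def parse_case(s):
    """Parse the proposed 'C{...}|E{...}' notation into
    {piece type: (permutation, orientations)}, where '+' -> +1,
    '-' -> -1, and no mark -> 0 (correctly oriented)."""
    out = {}
    for piece, body in re.findall(r"([CE])\{([^}]*)\}", s):
        perm, ori = [], []
        for item in body.split(","):
            m = re.fullmatch(r"(\d+)([+-]?)", item.strip())
            perm.append(int(m.group(1)))
            ori.append({"": 0, "+": 1, "-": -1}[m.group(2)])
        out[piece] = (perm, ori)
    return out
```

Running it on the example above gives `([1, 2, 4, 3], [1, 0, 0, -1])` for the corners and `([1, 4, 3, 2], [0, 1, 0, 1])` for the edges.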

You do not have to post these in a post directly. Put them all in a txt file and attach it to a post, or if it is too large to be put on your share of the forum's attachment storage space, then upload it to an external file hosting site you trust (or your own website) and then provide us with a link.

UPDATE: After interminable processing, I have managed to find a single case where 18 algorithms (and their mirrors, inverses, and mirror-inverses) are sufficient to solve any last-layer position in two looks.

I have another request; would it be much bother for me to suggest alternative lists to see if I can find a better subset that includes a group of forced algs? My intention is to minimise arbitrary algs. I don't know how much effort/processing time this requires on your part.

However, we have no reason to believe that this is true (I personally believe your set does do what you say it does) unless you prove it by giving us instructions on how to use your set of 18 algs to solve every last layer case.

UPDATE: After interminable processing, I have managed to find a single case where 18 algorithms (and their mirrors, inverses, and mirror-inverses) are sufficient to solve any last-layer position in two looks.

Very cool! Here are some optimal solutions (not counting AUF). Any of these can be rotated by any y rotation to make them nicer. No alg requires more than 14 moves.

Spoiler

F' U' F R2 B' R' U B U R' B' U2 B / F U' F' L' U R' F2 L F L' F L R / F U' F' L' B' R2 F R F' R B U L (13f*)

F R U' R' F D B' R B D' F2 (11f*)

B L' F' L2 B F' R' D2 R B2 F2 (11f*)

F2 L2 F' R2 F L2 F' R2 F' (9f*)

L' R U B U' B' R' U' L / L' U' B U L U' L' B' L (9f*)

F' L F U2 L' U' L U' L2 U2 L (11f*)

F R U R' F2 L' U' L F2 U' F2 U2 F / R' D' F L' U2 L2 F L2 U2 L F' D R / R' U' R F U R2 U' F' R U R' U' R' (13f*)

B' U2 B R B2 R' U2 F R B2 R B' R2 F' / L U B L' B' F' L' F D' U' L U2 L' D / D L U B L' B' F' L' F D' U' L U2 L' / L U B L' F' L' F L B' L' U' L U2 L' / F U R U' R' F D' B L2 B' D F U2 F (14f*)

B' U2 B2 L' B' L' B2 F' L' F L B2 L2 / B' R' U' R' D' R2 U R D R' B2 U2 B' (13f*)

B U2 R U' L' U R' U' L2 U' L' B' (12f*)

B' U2 B U B' R U B U' B' R' B (12f*)

B' U L' U B2 U B2 U' B2 L U2 B / B' U L' D L2 U L2 D' B2 L U2 B (12f*)

F' U2 F2 U F2 L D' L D L U L F (13f*)

B U2 F' L F L' B' L' B2 L' B2 L2 / B' R' F R' B2 U2 B' F2 U2 F' R2 F2 / B' R' F R' B' F' L2 B R2 B' L2 B2 (12f*)

F U' R U L U2 L' R' U2 B' U' F' U2 B / L2 F2 R' F' R F' L2 B2 D' F R2 F' D B2 (14f*)

B2 R F R B2 R B2 R B2 R2 B' F' R' B / B2 R F' R' D2 R' D2 R F2 D2 B F' R B / B2 R F' L' F2 R' F2 L F2 D2 B F' R B (14f*)

R U2 B F R2 B' R' B R' B' F' U2 R' / R U2 B2 U2 F' L' B L' B2 L2 B' F R' / F' U' F R2 B2 L' B' L B' R2 F' U F / F' D' L F2 R2 B' R' B R' F2 L' D F / L F L' R2 B' R' B L R2 F R F2 L' / L U2 L' U' L U' L' B L U L' U' B' (13f*)

R2 D' F' D' F2 D R U2 F U R2 D R' / B' L U L' U' L' B U2 F' L' F U2 L / F U' B2 U R' D B2 D' B2 R F' U' B2 / F2 D' F U2 F' D F2 U R B U B' R' / B' U B U' L F R U R' U2 F' U L' / B' F' L' F R' U F' U' F' L F2 R B (13f*)

If you have other reasons as to why you do not want to post these instructions with your alg set (perhaps you want to publish this result in a journal?) then I understand completely.

However, we have no reason to believe that this is true (I personally believe your set does do what you say it does) unless you prove it by giving us instructions on how to use your set of 18 algs to solve every last layer case.

You do not have to post these in a post directly. Put them all in a txt file and attach it to a post, or if it is too large to be put on your share of the forum's attachment storage space, then upload it to an external file hosting site you trust (or your own website) and then provide us with a link.

"Attachment space"? This is a new term to me. I have been on other forums, but have never seen one with a dedicated attachment storage area before. Intriguing. Having looked into the matter, I believe that, with sufficient compression, I should be able to fit this document just within the file size limits for a .zip file attachment. If all goes well, it will be attached to this message at the end.

I have another request; would it be much bother for me to suggest alternative lists to see if I can find a better subset that includes a group of forced algs? My intention is to minimise arbitrary algs. I don't know how much effort/processing time this requires on your part.

Fire away, just don't hit me with a dozen lists all at once. It'll probably take a couple days of processing to get a semi-reliable result for one. Alternatively, if you'd like, I could take more than one list and just calculate the redundancy scores for each, as mentioned previously, and just process whichever one scores the lowest. It doesn't take long to calculate the redundancy score.

Very cool! Here are some optimal solutions (not counting AUF). Any of these can be rotated by any y rotation to make them nicer. No alg requires more than 14 moves.

Spoiler

F' U' F R2 B' R' U B U R' B' U2 B / F U' F' L' U R' F2 L F L' F L R / F U' F' L' B' R2 F R F' R B U L (13f*)

F R U' R' F D B' R B D' F2 (11f*)

B L' F' L2 B F' R' D2 R B2 F2 (11f*)

F2 L2 F' R2 F L2 F' R2 F' (9f*)

L' R U B U' B' R' U' L / L' U' B U L U' L' B' L (9f*)

F' L F U2 L' U' L U' L2 U2 L (11f*)

F R U R' F2 L' U' L F2 U' F2 U2 F / R' D' F L' U2 L2 F L2 U2 L F' D R / R' U' R F U R2 U' F' R U R' U' R' (13f*)

B' U2 B R B2 R' U2 F R B2 R B' R2 F' / L U B L' B' F' L' F D' U' L U2 L' D / D L U B L' B' F' L' F D' U' L U2 L' / L U B L' F' L' F L B' L' U' L U2 L' / F U R U' R' F D' B L2 B' D F U2 F (14f*)

B' U2 B2 L' B' L' B2 F' L' F L B2 L2 / B' R' U' R' D' R2 U R D R' B2 U2 B' (13f*)

B U2 R U' L' U R' U' L2 U' L' B' (12f*)

B' U2 B U B' R U B U' B' R' B (12f*)

B' U L' U B2 U B2 U' B2 L U2 B / B' U L' D L2 U L2 D' B2 L U2 B (12f*)

F' U2 F2 U F2 L D' L D L U L F (13f*)

B U2 F' L F L' B' L' B2 L' B2 L2 / B' R' F R' B2 U2 B' F2 U2 F' R2 F2 / B' R' F R' B' F' L2 B R2 B' L2 B2 (12f*)

F U' R U L U2 L' R' U2 B' U' F' U2 B / L2 F2 R' F' R F' L2 B2 D' F R2 F' D B2 (14f*)

B2 R F R B2 R B2 R B2 R2 B' F' R' B / B2 R F' R' D2 R' D2 R F2 D2 B F' R B / B2 R F' L' F2 R' F2 L F2 D2 B F' R B (14f*)

R U2 B F R2 B' R' B R' B' F' U2 R' / R U2 B2 U2 F' L' B L' B2 L2 B' F R' / F' U' F R2 B2 L' B' L B' R2 F' U F / F' D' L F2 R2 B' R' B R' F2 L' D F / L F L' R2 B' R' B L R2 F R F2 L' / L U2 L' U' L U' L' B L U L' U' B' (13f*)

R2 D' F' D' F2 D R U2 F U R2 D R' / B' L U L' U' L' B U2 F' L' F U2 L / F U' B2 U R' D B2 D' B2 R F' U' B2 / F2 D' F U2 F' D F2 U R B U B' R' / B' U B U' L F R U R' U2 F' U L' / B' F' L' F R' U F' U' F' L F2 R B (13f*)

I gather you mean that you refactored my set of 18 algorithms such that they are permitted to use D, L, and B turns in addition to the U, R, and F turns to which I restricted mine?

I can't imagine why anyone ever doubted me, what with my lengthy track record on this forum going back decades and all.

Oh, wait...no it doesn't. Resume doubting.

Okay, everybody, here's what you all wanted. I present you with a set of instructions for how to use the set of 18 algorithms to do 2-Look Last Layer:

You'll notice that I removed the superfluous U turns from the beginnings and ends of algorithms, and I sorted the algorithm list by number of turns, shortest first. I tried doing a couple LL solves with it, and I've gotta say...it's a lot slower than my usual method.

Fire away, just don't hit me with a dozen lists all at once. It'll probably take a couple days of processing to get a semi-reliable result for one. Alternatively, if you'd like, I could take more than one list and just calculate the redundancy scores for each, as mentioned previously, and just process whichever one scores the lowest. It doesn't take long to calculate the redundancy score.

I tend to believe it is, but I've been wrong before. The problem is that upper bounds for the number are determined by examples, and lower bounds are determined by theory, and until recently both were rather simplistic in nature. I have improved the upper bound with a better example, i.e. a set of 18, but improving the lower bound would entail improving theory, and I do not know off the top of my head how to do that. Theory, as it stands, can only say that the number is at least 8, so all we can prove is that it's somewhere from 8 to 18, inclusive. I can't say for sure that it's 18, but I am certain it's closer to 18 than it is to 8.

I was going to say that from a list of 30 there are 86,493,225 possible subsets of 18, and that even if my scoring function took a fraction of a second to run, it would take forever, but I have had an insight and have added a new feature to my program for handling this. What I have done is to make it so that if the forced algorithm list is shorter than or equal to the desired number of algorithms, my program will act as it did before, but if the forced algorithm list is longer than the desired number of algorithms, it will attempt to generate a subset of that list that is the desired length and which has maximal coverage of the 62208 possible cases. This should make it much faster and easier to do what you want. It should reduce the number of cases that rely on generated algorithms as much as possible.
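To put numbers on that: there really are C(30, 18) = 86,493,225 subsets, so exhaustive testing is out. For illustration, one simple way to attempt a maximal-coverage subset is a greedy pick by marginal coverage (a simplified stand-in, not necessarily what my program actually does):

```python
from math import comb

# Exhaustively scoring every 18-subset of a 30-alg list is hopeless:
# comb(30, 18) == 86_493_225.

def greedy_subset(cover, k):
    """Greedily pick k algorithms by marginal case coverage.
    cover: alg id -> set of case ids that algorithm helps handle.
    (A stand-in for illustration; the real selection may differ.)"""
    chosen, covered = [], set()
    while len(chosen) < k:
        best = max((i for i in cover if i not in chosen),
                   key=lambda i: len(cover[i] - covered))
        chosen.append(best)
        covered |= cover[best]
    return chosen, covered
```

Greedy selection is not guaranteed to be optimal, but it reduces a hopeless combinatorial search to k cheap passes over the list.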

So, shoot your list of 30 (or more) algorithms my way, and I'll see what my latest creation can make of it. All I ask is that they be expressed in a stationary-core format so that I don't have to hand-translate them.