
Is 1LLL possible?

To create the database you would have to have Cube Explorer automatically running through all the cases 24/7.

Martin Orav

Is it possible (in theory) to recognise 1LLL by looking at the top layer and two adjacent sides?

Escher

Possible? Sure.

The problem is the relationship between case-set size, algorithm uniqueness, and recognition factors. PLL reduces via AUF from 72 distinct states to just 21 cases, case solutions can lean on a decent number of mirrors and inverses, and any case can be recognised from any AUF using the relationships between only 6 sticker values. The algorithms themselves tend to be quite distinct - or where not distinct, intuitively relatable.
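
As a sanity check on those numbers, here's a minimal enumeration sketch in Python (the slot labelling, and the use of left- versus right-composition for the two kinds of AUF, are just conventions chosen for illustration, not anything standard):

```python
from itertools import permutations

# A PLL state (all last-layer pieces oriented) is a permutation of the 4
# U-layer corner slots plus a permutation of the 4 edge slots, with even
# combined parity: 4! * 4! / 2 = 288 legal states.

R = (1, 2, 3, 0)  # slot relabelling induced by a quarter turn of U

def compose(a, b):
    # (a . b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(4))

def rot(k):
    # R composed with itself k times
    p = (0, 1, 2, 3)
    for _ in range(k % 4):
        p = compose(R, p)
    return p

def parity(p):
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2

states = [(c, e) for c in permutations(range(4))
                 for e in permutations(range(4))
                 if parity(c) == parity(e)]
assert len(states) == 288

def canon(state, with_angle):
    # Smallest orbit representative: 'a' quotients out the post-solve AUF,
    # 'b' (when enabled) also quotients out the pre-AUF / viewing angle.
    c, e = state
    return min((compose(rot(a), compose(c, rot(b))),
                compose(rot(a), compose(e, rot(b))))
               for a in range(4)
               for b in (range(4) if with_angle else (0,)))

print(len({canon(s, False) for s in states}))  # 72 states up to final AUF
print(len({canon(s, True)  for s in states}))  # 22 = 21 PLL cases + solved
```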

Recognition time involves comprehending the case and pulling a distinct 'macro exec case' for the algorithm from one kind of memory; execution relies on another kind - procedural memory.

At the basic level, maintenance of procedural memory gets more and more time-demanding as the number of unique algorithms rises. Attaching that procedural memory firmly to the correct cases has another (but related) time demand. A confounding factor is solution similarity - the larger the solution set, the more demanding it is to keep case x's 'macro memory' bound to case x's procedural memory when cases x and y are near-identical.

One level higher, the more powerful a system becomes, the more complex the relationship between stickers and case identification, and so maintenance demand increases: the minimum number n of stickers needed to establish case uniqueness grows. Now maintenance of the connection between macro and procedural memory must be balanced alongside maintaining the relationship between recognition state and macro memory, and managing the trade-off between the two factors is by itself another demand.
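
To make the minimum-n idea concrete, the same kind of model can check the two-adjacent-sides claim for PLL. Under a simplified sticker model (the labelling below is my own convention: faces 0-3 run F, R, B, L; corner slot i sits between faces i and i+1; edge slot i sits on face i), the 6 stickers on the front and right faces distinguish all 288 oriented permutation states:

```python
from itertools import permutations

def parity(p):
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2

states = [(c, e) for c in permutations(range(4))
                 for e in permutations(range(4))
                 if parity(c) == parity(e)]

def two_sides(state):
    # With every piece oriented, the corner from home slot h placed at slot s
    # shows colour h on face s and colour h+1 on face s+1; an edge from home
    # slot h shows colour h on the face it sits on.
    c, e = state
    front = ((c[3] + 1) % 4, e[0], c[0])   # F face stickers, left to right
    right = ((c[0] + 1) % 4, e[1], c[1])   # R face stickers, left to right
    return front, right

print(len(states), len({two_sides(s) for s in states}))  # 288 288
```

Five of the six stickers read off c[0], c[1], c[3], e[0] and e[1] directly; the last corner is then forced, and the two remaining edges are forced by parity, which is why the map turns out injective.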

All of these demands create trade-offs for you, the human. When comparing two systems, we list the presupposed qualities, and this stage is where most cubers' theories or systems live and die. The more important (and harder to measure) factor is that a system must show a distinct advantage in the real world, in general. Otherwise speed advantages can be attributed to individual qualities, rather than to the system itself having a quantifiable, proven advantage.

The currently-conceived 1LLL systems are still in this latter stage of theory-testing, and I'm not sure they will pass the empirical demands. One option explored a few years ago by Kirjava was generating solutions for all LL cases from two short OLL algorithms, plus one setup move in the chain. This solved some elements of the macro-memory problem rather succinctly. The next problem was to establish recognition in a maintenance-efficient manner. Unfortunately no simple solutions were apparent - grouping cases by initial OLL didn't reveal higher-order patterns (except for the obvious fact that each OLL applies a fixed transformation), and neither did deep intuitive or mechanical analysis. 1LLL, outside of pre-processing for certain cases in the F2L stage - as ZBLL does - is simply massive.
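
To put a number on 'simply massive': extending the earlier enumeration to full last-layer states (orientations included) reproduces the commonly cited 1LLL case count. Again, this is only a counting sketch under the same home-grown labelling conventions, with the usual twist and flip constraints:

```python
from itertools import permutations, product

# A full last-layer state: corner/edge permutations with even combined
# parity (288), corner twists summing to 0 mod 3 (27), edge flips summing
# to 0 mod 2 (8), for 288 * 27 * 8 = 62208 states in total.

R = (1, 2, 3, 0)

def compose(a, b):
    return tuple(a[b[i]] for i in range(4))

def rot(k):
    p = (0, 1, 2, 3)
    for _ in range(k % 4):
        p = compose(R, p)
    return p

def parity(p):
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2

perms = [(c, e) for c in permutations(range(4))
                for e in permutations(range(4))
                if parity(c) == parity(e)]
twists = [t for t in product(range(3), repeat=4) if sum(t) % 3 == 0]
flips = [f for f in product(range(2), repeat=4) if sum(f) % 2 == 0]
states = [(c, e, t, f) for c, e in perms for t in twists for f in flips]
assert len(states) == 62208

rots = [rot(k) for k in range(4)]

def canon(state):
    # 'ra' relabels the solved reference (post-solve AUF, permutations only);
    # 'rb' physically turns U beforehand (pre-AUF), moving pieces together
    # with their orientation values.
    c, e, t, f = state
    return min((compose(ra, compose(c, rb)),
                compose(ra, compose(e, rb)),
                compose(t, rb),
                compose(f, rb))
               for ra in rots for rb in rots)

# Takes a few seconds: roughly a million orbit variants are generated.
print(len({canon(s) for s in states}))  # 3916 = 3915 1LLL cases + the skip
```

That 3916 matches the usual "3915 algorithms" figure for full 1LLL once the skip is excluded, before any mirror or symmetry reductions.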

Because of these research problems - both the unsolved maintenance question and the uncertainty of real-world advantage - there are significant issues with pure 1LLL.

My personal view is that we should take a less pure approach towards systems. Measurement of single solves is an impure test of 'speed', and the avg5 paradigm selects for certain ways of thinking which may not be ultimately beneficial. Ideally we should seek to increase the *incidence* of 1LLLs, not tyrannically enforce them. We should also look to reduce redundant sticker recognition, as well as the number of stickers (and relationships between them) to apprehend at any one instant.

The pragmatic approach is already evidenced as successful by a close reading of the very, very fastest CF(OP) cubers. Generally speaking, PLL has already been recognised during the OLL execution, so that the entire LL is executed in (apparently) one 'instance' - recognition has been integrated with execution at this level. Typically, some pre-processing for better LL cases has also been done during F2L. The move cost of doing so can be very low (or even free) if one is already at a certain speed and sophistication of lookahead.

The desire for methodological purity is a seductive one. Instead, I feel that formalising the construction of a general approach may be the next level of advancement in solving the 1LLL problem. There are many options for hybridising existing systems, and we currently lack a rationalised guidebook to the better ones.

My loose conception of what this looks like: heuristics for pre-processing in the F2L stage (efficiently preserving and extending late-stage partial solutions as they appear in earlier stages); increasing the number of opportunities to smoothly include full orientation with the end of F2L; exploiting unused macro storage for higher-level cases; training PLL recognition mid-OLL; and improving intuition for efficiently avoiding worst-boundary cases (such as diagonal corner PLLs).

Of course, this is not to say that theoretical research and exploration is necessarily a bad thing. Simply that there is a lot of low-hanging fruit, provided we choose the right perspective. Focusing efforts on a particular branch for its 'pure qualities' isn't ipso facto the right method for increasing the harvest.

cuber314159

While I think 1LLL is possible and viable, it may be better for speedsolving to recognise the OLL and execute it, but while executing the OLL recognise the CP + EP of the OLL case and recall which PLL to do straight away. This is what let Feliks get his 4.73 single - he knew the PLL before doing it. If this could be done every solve, sub-6 averages could be possible within the next year or two.

Rcuber123

I'm pretty sure Feliks does this at least half the time.

shadowslice e

What cuber314159 describes is ROLL, and I'm pretty sure quite a few top solvers already do it in many of their solves. 1LLL is still faster, though.

I have found a way to reduce the 1LLL alg count to 1289:

You perform an M2 S2, then you AUF and ADF, perform 1 of the 1289 algs, AUF and ADF again, then perform M2 S2.