
Computer solving: a new two-phase algorithm?

What are they? Have you tested them in Solver 2?
Things are not straightforward. Phase 2 of my two-phase solver exhibits only 16 symmetries, so the 3 different orientations of a position (namely the rotations about the URF-DBL axis) behave differently with respect to my two-phase solver. Additionally, the inverse of a cube position obviously has the same maneuver length, but it also behaves differently in the two-phase algorithm. For a fast solve in the 20-move proof we used what we call six-axes search: apply the two-phase algorithm in all 6 different combinations of rotation and inversion in parallel to find a 20-move solution for the leftover positions that were not solved by the coset solver (which only went out to depth 15, and partially depth 16, in phase 1 and typically left a few thousand positions unsolved in a coset of size 19,508,428,800).
So here are positions which were quite hard to solve in all 6 "directions". Most of them are also antisymmetric, which means that the inverse of the position is a rotated or reflected version of the position itself. The times at the end are the approximate times to solve in 20 moves with my Cube Explorer software (using only three-axis search). The other information gives the symmetry/antisymmetry; the names are explained on my homepage.
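To illustrate the six "directions", here is a toy Python sketch that represents a position abstractly as a permutation: the URF-DBL rotations become conjugations and inversion becomes the permutation inverse. This is only an illustration of the idea, not the actual coordinate representation used in the solver.

```python
def invert(perm):
    """Inverse of a permutation given as a list: inv[perm[i]] = i."""
    inv = [0] * len(perm)
    for i, p in enumerate(perm):
        inv[p] = i
    return inv

def conjugate(perm, rot):
    """Conjugate perm by rot (rot * perm * rot^-1): the same position
    viewed in a rotated coordinate frame."""
    rot_inv = invert(rot)
    return [rot[perm[rot_inv[i]]] for i in range(len(perm))]

def six_axes_variants(position, urf_rotation):
    """The 6 search directions: the position conjugated by the identity,
    by the 120-degree URF-DBL rotation and by its square, each taken
    with and without inversion."""
    variants = []
    rot = list(range(len(position)))          # identity rotation
    for _ in range(3):
        variants.append(conjugate(position, rot))
        variants.append(invert(conjugate(position, rot)))
        rot = [urf_rotation[r] for r in rot]  # advance by 120 degrees
    return variants
```

Running the two-phase solver on all 6 variants in parallel and taking the first sub-20-move solution found is the six-axes search described above.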

R2 B F' L D2 B F' L U' L' R' F' D' B2 D2 B' F' U R' F' (20f) //C3{C3v}, 170s
D2 U2 F' D F2 U B' D' R F L' U R D L' D' B U2 R F' (20f) //C2h(a){D2h(face)}, 130s
B2 F D F2 U B' D' R F L' U R D L' D' B U2 R F' (19f) //C2h(a){I}, 130s
L' U2 L D' R' U2 B' U' F L' U2 L2 B2 L' D U2 L2 B' F2 R' (20f) //C4h{D4h}, 110s
R' U2 L2 U B' L D' U2 B2 U L2 B F L F' D2 F' U F' R' (20f) //C2h(a){D2h(face)}, 100s
L B2 L2 U B2 D' U' B D' L D U F L' D B2 U2 R2 F' R' (20f) //C4h{D4h}, 100s
R2 U2 F L2 D L2 U2 F' R U' L2 R2 U L2 R' B D R F R (20f) //C2h(a){I}, 75s
B D2 F D2 U L R D' R F2 L' B' F' D2 B' R B' F L D' (20f) //C2h(a){I}, 75s
U2 R D2 L D L R2 B L2 R U B2 L2 R F' R' F' L' D' F' (20f) //C2v(a2){D2h(face)}, 70s
B U2 L2 F2 L F' L2 D2 B' D' B' U R' D2 L B R2 D' F2 R' (20f) //C2(a){C2h(a)}, 65s
L2 R B2 L' D L2 B2 D' L' F L' F D U' B F' R D U' F' (20f) //C4h{I}, 60s
B2 L' D2 R B' L D' U' B F D R B2 D' L U L' U L R' (20f) //C2(a){C2v(a1)}, 54s
U2 B' U2 B' R' F R D2 B D' L' B' U' R B U' L2 D' B U' (20f) //Cs(a){C2v(a2)}, 48s
F L2 U2 L2 U' L2 D' R' B2 U L D F' R B' R' B' U' F2 U' (20f) //Cs(a){C2v(a2)}, 47s
R F2 U2 R2 F D L2 R B R2 F U2 R2 B2 D B U' L R F' (20f) //C2h(b){I}, 43s
U2 B2 U2 B' L' B2 R2 U' B' F D' B2 R' B2 L' U' L U2 F' R' (20f) //Ci{C2h(a)}, 37s
B' F' U2 B2 L' F' U' B2 F' R' U L' D2 L B' R2 D' U' R2 F' (20f) //C2v(a2){D2h(face)}, 30s
U2 R2 D2 R B' U R' F D F' L2 B R F2 U L' U' B2 R2 F' (20f) //C2(a){C2h(a)}, 30s
D2 L' F2 U B L2 R' U B' L B2 L D B' L' F' R U2 F' (19f) //C2v(a2){D2h(face)}, 28s
D2 L2 R' F2 U' B L' D U' L U2 L2 B' R' F2 D' U R B' R2 (20f) //Ci{C2h(b)}, 26s
B2 U2 B U2 F L D L2 B U2 R' F' D U' F2 D2 L' U B' R (20f) //C2(a), 25s
D2 F L2 B F' L U L2 D R' U L2 D2 R2 D R B F L' U2 (20f) //C2(a), 22s
F2 L' D2 F2 U' B U L2 R' D U2 R U L B R2 U' B R F' (20f) //C2h(a){D2h(edge)}, 22s
F' R2 U2 B U L' B' F2 R F' D F' L2 B F D U2 R B' F2 (20f) //Cs(a), 22s
L2 R' B2 R' F D U F2 U' F2 R' B2 R' B2 R' D L B' F' R (20f) //Ci{C2h(b)}, 20s
B R2 U2 F' R' B' U2 B2 D U2 L U F' R' F2 U F2 L2 U' F' (20f) //C2h(a){D2h(edge)}, 20s
D2 L2 B' D2 R B2 U L2 F L D' U F R' B R2 U2 R2 F' U' (20f) //C2v(a2){I}, 20s
B' F2 R2 U2 L D L2 U2 R' F L' R B2 R B D' U' L D' F' (20f) //Cs(a), 16s
L' U2 B2 L' U' R2 B2 F' L D' R B L B' D B R U F' R2 (20f) //Cs(a){I}, 17s
B2 R2 B' F2 R' F' L' U2 B2 R2 B' U2 B D2 U' L' D B2 L F' (20f) //Cs(a){I}, 15s
B' R2 B2 U2 L' U2 R2 B' D L2 F L R' D2 U' B F R D' F2 (20f) //C2h(a){I}, 13s
F2 L2 B F2 U2 L U' L2 D' R' U' L2 D2 R2 D' R' B' F' L' F2 (20f) //C2(a), 12s
F R2 D2 F2 L' F' U' F2 L' F R D' R B' U2 L R' F' U' (19f) //C1{Cs(b)}, 12s
L D2 L2 R2 F2 D' B U B' D2 F L R B' F2 L' B' D L R2 (20f) //C2(a), 10s
L2 D2 L' R2 F2 U' F U2 B D F U2 B2 D2 B D L R U R2 (20f) //C2(a), 10s
D2 L2 B D2 R F' D2 R2 D' L' D U L2 F D' R' B2 U R2 F' (20f) //C2v(a2){I}, 9s
L' R' B2 L' F2 U L R' U2 B L B L' B2 D2 F' U B D' R' (20f) //C2(a), 9s
B U2 B2 F2 L D R B U F L' F2 D L' B2 R F2 L' R' F' (20f) //Cs(a){C2v(a2)}, 8s
F D2 U2 B' F R' B F' R' D2 R F2 D F2 U' B U L D F' (20f) //C2(a){C2v(a1)}, 8s
B2 L D2 B2 F' R' F2 R' B' F' R' D F D U' L' F U' F2 R' (20f) //Ci{C2h(a)}, 7s
L B2 L' U2 L' B' F' R' F2 U' B2 D R' F2 L2 B2 F' D U2 F2 (20f) //C2v(a1){I}, 6s
D2 R D2 F2 U R F' L U B R2 D R' D2 F R2 D' R2 F2 R' (20f) //Cs(a){C2h(a)}, 6s
R D2 F2 U2 B2 D' F' L F' R' D2 B' D L F' D U' L2 F2 R' (20f) //C2(a), 5s
F' R2 B' F R U F L D' B' F D2 F L2 B' D L' R2 F' (19f) //C2(a){C2v(a1)}, 5s
L R F2 L' B F L' R2 U B' L2 F' U' L2 D' B2 F' R' U2 (19f) //C1, 5s
U2 R2 B' L2 U2 R' B L B D F D' F' U L D' R' D2 U' F' (20f) //C2(a), 5s
B2 L' R' F2 L2 B' D2 F2 L2 F' U2 B2 D' U' F2 R D2 B F' R2 (20f) //C2v(a1){I}, 4s
R' D2 F2 L' R2 D' L' R' U B R2 D2 F' R2 D2 U2 L' B' F' R' (20f) //C1{Cs(b)}, 4s
D2 R2 B2 F U2 R D B2 R D B' U2 B' L B' R F' L2 B R' (20f) //C2(a), 3s
D2 B' F U2 F2 D L2 F2 U2 F2 U L R B' L2 D U' F2 R' F2 (20f) //C2v(a2){D2h(face)}, 2s
F' R2 F L2 F' L D' R D' F' U2 F U' B D2 R' D' R2 D' F (20f) //C2(a), 1s
 
Thank you for providing your list of hard scrambles. All of them need 16 or fewer moves in phase 1 of Feather's algorithm.

What would be more interesting for me is how far you will have to go out in phase 1 to get an overall solution with <=20 moves.

Although not really what you are looking for, some interesting statistics (for optimal solving) can be found here, indicating that most of the time you don't need to search at depth N to get an optimal N-move solution.
 
New performance summary for optimal solving (for 1 000 random-state scrambles) has been published. It now takes less than 2 seconds on average to find an optimal solution using a web browser and a native/pure JavaScript solver. The solver itself is not the limiting factor for better performance; the browser is.

Judging from the data here, this new solver is not competitive yet.

Let me paraphrase your statement: Judging from the data above, this new solver has no competition in its category yet :).
I would be very curious to see how min2phase.js (or any other solver in this category) would compare in this specific test.
 
New performance summary for optimal solving (for 1 000 random-state scrambles) has been published. It takes less than 2 seconds on average to find an optimal solution now using a web browser and native/pure JavaScript solver. The solver itself is not a limitation for better performance, the browser is.

Let me paraphrase your statement: Judging from the data above, this new solver has no competition in its category yet :).
I would be very curious to see how min2phase.js (or any other solver in this category) would compare in this specific test.
You're right, because as far as I know, no one else has written an optimal solver specifically for browsers. The more efficient solvers are almost always written in C/C++.

The optimization goal of min2phase is to get a solution that is short enough as quickly as possible, including initialization time, not to get the optimal solution. So it doesn't make sense to use it for comparison in finding the optimal solution.

In addition, I have recently done a detailed analysis of various optimal solvers. Regardless of the specific programming language and execution environment, the average number of search nodes in 3ColorCube is 3 times that of nxopt or vcube, and 1.5 times that of Cube Explorer, under the same memory limitations. I believe it still has a lot of room for optimization.
 
as far as I know, no one else has written an optimal solver specifically for browsers.

I have been trying the JS ports of ACube and ksolve (ACube.js and ksolve.js), although their performance is nowhere near Solver 2's.

The optimization goal of min2phase is to get a solution that is short enough as quickly as possible, including initialization time, not to get the optimal solution. So it doesn't make sense to use it for comparison in finding the optimal solution.

Would it be possible to modify/extend/make a new version of min2phase.js so that it would be able to find optimal solutions? If not, what is the reason?

A fun challenge between Solver 2 and min2phase.js could be to test the search time needed to find 20-move solutions for 1 000 20f* scrambles (i.e. finding optimal 20-move solutions but not necessarily proving them to be optimal).

In addition, I have recently done a detailed analysis of various optimal solvers.

Just for fun (the results cannot be objectively compared with each other), could you apply the test above (search time for optimal solutions for 1 000 random-state cubes, preferably with the same scrambles as used in the Solver 2 test) to the solvers on your analysis page?
 
I have been trying the JS ports of ACube and ksolve (ACube.js and ksolve.js), although their performance is nowhere near Solver 2's.
That is to be expected, since the two programs you listed are not designed for solving Rubik's Cube optimally. They have their own strengths, but they are definitely inefficient at optimal solving.
Would it be possible to modify/extend/make a new version of min2phase.js so that it would be able to find optimal solutions? If not, what is the reason?
A fun challenge between Solver 2 and min2phase.js could be to test the search time needed to find 20-move solutions for 1 000 20f* scrambles (i.e. finding optimal 20-move solutions but not necessarily proving them to be optimal).
It's certainly possible, but it doesn't make much sense. The memory usage between them differs by a factor of over 1000, and by the time-space tradeoff the solving time might also differ by a factor of over 1000.
Min2phase is not designed for solving Rubik's Cube optimally. So for optimal Rubik's Cube solving in the browser, 3ColorCube is undoubtedly the best afaik.
Just for fun (the results cannot be objectively compared with each other), could you apply the test above (search time for optimal solutions for 1 000 random-state cubes, preferably with the same scrambles as used in the Solver 2 test) to the solvers on your analysis page?
I didn't have much time to do a full test, so I picked the first 100 scrambles; on my laptop (10-core Apple M1 Pro, Chrome 131):
3ColorCube (with 1.8G dist4) averaged 3.7 seconds to solve,
nxopt31 averaged 1.3 seconds to solve.
Note that they use similar amounts of memory and both use 10 threads.
 
It's certainly possible, but it doesn't make sense.

Nice to see there is no technical problem in making optmin2phase.js, so I am hoping some day it will be made by you or another programmer :-)

For an average Joe like me, it certainly makes a difference to have a user-friendly solver running in a web browser as opposed to fiddling with a C++ solver with or without a GUI.

Min2phase is not designed for solving Rubik's Cube optimally.

Not sure if this relates to my challenge between Solver 2 and min2phase.js above, but I have been proposing a challenge which doesn't require optimal solving - I mean, yes, the solution will be optimal; however, it won't be proved to be optimal (the solver would need to find a 20f but not a 20f* solution). Never mind if your reaction is not related to my Solver 2 vs min2phase.js challenge.

I didn't have much time to do a full test, I picked the first 100 scrambles, and on my laptop (10-core Apple M1 Pro, chrome 131):
3ColorCube (with 1.8G dist4) averaged 3.7 seconds to solve,
nxopt31 averaged 1.3 seconds to solve.
Note that they use similar amounts of memory and both use 10 threads.

Thanks for your results, even though I was looking for answers to questions like: which C++ solver performs best for this specific test and testing conditions, how much the Python solver and CE differ in this test, what optimal solving back in 1997 was like (using Reid's solver with new technology), or what order of magnitude of time the solvers will need to pass the test.
But I understand it might be very time-consuming to test all the solvers on your analysis page, plus your testing seems to focus on different metrics than I personally would like to see.
 
Thanks for your results, even though I was looking for answers to questions like: which C++ solver performs best for this specific test and testing conditions, how much the Python solver and CE differ in this test, what optimal solving back in 1997 was like (using Reid's solver with new technology), or what order of magnitude of time the solvers will need to pass the test.
But I understand it might be very time-consuming to test all the solvers on your analysis page, plus your testing seems to focus on different metrics than I personally would like to see.

My motivation is as follows.

For an optimal solver, we can simply assume that the average solving time is roughly equal to the node traversal efficiency (time per node) multiplied by the average number of nodes.

1. Average number of nodes: the average number of nodes is usually determined only by the algorithm itself and has nothing to do with the programming language, operating system, etc. If I rewrite an algorithm from Python to C++, or from Pascal to JavaScript, the average number of nodes can be considered unchanged.

2. Node traversal efficiency: the access time per node is affected by many factors, including the programming language, compiler, operating system, CPU instruction set, branch prediction, memory frequency, etc.

For example, the main difference between vcube and nxopt is actually here: vcube uses SIMD (AVX2 and related instruction sets), huge pages and other techniques to improve node access efficiency, so it can be about 1.5~2 times faster than nxopt on some specific CPUs, while their average numbers of nodes are almost the same.

At present, my test mainly focuses on [1], because it is independent of factors such as programming language and execution environment, and the result can be reproduced on various platforms. [2] is greatly affected by the environment, so program A often runs faster on one platform while program B runs faster on another. Of course, [2] is not meaningless, but it is affected by more factors, so it serves only as a reference.
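As a toy numerical illustration of this decomposition (all figures below are invented, not measurements of any real solver):

```python
def estimated_solve_time(avg_nodes, seconds_per_node):
    """Model: average solving time ~ [1] average node count x [2] per-node cost."""
    return avg_nodes * seconds_per_node

# Hypothetical figures: solver A visits 3x the nodes of solver B ([1] worse)
# but processes each node twice as fast ([2] better); B still wins overall.
time_a = estimated_solve_time(3.0e9, 0.5e-9)
time_b = estimated_solve_time(1.0e9, 1.0e-9)
assert time_a > time_b
```

This is why [1] alone does not decide which solver is faster in practice, only which algorithm searches fewer nodes.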
 
I mean, yes, the solution will be optimal; however, it won't be proved to be optimal
A solver that is not designed to generate optimal solutions can possibly fail to find any optimal solution at all.

A common optimisation in 2-phase Kociemba is to never look for phase 1 solutions that solve DR, break DR, then go back into DR. This is a useful optimisation for the use case of generating short-enough solutions quickly. However, such phase 1 solutions can be necessary for some optimal solutions. Example scramble: L R U D R U D L R, which has DR already solved on white/yellow; optimal is 9 moves, but without breaking DR it must take at least 14 moves (5 moves worse!). In this specific case, 2-phase Kociemba can still find the optimal solution by searching on a different axis, but at least it hints that a priori a not-designed-for-optimal solver can fail to find optimal solutions.
 
qq280833822, I am wondering:

1) at https://github.com/cs0x7f/cube_solver_test/wiki#solvers, in the Variations column, 3Color835M is not an optimal solver as far as I know. Only Quad-Core (1.8 GB dist4) and Solver 2 (1.8 GB or above dist4) are, or rather can be used as, optimal solvers.

2) there are parts of the code in assembly at https://github.com/hkociemba/CubeExplorer/blob/master/CordCube.pas. Can you tell how much of CE's search code is assembly? If it is a big portion of the search code, shouldn't "Pascal" be changed to "Assembly"?

3) what is the reason behind choosing 15-17f instead of 17-19f in your tables? From a distribution point of view, the 17-19f interval makes more sense to me.
 
A common optimisation in 2-phase Kociemba is to never look for phase 1 solutions that solve DR, break DR, then go back into DR. This is a useful optimisation for the use case of generating short-enough solutions quickly. However, such phase 1 solutions can be necessary for some optimal solutions. Example scramble: L R U D R U D L R, which has DR already solved on white/yellow; optimal is 9 moves, but without breaking DR it must take at least 14 moves (5 moves worse!). In this specific case, 2-phase Kociemba can still find the optimal solution by searching on a different axis, but at least it hints that a priori a not-designed-for-optimal solver can fail to find optimal solutions.

Based on this post, I got the impression that solvers based on Kociemba's algorithm can be run in a "find a 20-move (or shorter) solution without proving it to be optimal, as fast as possible" mode, which is exactly what I was asking for.
 
I didn't want to start a new thread so I will ask here instead.

For solvers based on Kociemba's algorithm (CE in optimal mode, or see the performance summary for the Python optimal solver), it seems the algorithm has to search at depth N to produce an N-move optimal solution.

As indicated in this post, Feather's algorithm doesn't have to do that very often, thus saving some search time.

My question is: can Kociemba's algorithm be modified/speeded up so that it would match Feather's algorithm functionality from this perspective?
 
Question for the discussion participants: can you give a definition of an optimal solver?

The source cited in this thread gives a rather strange definition of an optimal solver:
“An optimal solver has to generate in principle all possible solving maneuvers with increasing length until a solution is found.”

As an example, the following solution is given:
U B2 D' B' R' L' U L2 B2 R D' R' B L' U2 B2 R D' (18f, 23q)

To the perhaps unenlightened eye, there is a more optimal solution:
F D' F2 D' F R U2 B' R U' D B L U' F' U D2 F' (18f, 21q)

Please resolve our doubts.
 
“An optimal solver has to generate in principle all possible solving maneuvers with increasing length until a solution is found.”

Please excuse my befuddlement, but that sentence does not make any sense to me. Does it mean that an optimal solver has to use brute force / breadth-first search?
 
Solving an optimization problem begins with selecting the optimality criteria; only then are the mathematics and calculation methods chosen (by the way, brute force and breadth-first search are used together).
Reducing the problem to a single-criterion one requires introducing significant assumptions, but it simplifies the final choice.
Optimization, unlike an ordinary comparison of options, involves considering all solutions that fall within the range of acceptable parameter values. Solutions found without a full review of the possible options are usually called rational, not optimal.
 
Question for the discussion participants: can you give a definition of an optimal solver?
Yes.

Optimization, unlike the usual comparison of options, involves considering all solutions that fall within the range of acceptable parameter values.
Finding an optimal solution is not the type of optimisation problem you're thinking of.

The task is usually to find one solution with the lowest move count in a specified metric and prove that no shorter solution exists. This metric is usually FTM without a tiebreaker, so any solution with the same move count in FTM is equally good.

There are exceptions, and of course it can sometimes be useful to instead use [FTM with QTM as a tiebreaker] as a separate metric, or [ATM with FTM as a tiebreaker], or any of the zillions of variations of the same, but that's not the topic of discussion here.
 
Please excuse my befuddlement, but that sentence does not make any sense to me. Does it mean that an optimal solver has to use brute force / breadth-first search?
At least, no other method is known for finding optimal solutions for the Rubik's Cube in general. Breadth-first search does not work for most positions because it needs too much memory. A depth-first search with some efficient heuristics (IDA*) seems to be the only way to guarantee optimal solutions.
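For readers unfamiliar with IDA*, here is a minimal generic skeleton in Python. It is abstract and not tied to any cube representation; `h` stands for an admissible heuristic such as a pruning-table lookup.

```python
def ida_star(start, is_goal, successors, h):
    """Iterative-deepening A*: repeated depth-first searches with an
    increasing bound on g + h.  If h never overestimates the true
    distance (is admissible), the first solution found is optimal."""
    def search(node, g, bound, path):
        f = g + h(node)
        if f > bound:
            return f, None                    # prune; report the exceeded f
        if is_goal(node):
            return f, list(path)
        minimum = float("inf")
        for move, child in successors(node):
            path.append(move)
            t, solution = search(child, g + 1, bound, path)
            path.pop()
            if solution is not None:
                return t, solution
            minimum = min(minimum, t)
        return minimum, None

    bound = h(start)
    while True:
        bound, solution = search(start, 0, bound, [])
        if solution is not None:
            return solution
        if bound == float("inf"):
            return None                       # no solution exists
```

On a toy problem (counting an integer down to 0 with moves of -1 and -2, using the exact distance ceil(n/2) as heuristic), this returns an optimal 3-move solution for n = 5. The better the heuristic, the fewer iterations and nodes are needed, which is why optimal cube solvers invest so heavily in pruning tables.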
 
Please excuse my befuddlement, but that sentence does not make any sense to me. Does it mean that an optimal solver has to use brute force / breadth-first search?
An optimal solver actually consists of two parts:
1. Get a solution
2. Prove that the solution is optimal (that is, prove that there is no shorter solution)

For (1), there are many directions that can be tried, including two-phase algorithms and heuristic algorithms based on neural networks/MCTS, and there is a lot of room for optimization here.
For (2), that is, assuming you have already obtained a solution in 17f or 18f and now want to prove that there is no solution in 16f or 17f - this is the most difficult part.
As far as I know, the idea behind all current optimal solvers is an improvement on two-way search: first search several moves from the solved state, then search several moves from the target state, and finally prove that there is no intersection between them. Of course, there are many optimizations in actual implementations, such as precomputing and storing the results of searching several moves from the solved state, which ultimately forms the IDA* algorithm.
 
As far as I know, the idea of all current optimal solvers is an improvement on the two-way search, that is, first search for several moves from the solved state, then search for several moves from the target state, and finally prove that there is no intersection between them.
This is definitely not correct. Two-way search does not play a significant role for optimal solvers; to be more specific, I do not know of any well-performing optimal solver which uses this technique. All the optimal solvers I know use IDA*, and they differ more or less only in the heuristics they use. A heuristic often used is the true distance to some subgoal, for example the 4 equator edges in place plus all corner and edge orientations correct. You can also generate additional information about which moves on the way to the subgoal bring you closer to it. Often you can also exploit some of the 48 symmetries of the cube to significantly speed up the search. And of course you can combine the heuristics for several subgoals by taking the overall heuristic as the maximum of the heuristics of all subgoals. But this is essentially all that tells the different optimal solvers apart.
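The combination step at the end can be sketched in a few lines. The pruning tables and coordinate functions below are hypothetical placeholders for real lookup tables, such as the one for the 4 equator edges plus orientations mentioned above.

```python
def combined_heuristic(position, pruning_tables, coordinates):
    """Overall heuristic = max over several subgoal heuristics.  Each
    pruning table maps a coordinate of the position to the true distance
    to one subgoal; every such value is an admissible lower bound on the
    full solve, and the maximum of admissible bounds is still admissible."""
    return max(table[coord(position)]
               for table, coord in zip(pruning_tables, coordinates))
```

Taking the maximum (rather than, say, the sum, which would overestimate) preserves admissibility while pruning at least as well as any single subgoal heuristic alone.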
 
This is definitely not correct. Two-way search does not play a significant role for optimal solvers. To be more specific, I do not know any well performing optimal solver which uses this technique. All the optimal solvers I know use IDA* and they differ more or less only in the heuristics they use.
I think you are talking about two-way search in a narrow sense. According to nxopt's point of view ( https://github.com/rokicki/cube20src/blob/master/nxopt.md ) and my understanding, IDA* can be regarded as a refinement of two-way search in a broad sense.
 