
Computer solving: a new two-phase algorithm?

I think you are talking about two-way search in a narrow sense. According to nxopt's point of view ( https://github.com/rokicki/cube20src/blob/master/nxopt.md ) and my understanding, IDA* can be regarded as a refinement of two-way search in a broad sense.
Ok, if you regard a pruning table as a manifestation/result of some search from the solved state, and the actual search process as a search from the target state, then I think we both mean the same thing.
 
qq280833822, I am wondering:

1) at https://github.com/cs0x7f/cube_solver_test/wiki#solvers, in the Variations column, 3Color835M is not an optimal solver as far as I know. Only Quad-Core (1.8 GB dist4) and Solver 2 (1.8 GB or above dist4) are, or rather can be used as optimal solvers.

2) there are parts of the code in assembly at https://github.com/hkociemba/CubeExplorer/blob/master/CordCube.pas. Can you tell how much of the search code for CE is assembly code? If it is a big portion of the search code, shouldn't the listed language be changed from Pascal to assembly?

3) what is the reason behind choosing 15-17f instead of 17-19f in your tables? From a distribution point of view, the 17-19f interval makes better sense to me.

Considering point no. 3 above, I understand qq280833822's reasoning for point no. 1 now.

Aside from random-state scrambles (including the expected 17-19f* scramble distribution), I have another feature request: besides the optimal (one-phase) Kociemba's algorithm solvers and the optimal (two-phase) Feather's algorithm solver, could you please also add a Korf's algorithm solver to your analysis page? I don't know which solving algorithm the stickersolve and cubeopt solvers use; I assume some modification of the optimal Kociemba's algorithm.

Is there any chance you could include the following test to your analysis page?

Just for fun (the results cannot be compared objectively), could you reproduce the test above (search time for optimal solutions for 1 000 random-state cubes, preferably with the same scrambles as used in the test for Solver 2) for the solvers on your analysis page?

Last but not least, it is not clear to me what exactly you mean by "3Color835M" - Single Core, Quad-Core or Solver 2? The same for "3ColorOpt" - Quad-Core or Solver 2? If Solver 2, which Dist4 file size? Could you please specify?
 
Would it be possible to modify/extend/make a new version of min2phase.js so that it would be able to find optimal solutions?

It turned out min2phase.js can find optimal solutions by default.

To address requests like this one, I made a simple PWA (progressive web app). My testing conditions were: a cell phone, the PWA in offline mode featuring both the min2phase.js and Solver 2 solvers, 1 GB of free RAM and 2 GB of free disk space - by today's standards, I consider this phone setup a rather low-end device.

If it wasn't obvious from the previous debate in this thread what you can expect from both solvers on my specific device, here is one example (note that I am talking only about the search time below, not the solvers' initialization time):



Finding a 13-move solution took 0.02 s for min2phase.js and 0.03 s for Solver 2 (the 7+6 and 12+1 solutions can be seen in the simulator above).
Proving the 13-move solution to be optimal took 620.47 s for min2phase.js and 0.19 s for Solver 2.

One difference I noticed when playing with both two-phase solvers and random-state scrambles: while the intermediate state (a solved 3-color cube) can always be seen directly in a Solver 2 solution, the intermediate state (a DR state) cannot always be seen directly in a min2phase.js solution (as opposed to, e.g., a solution in Cube Explorer), due to optimizations known from human fewest-moves solving such as NISS or pre-moves.

My conclusion: if a user wants to launch an app and start using the solver on an offline mobile device as quickly as possible, the min2phase.js solver is the way to go. If a user wants to search for optimal solutions on an offline mobile device, Solver 2 is the way to go. If the solver's initialization time, memory and space requirements are not an issue, either of these two solvers is the way to go.
Batch solving is not considered here because min2phase.js, by its current design, doesn't have a function/parameter to stop/cancel the search, meaning it could run indefinitely in some max-length < 20 test settings.

A fun challenge between Solver 2 and min2phase.js would be to test the search time needed to find 20-move solutions for 1 000 20f* scrambles (i.e. finding optimal 20-move solutions but not necessarily proving them to be optimal).

For a fully initialized min2phase.js and using the 835 MB Dist3 file for Solver 2 (keeping the same conditions as for the tests above), on my mobile phone using PWA in offline mode, finding a 20-move solution for the Superflip took: 70.71 s for min2phase.js, 18.69 s for Solver 2.
 
I hadn't seen any implementation of the following two-phase algorithm, so I was wondering whether it is indeed a new approach in computer solving.

It turned out Feather's algorithm has been accessible online since at least 2015. Considering he came up with his 3-color cube method for humans in 1980, there is a good chance he came up with his computer algorithm (an optimal 3x3x3 solver, to be specific) much earlier, only to publish it online in 2015 at the latest.
 
All the optimal solvers I know use IDA*, and they differ more or less only in the heuristics they use. A heuristic often used is the true distance to some subgoal, for example the 4 equator edges in place + all orientations of corners and edges correct. You can additionally generate information about which moves on your way to the subgoal bring you closer to it. Often you can also exploit some of the 48 symmetries of the cube to significantly speed up the search. And of course you can combine the heuristics for several subgoals by taking the overall heuristic as the maximum of the heuristics of all subgoals. But this is essentially all that tells the different optimal solvers apart.
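As a toy illustration of this structure (a sketch, not any real cube solver), here is a minimal IDA* on a stand-in puzzle - sorting a 4-tuple with adjacent swaps - where the heuristic is the maximum over two subgoal pruning tables, each holding true distances to a relaxed goal:

```python
from itertools import permutations

# Toy stand-in for the cube: sort a 4-tuple with adjacent swaps.
GOAL = (0, 1, 2, 3)
MOVES = [0, 1, 2]                       # move i swaps positions i and i+1

def apply_move(state, i):
    s = list(state)
    s[i], s[i + 1] = s[i + 1], s[i]
    return tuple(s)

def subgoal_table(goal_test):
    """Pruning table: true distance from every state to the subgoal set."""
    dist = {p: 0 for p in permutations(range(4)) if goal_test(p)}
    frontier, d = list(dist), 0
    while frontier:                      # breadth-first from the subgoal set
        nxt = []
        for s in frontier:
            for m in MOVES:
                t = apply_move(s, m)
                if t not in dist:
                    dist[t] = d + 1
                    nxt.append(t)
        frontier, d = nxt, d + 1
    return dist

TABLE1 = subgoal_table(lambda p: p[0] == 0)   # subgoal: element 0 in place
TABLE2 = subgoal_table(lambda p: p[3] == 3)   # subgoal: element 3 in place

def h(state):
    # the max of admissible heuristics is still admissible
    return max(TABLE1[state], TABLE2[state])

def ida_star(start):
    """Iterative deepening A*: raise the cost bound until a solution appears."""
    def search(state, g, bound, path):
        if g + h(state) > bound:
            return None                  # prune: goal unreachable within bound
        if state == GOAL:
            return list(path)
        for m in MOVES:
            path.append(m)
            r = search(apply_move(state, m), g + 1, bound, path)
            if r is not None:
                return r
            path.pop()
        return None

    bound = h(start)
    while True:
        r = search(start, 0, bound, [])
        if r is not None:
            return r
        bound += 1
```

Because each table stores true distances to a superset of the goal states, their maximum is still a lower bound on the real distance, so IDA* keeps returning optimal solutions.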

The optimal Feather algorithm has one very specific feature in comparison with the optimal Kociemba or Korf algorithm: while the latter two use a one-phase approach, the optimal Feather algorithm uses a two-phase approach.

From wiki: "When searching for an optimal solution, Feather's algorithm is finding suboptimal solutions on the way. This is an advantage because in many cases it does not have to search the depth of the optimal solution length because it has already found a suboptimal solution at a lower depth that is equal to the optimal solution length, hence it only has to complete search depth N-1 to prove that solution length N is optimal."

From this point of view, Feather's algorithm seems to be more of a two-phase algorithm than Kociemba's algorithm because it uses a two-phase approach for both optimal and suboptimal solutions. Also from wiki: "Kociemba's two-phase algorithm is not designed to search for an optimal solution, its purpose is to quickly find a reasonably short suboptimal solution. A randomly scrambled cube would be typically solved in a fraction of a second in 20 moves or less, but without any guarantee that the solution found is optimal. While it is technically possible to search for an optimal solution using Kociemba's algorithm by reducing a two-phase solver to only a one-phase solver (only phase 1 would be used until the cube is completely solved, no phase 2 operation being done at all), more hardware utilization would be necessary in that case because a fast optimal solver requires significantly more computing resources than a fast suboptimal solver."
 
This is an advantage because in many cases it does not have to search the depth of the optimal solution length because it has already found a suboptimal solution at a lower depth that is equal to the optimal solution length, hence it only has to complete search depth N-1 to prove that solution length N is optimal.

Maybe I am missing something, but I do not see why this is very remarkable. You can use any suboptimal solver to find some fairly short solution of length N, and then obviously any optimal solver only has to finish depth N-1 to show that length N is optimal.
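The shortcut described above can be sketched on a toy puzzle (sorting a 4-tuple with adjacent swaps); `suboptimal_solve` and `complete_depth_search` are illustrative stand-ins, not any real solver's API:

```python
# Toy sketch: find a (possibly non-optimal) solution of length N first,
# then run exhaustive searches only up to depth N-1 to prove optimality.
GOAL = (0, 1, 2, 3)
MOVES = [0, 1, 2]                       # move i swaps positions i and i+1

def apply_move(state, i):
    s = list(state)
    s[i], s[i + 1] = s[i + 1], s[i]
    return tuple(s)

def suboptimal_solve(state):
    """Deliberately wasteful solver: bubble-sort swaps plus two no-op moves."""
    moves, s = [], list(state)
    for _ in range(len(s)):
        for i in range(len(s) - 1):
            if s[i] > s[i + 1]:
                s[i], s[i + 1] = s[i + 1], s[i]
                moves.append(i)
    return moves + [0, 0]               # the same swap twice cancels out

def complete_depth_search(state, depth, path=()):
    """Exhaustive search of exactly `depth` moves; a solution or None."""
    if depth == 0:
        return list(path) if state == GOAL else None
    for m in MOVES:
        r = complete_depth_search(apply_move(state, m), depth - 1, path + (m,))
        if r is not None:
            return r
    return None

def optimal_with_upper_bound(state):
    best = suboptimal_solve(state)      # upper bound: some length N
    for depth in range(len(best)):      # exhaust depths 0 .. N-1 in order
        found = complete_depth_search(state, depth)
        if found is not None:
            return found                # first hit at the lowest depth is optimal
    return best                         # nothing shorter exists: N was optimal
```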

While it is technically possible to search for an optimal solution using Kociemba's algorithm by reducing a two-phase solver to only a one-phase solver (only phase 1 would be used until the cube is completely solved, no phase 2 operation being done at all), more hardware utilization would be necessary in that case because a fast optimal solver requires significantly more computing resources than a fast suboptimal solver.
This description is not very accurate. When you use only phase 1 to solve a cube optimally, you can apply phase 1 with the same pruning table in parallel in three different directions (UD, RL and FB). This gives a big performance boost.

And indeed, if you change the goal of phase 1 from "all orientations of edges and corners solved + equator edges in their slice" (the pruning table takes about 33 MB) to the 24 times smaller subgroup "all orientations of edges and corners solved + equator edges in place" (the pruning table takes about 24 x 33 MB, still less than 1 GB), you get a very fast optimal solver. I implemented this method in my optimal Python cube solver (https://github.com/hkociemba/RubiksCube-OptimalSolver) because I was curious to know whether a Python optimal solver would work at all.
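The "same pruning table in three different directions" idea can be illustrated on a toy puzzle: build one subgoal table and reuse it through a symmetry of the puzzle, the way a cube solver re-orients the cube before a UD/RL/FB lookup into one table. Everything here (the sorting puzzle, `sym`, the table) is an illustrative analogue, not the solver's actual code:

```python
from itertools import permutations

# Toy stand-in: sort a 4-tuple with adjacent swaps. ONE table (distance
# until element 0 is in place) is reused through a symmetry of the
# puzzle to obtain a second heuristic at no extra memory cost.
GOAL = (0, 1, 2, 3)
MOVES = [0, 1, 2]                       # move i swaps positions i and i+1

def apply_move(state, i):
    s = list(state)
    s[i], s[i + 1] = s[i + 1], s[i]
    return tuple(s)

def build_table():
    """True distance from every state to the subgoal 'element 0 in place'."""
    dist = {p: 0 for p in permutations(range(4)) if p[0] == 0}
    frontier, d = list(dist), 0
    while frontier:
        nxt = []
        for s in frontier:
            for m in MOVES:
                t = apply_move(s, m)
                if t not in dist:
                    dist[t] = d + 1
                    nxt.append(t)
        frontier, d = nxt, d + 1
    return dist

TABLE = build_table()

def sym(p):
    """A symmetry of the puzzle: reverse positions and complement values.
    It maps adjacent swaps to adjacent swaps and fixes the goal state."""
    return tuple(3 - p[3 - i] for i in range(4))

def h(state):
    # one table, two directions: the symmetry-conjugated lookup acts as
    # an 'element 3 in place' heuristic without building a second table
    return max(TABLE[state], TABLE[sym(state)])
```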

So though I appreciate "Feather's Algorithm" as a solid framework for optimally solving a cube, I still cannot see the advantage compared to a subgroup/coset-based approach, which at least for me has a cleaner design from a mathematical viewpoint.
 
Maybe I am missing something, but I do not see why this is very remarkable. You can use any suboptimal solver to find some fairly short solution of length N, and then obviously any optimal solver only has to finish depth N-1 to show that length N is optimal.

You are not missing anything. If I am not mistaken, a complete search of depth N takes about 13.3 times longer than a complete search of depth N-1 for both Kociemba's and Feather's algorithm. In fact, I have been asking whether this speed-up is possible for the optimal Kociemba solver too, but so far I have got no answer. I find it quite a unique feature of the optimal Feather solver in comparison with the optimal Kociemba and Korf solvers (Korf's solver, however, is not designed to search for suboptimal solutions).
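The factor of about 13.3 is the growth rate of canonical move sequences in the half-turn metric (18 moves, never the same face twice in a row, opposite faces only in one fixed order - standard canonicalization assumptions). A short computation recovers it:

```python
import math

def canonical_counts(n):
    """Canonical length-n move sequences on the 3x3x3, half-turn metric:
    no face twice in a row, opposite faces only in one fixed order."""
    a, b = 9, 9      # length 1: 9 sequences end on U/R/F, 9 on D/L/B
    for _ in range(n - 1):
        # after U/R/F: 2 first-faces and 3 second-faces may follow (x3 turns)
        # after D/L/B: 2 first-faces and 2 second-faces may follow (x3 turns)
        a, b = 6 * a + 6 * b, 9 * a + 6 * b
    return a + b

ratio = canonical_counts(31) / canonical_counts(30)
print(round(ratio, 2))              # -> 13.35
print(round(6 + math.sqrt(54), 2))  # -> 13.35, the dominant eigenvalue
```

So each extra move of complete search multiplies the node count by roughly 13.35, matching the "about 13.3 times longer" figure.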

This description is not very accurate.

Feel free to tell me what is wrong with it. My thought process was the following: when stating that Kociemba's algorithm can find a solution of 20 moves or less in a fraction of a second, and that it can also find an optimal solution, some readers might get the impression that it can find an optimal solution in a fraction of a second on their typical home computer. All I tried to express was that a user could either use the same hardware, in which case finding an optimal solution takes significantly longer than a fraction of a second, or use significantly more hardware to find an optimal solution in a fraction of a second. By computing resources I meant both hardware and time, more precisely hardware * time.
 
There is nothing wrong with it. But the three-axes-in-parallel search is a fundamental part of the phase-1 optimal solver. Without it, the solution would take way too long.

Got it. I edited the last part from "a fast optimal solver requires significantly more computing resources than a fast suboptimal solver" to "a fast optimal solver requires significantly more computing resources than an equally fast suboptimal solver" to be clearer.

I think you got my point from my previous post. I took one scramble & solve (#5) from your optimal Python solver examples.

DBDBUFURUBLFBRLFFLFDRRFUBLDDDRBDURUUBDLDLFULRLRLUBFBRF
depth 15 done in 0.55 s, 1335570 nodes generated, about 2441181 nodes/s
depth 16 done in 7.5 s, 18770952 nodes generated, about 2502760 nodes/s
depth 17 done in 103.77 s, 260113815 nodes generated, about 2506756 nodes/s
depth 18 done in 1055.2 s, 2624360711 nodes generated, about 2487065 nodes/s
total time: 1167.05 s, nodes generated: 2904680714
L1 D1 L2 U2 F1 U1 D2 F2 U1 R1 F1 B1 R2 U3 R1 F1 U3 F3 (18f*)

As we can see, the search at depth 18 takes more than 90% of the total search time in this example. If a two-phase approach were implemented in Kociemba's optimal solver, just like in Feather's solver, which found an optimal 17+1 solution, you might not need to search at depth 18 at all.

I do not see why this is very remarkable

For this particular scramble, a 90% saving in search time looks like a very remarkable feature to me.
 
Got it. I edited the last part from "a fast optimal solver requires significantly more computing resources than a fast suboptimal solver" to "a fast optimal solver requires significantly more computing resources than an equally fast suboptimal solver" to be clearer.
It is more complicated than that, and the above does not make it clearer. The speed of a suboptimal solver depends on the quality of the solution you want to have, so the expression "fast suboptimal solver" seems problematic. If you want, say, a solution of <=22 moves, it usually will be found immediately. But take your example above: the suboptimal solver has to find an 18-move solution faster than the optimal solver to be helpful. This may happen, but it is in no way guaranteed. In your example my algorithm finds a 19-move solution almost immediately, but it does not find an 18-move solution faster than the optimal solver (I stopped after 20 min without a result).
So with my algorithm this does not work.
If you can confirm that with Feather's algorithm the suboptimal solver behaves better and really is a shortcut to finding optimal solutions, this would indeed be progress.
 
the expression "fast suboptimal solver" seems problematic

Agreed. That is why I mentioned a time span of a fraction of a second ("A randomly scrambled cube would be typically solved in a fraction of a second in 20 moves or less...") to indicate to the reader what is meant by a "fast suboptimal solver" in this context.

If you can confirm that with Feather's algorithm the suboptimal solver behaves better and really is a shortcut to finding optimal solutions, this would indeed be progress.

As a non-programmer, I cannot tell you what is in the source code. I can only tell you my observations from the user interface (go ahead and make your own observations too; after all, the solver is online, so anyone can use it).

On my crappy computer (so never mind the search times), this is what I see when using Google Chrome, choosing Solver 2, loading the 835 MB Dist3 file, changing Search Time from 5s to 500t, entering colors and hitting the Solve button:

Searching
Current Solution: 21 Moves
Current Solution: 20 Moves
Current Solution: 19 Moves
Current Solution: 18 Moves

39.56 17+1 W2
Optimal

Then in the shortened solve log, this is what I see:

Concurrent Processes = 4
Search Array Gen Depth = 10

Worker 1:
Depth 12 completed, 133 nodes, 0 / 0 tests
Depth 13 completed, 2338 nodes, 0 / 0 tests
R D L' F2 L U B2 L B2 R L2 F B' L B2 D2 L2 B' F R2 B' F' [14+8] (22f)
Depth 14 completed, 38440 nodes, 51 / 8 tests
R U L2 D' B U2 R' F' R2 B L F2 R2 B L' R2 D2 L2 F2 D2 U2 [15+6] (21f)
Depth 15 completed, 567193 nodes, 858 / 126 tests
Depth 16 completed, 7978342 nodes, 16791 / 2168 tests
R2 L' F U2 R2 D2 R2 D R B L D' L U' R' U F B2 F2 L2 [17+3] (20f)
R2 L' F U2 R2 D2 R2 D R B L D' L U' R' U F' B2 L2 [17+2] (19f)
Depth 17 completed, 107463697 nodes, 101325 / 12838 tests
39.56 Search Time

Worker 2:
Depth 11 completed, 19 nodes, 0 / 0 tests
Depth 12 completed, 256 nodes, 15 / 2 tests
Depth 13 completed, 2305 nodes, 51 / 4 tests
Depth 14 completed, 32260 nodes, 312 / 34 tests
F2 R2 F B U L B U B U2 D R' D' R B' R2 D2 L2 B2 R2 F2 D2 F2 [15+8] (23f)
U2 L2 F' D2 B' D2 L' U L F2 B2 L' U' D2 B' F2 L2 B2 R2 D2 B2 [15+6] (21f)
Depth 15 completed, 468313 nodes, 969 / 126 tests
F' R D2 F2 L B L' U R2 D B2 R' U2 D2 L F' L2 D2 F2 R2 [16+4] (20f)
Depth 16 completed, 6623509 nodes, 9864 / 1270 tests
F2 R' U R' B2 R2 L' U2 R2 B R' F' R D' F' D L B2 L2 [17+2] (19f)
U B U' R' F2 D R2 D R' D2 F R2 U F2 R' U' F' L2 [17+1] (18f)
Depth 17 completed, 91849453 nodes, 22296 / 2816 tests
35.09 Search Time

Worker 3:
Depth 12 completed, 83 nodes, 0 / 0 tests
Depth 13 completed, 1820 nodes, 0 / 0 tests
L B2 L2 F D' B2 D B' D' R' L U R2 D F2 R2 D2 F2 L2 F2 U2 [14+7] (21f)
Depth 14 completed, 31721 nodes, 174 / 26 tests
Depth 15 completed, 476369 nodes, 729 / 98 tests
Depth 16 completed, 6912152 nodes, 11073 / 1412 tests
D2 R2 U' F2 R' D L U R2 D B R' F L F2 U' R' U2 B2 F2 [17+3] (20f)
Depth 17 completed, 98085122 nodes, 133227 / 17146 tests
37.28 Search Time

Worker 4:
Depth 11 completed, 29 nodes, 0 / 0 tests
Depth 12 completed, 248 nodes, 0 / 0 tests
Depth 13 completed, 2624 nodes, 0 / 0 tests
Depth 14 completed, 37229 nodes, 228 / 36 tests
L2 D' B D' F' R' F2 L' B2 D B D L' D' R L2 U2 R2 D2 F2 R2 B2 U2 [15+8] (23f)
L2 F D' B2 R' B2 L' B R' F2 R2 B2 R' B2 U' L2 U2 L' R' D2 L R' [15+7] (22f)
L2 F D' B2 R' B2 L' B L' U2 R2 D2 L' F2 D' B2 L R F2 L' R' [15+6] (21f)
Depth 15 completed, 532667 nodes, 1101 / 152 tests
L2 D' F B U' L' D2 L' F' L U' R' F R D L B2 L2 U2 R2 [16+4] (20f)
Depth 16 completed, 7429256 nodes, 12267 / 1628 tests
B R' F2 D R2 D R' D2 F R2 B' U F2 B R' U' F' L2 [17+1] (18f)
Depth 17 completed, 99717425 nodes, 54291 / 7022 tests
37.74 Search Time

I would not formulate it myself as "with Feather's algorithm the suboptimal solver behaves better and really is a shortcut to finding optimal solutions" - but go ahead with that formulation if it makes better sense to you.

Here is a description of how the optimal solution was found:
  • at depths 0-13, no solution was found
  • at depth 14, two solutions were found; possibly worker 3 found [14+7] (21f) quicker than worker 1 found [14+8] (22f) - 21 moves shown as the current solution
  • at depth 15, some solutions were found, but none of them was shorter than the 21f already found at depth 14
  • at depth 16, two [16+4] (20f) solutions were found - 20 moves shown as the current solution
  • at depth 17, some solutions were found; I cannot tell whether a [17+2] (19f) was found quicker by worker 1 or worker 2 - 19 moves shown as the current solution
  • then worker 2 found a [17+1] (18f) solution quicker than worker 4 and completed the search of depth 17 in 35.09 seconds - 18 moves shown as the current solution
  • to prove that the 18f solution from worker 2 is optimal, all remaining workers must complete the search of depth 17 (searching for a [17+0] solution, of which there is none); the slowest one is worker 1 at 39.56 seconds - now it is proved that the [17+1] (18f) solution from worker 2 is optimal
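The stopping rule in the walkthrough above can be condensed into a short sequential sketch (hypothetical names; the real solver runs four workers in parallel): exhaust phase-1 depths in increasing order, let every [d + phase 2] hit lower the current upper bound, and declare a length-N solution optimal once depth N-1 completes with nothing shorter.

```python
# Hypothetical sketch, sequential instead of the real four parallel
# workers. search_depth(d) stands in for an exhaustive phase-1 search
# at depth d and returns the total solution lengths (d + phase-2
# length) of all solutions found along the way.

def feather_style_optimal(search_depth, max_depth=20):
    best = max_depth + 1                  # "no solution <= max_depth known yet"
    for d in range(max_depth + 1):
        for length in search_depth(d):    # suboptimal hits on the way
            best = min(best, length)
        if d >= best - 1:
            # depth best-1 is now fully searched and produced nothing
            # shorter, so no solution of length < best can exist
            return best
    return best

# replaying the hit lengths from the solve log above
hits = {14: [22, 21], 15: [23, 21, 22, 21], 16: [20, 20], 17: [19, 19, 18, 18, 20]}
print(feather_style_optimal(lambda d: hits.get(d, [])))  # -> 18
```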
 