
I am working on a LEGO robot that can solve various sizes of Rubik's cubes. I have it working for 2x2x2, 3x3x3 and 4x4x4. The robot is large enough to handle a 5x5x5, 6x6x6 and mini 7x7x7, but I am struggling to find solvers for those sizes. I found various threads about NxNxN solvers but haven't found one that produces a low enough move count to be feasible for a robot. https://github.com/aditya-r-m/Rubiks-Cube, for instance, appears to work but produces 700+ moves to solve a reasonably scrambled 5x5x5; that would take my robot about 35 minutes to execute.

Any recommendations on solvers for these sizes (or NxNxN)? Ideally it would be something I can run from the command line that is lightweight enough to run on a Raspberry Pi 3, but I realize that may be wishful thinking.

If you did this 8 years ago you were pretty early on in the world of using LEGO to solve Rubik's cubes, very nice! There is a very popular robot out there called MINDCUB3R that can be built from a standard Mindstorms EV3 kit and can solve a 3x3x3. I built it a while back and made a little video:

It was designed by David Gilday, who has designed several well-known Rubik's-cube-solving LEGO robots.

MINDCUB3R is pretty cool, but I wanted to build something faster that could solve various sizes of cubes. What I came up with was heavily influenced by some of David's robots, so I definitely owe him some kudos. I need to take some more videos, but here is a short one I took while troubleshooting something. Note that I only started recording after it had already snapped a picture of each side to figure out the colors and compute a solution. Also, the cube wasn't very scrambled at the start, but you can get the general idea of how it works:

Right now it takes about 50 seconds to:

take an image of all six sides

process those images to find the squares in each and extract the mean RGB value for each square

process the RGB values to classify each square as U L F R B or D

compute a solution

Then it is about 3100 ms per move...so a 3x3x3 that takes 20 turns to solve can be done in just under 2 minutes. My goal is to get that down to 90 s; right now I spend almost 40 s taking images of all six sides (the EV3 CPU is pretty slow), so I need to do some digging and make that faster.

I wrote a 5×5×5 solver in Java a while ago, but the code is pretty horrific and I've been too "busy" (read: lazy) to clean it up for public consumption. It generates solutions of about 90 moves (OBTM) in a few seconds, but also uses a few gigabytes of lookup tables, which might not be suitable for your use. (If you want to take a look at the code for my solver anyway, I could upload it somewhere.)

Herbert Kociemba is also working on another big cube solver, but it's not finished yet. There's also unsolved's solver and IAssemble's solver, which aren't publicly released either. I don't know of any other "low-move-count" big cube solvers in development.


I did not read this thread before, so maybe it is a bit late to answer. The word "lazy" may also apply to me. I have not worked further on the solver since then, and I fear that when I dive into the code again it might be difficult to understand what I did before. At the moment I am trying to learn Python, and the best way to do so seems to me to be porting my two-phase algorithm (for the 3x3x3) to Python. I am writing it in its fully developed form, using symmetry reduction and large pruning tables, in contrast to my Java implementation ( http://kociemba.org/download.htm ), which is much less powerful.
Once I have finished this, I will definitely return to the NxNxN project sooner or later.

@Herbert Kociemba do you (or anyone else) have any interest in working on an open-source Python big-cube solver? (hopefully NxNxN eventually) I am working on a Python-based solver where the goals are:

Must be able to run on a Raspberry Pi 3 (1 GB RAM, 1.2 GHz quad core)

solve a 4x4x4 cube

solve a 4x4x4 cube in a reasonable number of moves...hopefully less than 60 moves on average

solve a 5x5x5 cube

solve a 5x5x5 cube in a reasonable number of moves...hopefully less than 100 moves on average

solve a 6x6x6 cube

solve a 6x6x6 cube in a reasonable number of moves ("reasonable" is TBD)

solve a 7x7x7 cube

solve a 7x7x7 cube in a reasonable number of moves ("reasonable" is TBD)

solve a NxNxN cube

I currently have #2 and #4 done and I think I am getting close on #3 (I have not started on #5 -> #10). I can solve a 4x4x4 cube in an average of 93 moves. For 4x4x4 I solve via:

Solve UD centers via a lookup table (8 or 9 moves on average)

Solve LFRB centers via a lookup table (8 or 9 moves on average)

I am able to work around the Pi 3's limited RAM by storing the lookup tables in text files where the entries are sorted; I then do a binary search through the file for the current state of the cube. So I never load the lookup tables into memory, I just open a file handle and search.
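That scheme can be sketched as follows (a hypothetical illustration, not the actual repo code; it assumes each table entry is one fixed-width, newline-terminated line of the form `state:moves`, sorted by state):

```python
import os


def file_binary_search(path, key, line_width):
    """Binary-search a text file of fixed-width, sorted lines.

    Each line is assumed to be exactly `line_width` bytes, formatted as
    "<state>:<moves>" padded with spaces plus a newline. Returns the moves
    string for `key`, or None if the state is not in the table. Only a
    file handle is held open; the table is never loaded into RAM.
    """
    with open(path, "rb") as fh:
        lo, hi = 0, os.path.getsize(path) // line_width - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            fh.seek(mid * line_width)          # jump straight to line `mid`
            line = fh.read(line_width).decode().rstrip()
            state, _, moves = line.partition(":")
            if state == key:
                return moves
            if state < key:
                lo = mid + 1
            else:
                hi = mid - 1
    return None
```

The fixed line width is what makes this work: `seek` can land exactly on the start of the middle line, so each probe is O(1) and a table of millions of entries needs only ~20-30 probes.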

My next goal is to build a lookup table that can pair the 4x4x4 edges without destroying the centers, once I have that working #4 should be done.

I know Python really well, but I am fairly new to the world of Rubik's cubes and this is the first time I've attempted writing a solver, so I'm sure there would be huge benefit in having folks who know way more about this than I do jump in on the project. I have been picking xyzzy's brain offline and he has been a big help answering my questions.

The code lives here (warning: I haven't had a chance to clean things up yet, as the code is churning at a pretty high rate right now; I'll clean it up and make it more readable once things settle down): https://github.com/dwalton76/rubiks-cube-solvers/tree/master/NxNxN

@Herbert Kociemba do you (or anyone else) have any interest in working on an open-source Python big-cube solver? (hopefully NxNxN eventually) I am working on a Python-based solver where the goals are:

Maybe later. I want to get some more experience with Python first while programming my 3x3x3 two-phase solver. By the way, I also have a Raspberry Pi 3 (1 GB RAM, 1.2 GHz quad core) and the routines I have programmed so far also work there. The memory requirements for my solver (mostly for the pruning tables) are about 100 MB, which the Raspberry should handle without problems.

The fact that yours will run on a Pi3 is some great news.

I'll keep posting updates to this thread on where things are with my solver...I made some improvements to 4x4x4 edge pairing and have the average number of moves down from 93 to 83. There is still room for improvement on edge pairing...I have some ideas to explore.

It depends. Python is really slow compared to a statically typed language, and the Pi 3 is really slow compared to a PC. This combination makes the performance (slow)^2.
Take the pruning table for phase 1, for example. It gives the exact number of moves (modulo 3) needed to solve phase 1 for a given state. Reduced by 16 symmetries, there are 140,930,280 entries. With Pascal/Delphi on a PC it takes less than a minute to compute this table. My Python implementation takes about three hours on a PC! I use numpy for the arrays here. Computing the table with Python on the Pi 3 takes about two days!
Once the tables are created they load very fast, within a few seconds, even on the Pi 3. I am still quite optimistic that this will give a powerful solver on the Pi 3.
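Since the pruning values are stored modulo 3, each entry needs only two bits, so four entries fit in one byte and 140,930,280 entries take roughly 35 MB. A sketch of that packing (an illustration of the idea, not Kociemba's actual code):

```python
from array import array


def make_packed_table(num_entries):
    """A table of values in {0, 1, 2}: two bits per entry, four per byte."""
    return array("B", [0]) * ((num_entries + 3) // 4)


def set_mod3(table, idx, value):
    """Store value (0..2) for entry idx."""
    byte, slot = divmod(idx, 4)
    shift = 2 * slot
    table[byte] = (table[byte] & ~(3 << shift) & 0xFF) | (value << shift)


def get_mod3(table, idx):
    """Read back the 2-bit value for entry idx."""
    byte, slot = divmod(idx, 4)
    return (table[byte] >> (2 * slot)) & 3
```

Storing only the distance modulo 3 is enough for the IDA* search because the true pruning distance changes by at most 1 per move, so the exact value can be tracked incrementally as moves are applied.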

The Pi 3 feels like a supercomputer compared to the 300 MHz LEGO Mindstorms EV3 brick.

Just speaking for myself, I would be fine building the tables on a PC and copying them to the Pi 3 or EV3 brick. Or heck, if they compress down enough, make them available for download.

The program will need about 80 MB of RAM, so I think it will not run on the EV3 brick.

Edit: I found that the numpy module is very slow for element-by-element array access and that using the standard array module gives much better performance. It has no two-dimensional arrays, but it is not difficult to change the code to use only one-dimensional arrays. I think it will run 3x as fast when finished.
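Giving up numpy's 2-D indexing is painless because a 2-D table maps onto a flat array with `row * COLS + col`; a minimal sketch (the table shape here is just an example, e.g. a move table with 3^7 = 2187 states of one coordinate times 18 face turns):

```python
from array import array

ROWS, COLS = 2187, 18  # example shape: one coordinate's states x 18 moves

# A flat array of unsigned shorts replaces a ROWS x COLS numpy array.
table = array("H", [0]) * (ROWS * COLS)


def get(row, col):
    # Entry (row, col) lives at flat index row * COLS + col.
    return table[row * COLS + col]


def put(row, col, value):
    table[row * COLS + col] = value
```

Plain-integer indexing into `array.array` avoids the per-access overhead numpy incurs when reading single scalars from Python code.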

The Python 3x3x3 solver is working now in a basic form. The two-phase algorithm works in only "one direction" so far. In the advanced form it should run in parallel in three different directions in three threads, or even in six threads if we include the inverse cubes. On my PC I got the following test results:
Creating all tables: less than an hour
Solving 100 random cubes within <= 20 moves: 380 seconds
Solving 1000 random cubes within <= 21 moves: 138 seconds

On a Raspberry Pi 3, 100 cubes were solved within <= 21 moves in 150 seconds, so it is about 10 times slower than the PC.

When you say Python and threads, are you using actual Python threads or are you breaking the work up into subprocess calls? Python threads kind of suck in that they won't use multiple cores, so if you are CPU-bound they don't end up helping that much.


I will use Python threads. I know about the Python Global Interpreter Lock (GIL) and surely it will use only one CPU, but I still expect an increase in performance. I think I would have to load the pruning tables several times if I use several processes - or am I wrong?

Ack, just wanted to make sure you were aware of the whole GIL thing. It can be a shocker the first time you break something into multiple threads to get around being CPU-bound, only to find that you are still CPU-bound due to the GIL.

Agreed, I think each process would have to load the pruning tables, which I'm guessing would consume a lot of memory. I've been using text files (with the content sorted) for table storage, and then I do a binary search through the file. It is pretty fast and doesn't use much memory, as all you are doing is moving the file handle around in the file.

Out of the 72 steps to pair the edges, 27 of them were to pair the last two wings. So 45 steps to pair the first 22 wings and 27 steps to pair the final two...there are some savings to be had here.

6x6x6 solver, I am just getting started, I can't solve a 6x6x6 yet.

I need to get the tables compressed enough to check in to GitHub (there is a 100 MB limit per file) in case anyone wants to try this out.

Ack, just wanted to make sure you were aware of the whole GIL thing. It can be a shocker the first time you break something into multiple threads to get around being CPU-bound, only to find that you are still CPU-bound due to the GIL.

I got the multithreaded part working now, and the results are much better than I expected with the GIL in mind. I solved 1000 random cubes with the condition to return a solution with at most 20 moves. On my PC I got:

single thread: 3910 seconds
two threads, cube and inverse cube: 1026 seconds
three threads, cube and the cubes rotated 120° and 240° around the long diagonal: 641 s
six threads, all combinations of rotated and inverse cubes: 320 s

So there is a performance increase by a factor of 12 when running six threads in parallel, without using more CPU time because of the GIL!
The reason is that the solution time for a cube depends greatly on the phase-1 depth needed to find a solution within 20 moves. So if, in one of the six combinations, the phase-1 length is one or two moves shorter on average, the factor of 12 is easily explained.

Edit: Using the six-thread version, my Raspberry Pi 3 took 1 hour to solve 1000 random cubes within <= 20 moves.
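The scheme above (search the cube plus its rotated and inverse variants in parallel and keep whichever solution comes back best) can be sketched with `concurrent.futures`. Here `fake_search` is a placeholder for the real two-phase search, and this toy waits for all variants rather than stopping at the first one under the move cap:

```python
import concurrent.futures


def fake_search(variant):
    """Stand-in for the two-phase search on one transformed cube.

    The real solver searches the cube, its inverse, and its 120/240
    degree rotations about a long diagonal; here each variant just
    returns a canned maneuver so the threading plumbing can be shown.
    """
    name, maneuver = variant
    return name, maneuver


def best_of_variants(variants, max_workers=6):
    # One thread per transformed cube. Under the GIL the threads
    # interleave rather than truly run in parallel, but whichever
    # variant happens to have a shallower phase-1 solution still
    # finishes much sooner, which is where the speedup comes from.
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as ex:
        results = list(ex.map(fake_search, variants))
    return min(results, key=lambda r: len(r[1].split()))
```

The solution for any transformed or inverted cube converts back to a solution for the original cube by applying the inverse transform to the maneuver.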

Increasing the phase-1 depth by 1 increases the computing time by a factor of about 13. So the many cases in the first table, with one thread and depth 13 or 14, are responsible for the longer solving times.