
Neural Networks for determining whether algorithms are good?

whauk | Germany | WCA 2008KARL02 | Thread starter, #1
The General Idea
I have an idea:
We now have many easily accessible computer programs that can search for the shortest algorithms for a given position. (Be it a ZBLL, OLL-CP or something else, it doesn't matter.) However, the shortest is not always the best. People then often search for two- or three-gen algs, or for something like F <R,U> F', which might turn up decent algs. But this involves a lot of experimentation and time, and one can never be sure one hasn't missed a good algorithm.

So my idea is essentially to have a computer program that can determine whether any given algorithm is good or bad. Then it would be possible to generate thousands of algs for a certain position and just let the program search for the best ones.
How can this be achieved? Let me give a short introduction to neural networks:

Neural Networks
A neural network is a computer program that takes a certain input (here, a move sequence) and produces an output (here, how fast the algorithm is), with its behaviour fitted to numerical data (many example execution times for given algorithms).
Let me elaborate a little bit:
Single Neurons
The smallest element of a neural network is a neuron. A neuron takes several inputs (numbers), forms a weighted sum of them, and decides whether this sum is above or below some threshold. It then outputs a number that carries the information of how far above or below the threshold the weighted sum was. This can be interpreted as the probability that the neuron should output "yes" or "no".


An example: a neuron has to answer the question "Am I going to the movies today?". As inputs it has the following:

A: Are my friends coming with me? (Value between 0 and 10, equal to the expected number of friends coming)
B: Is there a movie I like? (Value between 0 and 10, measuring how much I will like the movie)
C: Is there a rat in my basement? (Value between 0 and 100, equal to the expected number of rats)

Clearly C is completely unrelated, so we give it the weight 0. However, A and B will influence my decision. A social person might give a large weight to A, say 5, and a smaller weight to B, say 2. The threshold might encode how motivated the person is in general to go to the movies; say in this case it is 25.
So the neuron computes 5A + 2B + 0C and compares* it to 25. When this is much bigger it outputs something close to 1 (= "yes"), and when it is much smaller it outputs something close to 0 (= "no"). The output value can be interpreted as the probability that I will indeed go to the movies.

*The actual computation going on is: output = 1 / (1 + e^(-(5A + 2B + 0C - 25))), i.e. the sigmoid function applied to the weighted sum minus the threshold. Some clever people came up with this, I don't know how, but I trust them at the moment.
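For the curious, here is a rough sketch in Python of what such a neuron computes. The input values are made up for illustration; the weights and threshold are the ones from the movie example above:

```python
import math

def neuron_output(inputs, weights, threshold):
    """A single sigmoid neuron: output near 1 when the weighted sum is
    far above the threshold, near 0 when it is far below."""
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-(weighted_sum - threshold)))

# Movie example: A = 8 (friends), B = 6 (movie appeal), C = 2 (rats).
print(neuron_output([8, 6, 2], [5, 2, 0], 25))  # ~1.0, i.e. "yes"
print(neuron_output([1, 2, 2], [5, 2, 0], 25))  # ~0.0, i.e. "no"
```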

Networks consisting of Neurons
This was the process in a single neuron. However, to make it useful we have to create a network of many interacting neurons. For this we first encode the algorithm into neuron speech, e.g. we have a neuron that outputs 1 if and only if the first move is an R', and 0 for anything else. For every possible move at every possible position in the move sequence we have a single neuron. Then there is a second layer of neurons receiving inputs from the first layer, a third layer receiving inputs from the second layer, and so on. I don't know how many layers would be practical. The last layer consists of neurons (labelled (x,y)) that each have to decide the following question: "How probable is it that the algorithm can be executed in exactly x.y seconds?"
Taking the highest probability occurring in the last layer, we get a definitive answer on how fast the algorithm will be.
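A rough sketch of that first-layer encoding, assuming a vocabulary of the 18 face moves and a maximum algorithm length of 20 (both numbers are my own arbitrary choices):

```python
# One input neuron per (position, move) pair: it fires 1 exactly when
# that move occurs at that position, and 0 otherwise.
MOVES = [face + suffix for face in "RLUDFB" for suffix in ["", "'", "2"]]
MAX_LEN = 20  # assumed maximum algorithm length

def encode(alg):
    """Turn e.g. "R U R' U'" into a flat 0/1 vector of length 18 * MAX_LEN."""
    vec = [0.0] * (len(MOVES) * MAX_LEN)
    for pos, move in enumerate(alg.split()):
        vec[pos * len(MOVES) + MOVES.index(move)] = 1.0
    return vec

sune_input = encode("R U R' U R U2 R'")  # feed this to the second layer
```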

Initial Data
What is needed to program something like this is some (maybe a few hundred) sample inputs and desired outputs**, e.g. Sune should yield 0.6, the T-perm should yield 0.8 and so on. The neural network will then find the most suitable weights and thresholds on its own (this is the neural network "learning" from the data).

**However, in order to actually do this, one should probably normalize the data. So you don't take the actual time as data, but something like the actual time divided by the time it takes the person to do (RUR'U')x6. Then multiple people can contribute to the learning data in a meaningful way.
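A quick sketch of what that normalization could look like (all numbers invented for illustration):

```python
def normalize(times, baseline):
    """Divide each measured time by the person's (RUR'U')x6 time,
    so data from differently fast cubers becomes comparable."""
    return {alg: t / baseline for alg, t in times.items()}

# Two hypothetical contributors with different raw speeds:
person_a = normalize({"Sune": 0.9, "T-perm": 1.2}, baseline=1.5)
person_b = normalize({"Sune": 1.5, "T-perm": 2.0}, baseline=2.5)
print(person_a, person_b)  # both give Sune 0.6 and T-perm 0.8
```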

How can this even work?
There might, for example, be a neuron that essentially says "the longer the input, the higher the output". Then there might be a neuron that detects whether R U R' is part of the move sequence and, if it is, subtracts a little bit from the total time... and so on...
It is mostly impossible for a human to think of all the kinds of little subsequences that might make an algorithm a little faster. However, it is easy to program thousands of neurons that learn on their own what makes algorithms faster, and so the neural network as a whole can indeed produce useful values.
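To make this concrete, here is the kind of hand-written feature such a neuron might end up approximating. A trained network would discover features like this on its own; this is only an illustration:

```python
def contains_trigger(alg, trigger="R U R'"):
    """1.0 if a fast trigger appears anywhere in the move sequence."""
    return 1.0 if trigger in alg else 0.0

print(contains_trigger("R U R' U R U2 R'"))  # 1.0: Sune contains the trigger
print(contains_trigger("F R U2 B' D L"))     # 0.0: no trigger here
```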


Some blah blah
So much for the process. How likely is this to work? I think very! Neural networks can do all sorts of clever things, like read handwriting or recognize faces... recognizing how fast an alg can be executed seems almost trivial compared to those.

A problem that still has to be overcome is that different executions lead to very different times. E.g. Rw U' Rw U2 is much faster than L F' L D2. I have no idea how to fix this... Including every possibility leads to an undesirably high number of algorithms.

Thoughts on this?
I don't know how to program stuff. Feel free to do it on your own.

Further reading: I can recommend this book: http://neuralnetworksanddeeplearning.com/chap1.html
 
madmatthew1 | Leeds, England | WCA 2014LOWE01 | #3
If Google can make a neural network that determines where a photo was taken, then I believe we can use this for finding/testing algorithms. Maybe if we feed a network optimal solutions to states, it can find the best solution to any scramble.
 
Gauteng, South Africa | WCA 2014GRAY03 | #4
The company I work for uses machine learning and neural networks quite often. The programming involved is actually really minimal if you use a language with libraries available that handle the actual neural network theory part. You actually need to know pretty much nothing about how they work to apply them. At work we use Matlab with the Machine Learning toolbox, and at home I've tried Python with scikit-learn, and I know a few people who've used R with the neuralnet package. In all of them, the actual neural network part can be as little as 3 easy lines of code. The rest is just formatting the data into something suitable to train on.
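To illustrate, here is roughly what that looks like with Python and scikit-learn. The encoding and the two training examples are invented placeholders; a real attempt would need hundreds of timed algorithms:

```python
from sklearn.neural_network import MLPRegressor

# Same one-hot encoding idea as sketched in the first post.
MOVES = [face + suffix for face in "RLUDFB" for suffix in ["", "'", "2"]]
MAX_LEN = 20

def encode(alg):
    vec = [0.0] * (len(MOVES) * MAX_LEN)
    for pos, move in enumerate(alg.split()):
        vec[pos * len(MOVES) + MOVES.index(move)] = 1.0
    return vec

X = [encode("R U R' U R U2 R'"),                      # Sune
     encode("R U R' U' R' F R2 U' R' U' R U R' F'")]  # T-perm
y = [0.6, 0.8]  # normalized times, as suggested in the first post

# The actual neural-network part really is just a few lines:
model = MLPRegressor(hidden_layer_sizes=(50,), max_iter=2000)
model.fit(X, y)
print(model.predict([encode("R U2 R' U' R U' R'")]))  # anti-Sune estimate
```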

The hard part is getting the right data to train your neural network on, and I think that's especially difficult here. The reason Google's image recognition stuff works so well is that they've got millions upon millions of examples to train the network with. If you want meaningful results, you're going to need hundreds of cubers to time hundreds of algorithms, ideally people who know multiple algorithms per case and are relatively competent with all of them.

As a first pass, you could maybe try to train a neural network using the relative number of people using each alg (i.e. data from algdb.net). It probably won't be that accurate, but it will let you get an idea of whether the approach works, and you'll get familiar with the techniques needed.

I wish I had more free time. If I did, I'd do this myself.
 