yea i saw that

actually lately i like to think of the cube more in terms of linear algebra than groups... a cube state can be seen as a column vector, and every move is just a linear transformation... associativity, inverses of moves, everything works (moves don't commute, but neither does matrix multiplication, so that fits too). I haven't thought about it too much but I think I'd like to sometime

although there are some issues i can think of, such as the fact that the "rubik's cube space" is discrete... and finite...

edit: actually the finiteness and discreteness is not much of an issue... and all transformations could, I think, be modeled with swaps...

also what would be the metric on this space?

also I tried thinking of it as a graph... each node being a state, edges for each operation you can do on a cube... solution being just a BFS search on the graph?
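That graph picture works directly for small cases. Here's a minimal sketch in Python — the 4-element states and the two generator permutations are made-up stand-ins, not real cube moves, but the structure is the same. A plain BFS is fine for a 2x2 (about 3.7 million states) but hopeless for a 3x3 (about 4.3x10^19):

```python
from collections import deque

# Nodes are states (tuples), edges are moves. A move is a permutation:
# new_state[i] = state[perm[i]].
def apply(perm, state):
    return tuple(state[i] for i in perm)

# hypothetical generators for a toy 4-element "puzzle"
MOVES = {"a": (1, 2, 3, 0),   # cyclic shift
         "b": (0, 1, 3, 2)}   # swap last two

def bfs_solve(start, goal):
    """Shortest move sequence from start to goal, or None."""
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, perm in MOVES.items():
            nxt = apply(perm, state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None
```

Because BFS explores by depth, the first path found is automatically a shortest one — which is exactly why "God's number" style questions get phrased in terms of graph distance.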

Sounds interesting. I have tried thinking along these lines but couldn't move forward. Can you suggest a basis, a linear combination for the identity (solved cube), and a transformation for a basic turn (say, a U turn)?

I've also thought about this some. I asked Bruce about this, who pointed to the book "Adventures in Group Theory", which talks about permutation matrices. Per Bruce:

A permutation matrix is a square matrix with all 0's except for one 1 in each row and in each column. So you could use a 48x48 permutation matrix to represent Rubik's cube group elements. Each 1 in the matrix indicates a "facelet" (or facet) going from one location to another. The basic moves (such as U) move 20 facets and leave the other 28 facets fixed. So the matrix for U would have 1's on the diagonal for 28 rows/columns and 20 1's that are off the diagonal. This is a rather simple but not very compact way of representing cube group elements with a matrix. If you were considering the group where the centers can be arranged 24 different ways (such as via cube rotations), then you would need a 54x54 matrix.
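Bruce's description is easy to sketch in code. The helper below builds a permutation matrix from disjoint cycles; the actual index values I feed it for U are a made-up numbering (the real cycles depend entirely on how you label the 48 stickers), but the shape of the result — 28 diagonal 1's, 20 off-diagonal 1's, five 4-cycles — matches what he describes:

```python
import numpy as np

def perm_matrix(n, cycles):
    """n x n permutation matrix from disjoint 0-indexed cycles;
    anything not mentioned in a cycle stays fixed (1 on the diagonal)."""
    P = np.eye(n, dtype=int)
    for cyc in cycles:
        for src, dst in zip(cyc, cyc[1:] + cyc[:1]):
            P[src, src] = 0          # this facelet no longer stays put
            P[dst, src] = 1          # facelet at src moves to dst
    return P

# U moves 20 facelets as five 4-cycles; these particular indices are a
# hypothetical numbering, not a standard one.
U = perm_matrix(48, [[0, 1, 2, 3], [4, 5, 6, 7],
                     [8, 16, 24, 32], [9, 17, 25, 33], [10, 18, 26, 34]])
```

Two nice sanity checks fall out for free: the trace counts the fixed facelets (28), and since permutation matrices are orthogonal, the inverse move U' is just the transpose.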

This is one way of looking at it.

Another way that seems more natural to me is to simply use a column vector, where each individual element represents a piece. This works well for position of pieces, but can become a bit messy when you involve orientation.

To make things simple, consider a 2x2 (so just corners, if you like): you can label your corners from A to H (or 1 to 8 if you like) and create a vector out of this. Any moves would simply be transformations composed of elementary row operations (namely, interchanging rows in your column vector). The obvious basis for this would be your standard basis.
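As a concrete sketch of that corners-only picture (with the labels A..H encoded as 1..8, since a numeric vector is easier to multiply; the choice of which four indices U cycles is my assumption):

```python
import numpy as np

# label the corners A..H as 1..8; the solved state is just (1,...,8)^T
solved = np.arange(1, 9)

# U cycles the four top corners A->B->C->D->A (rows 0..3) and fixes the
# bottom layer -- exactly the row interchanges described above
U = np.eye(8, dtype=int)
U[:4, :4] = 0
for src, dst in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    U[dst, src] = 1   # new position dst receives the piece from src

after_u = U @ solved   # -> [4 1 2 3 5 6 7 8]
```

Applying U four times returns the solved vector, as a quarter turn should.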

Including orientation makes this messier, because now you need to include extra information about each piece. The most natural way I could think of to do this is to make each element in the column vector a vector itself (namely, a 3-component vector for corners, as there are 3 possible orientations per corner). Your transformation matrix must then contain appropriate elements; notably, its entries must themselves be transformation matrices composed of elementary row operations.

To type out a whole example, a solved 2x2 would be:

[[A,0,0]
 [B,0,0]
 [C,0,0]
 [D,0,0]
 [E,0,0]
 [F,0,0]
 [G,0,0]
 [H,0,0]]

And accordingly, a U could be something like:

[0,0,0,I,0,0,0,0
 I,0,0,0,0,0,0,0
 0,I,0,0,0,0,0,0
 0,0,I,0,0,0,0,0
 0,0,0,0,I,0,0,0
 0,0,0,0,0,I,0,0
 0,0,0,0,0,0,I,0
 0,0,0,0,0,0,0,I]

Where I represents a 3x3 identity matrix, and 0 is a 3x3 0 matrix. This case, being a U turn, is slightly less messy since you can consider orientation to not be altered.
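A quick sketch of that block structure in numpy. The twist matrix C is my addition: a 3x3 cyclic shift satisfying C^3 = I, which is what a corner-twisting move (an R or F, say) would put in its moved blocks; the U above uses only I blocks because it leaves orientation alone:

```python
import numpy as np

I3 = np.eye(3, dtype=int)
Z3 = np.zeros((3, 3), dtype=int)
# 3x3 cyclic shift: twists a corner one step; three twists = no twist
C = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]])

# the U from the text: position 4-cycle A->B->C->D->A with I blocks
# (no orientation change), identity blocks on the fixed bottom layer
blocks = [[Z3] * 8 for _ in range(8)]
for src, dst in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    blocks[dst][src] = I3
for k in range(4, 8):
    blocks[k][k] = I3
U = np.block(blocks)   # assembles the 8x8 grid of 3x3 blocks into 24x24
```

A twisting move would be built the same way, just with C or C @ C in place of I in the four moved blocks.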

For edges, you can also use complex numbers to indicate orientation... your transformations/moves will just accordingly need to contain complex components. So for example, you might have a position vector:

transpose([A, B, C, D, E, F, G, H, I, J, K, L])

And a transformation matrix for an R turn (I hope I didn't make a mistake):

[1,0,0,0,0,0,0,0,0,0,0,0
 0,1,0,0,0,0,0,0,0,0,0,0
 0,0,1,0,0,0,0,0,0,0,0,0
 0,0,0,0,0,0,0,i,0,0,0,0
 0,0,0,0,1,0,0,0,0,0,0,0
 0,0,0,0,0,1,0,0,0,0,0,0
 0,0,0,i,0,0,0,0,0,0,0,0
 0,0,0,0,0,0,0,0,0,0,0,i
 0,0,0,0,0,0,0,0,1,0,0,0
 0,0,0,0,0,0,0,0,0,1,0,0
 0,0,0,0,0,0,0,0,0,0,1,0
 0,0,0,0,0,0,i,0,0,0,0,0]

Using something like this, although I haven't done it, I assume you can deduce many things, like whether the transforms F, R, U, B, L, and D are linearly independent.
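The R matrix above checks out in at least one sense: transcribing it directly (0-indexed, so the i entries sit at rows 3, 6, 7, 11), R^4 comes out to the identity, as four quarter turns should:

```python
import numpy as np

R = np.eye(12, dtype=complex)
# the four cycled edge slots each pick up a factor of i per quarter turn
for dst, src in [(3, 7), (6, 3), (7, 11), (11, 6)]:
    R[dst, dst] = 0
    R[dst, src] = 1j
```

One quirk of using i rather than -1: R^2 contains -1 entries (e.g. at row 3, column 11, since i * i = -1), so intermediate powers aren't literal permutation matrices — the factors only cancel back to 1 at multiples of four quarter turns.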