I don't disagree. However, right now it seems that the best approach is to make things as accessible as possible. My original idea was to focus on fewer methods with large potential; I think that is a much better approach, but it seemed a bit too restrictive for the first iteration of this competition. I also had the idea of having teams compete against one another with a few specific methods. I already have plans to hold another competition down the line, and hopefully, if this one gets enough engagement, we can implement these more targeted ideas. PM me and we can brainstorm more.

I had a concern about this:
Some quirky methods are novel only in part, while others are completely novel.
Heise, for example, shares no steps with the big-4;
Tripod starts off like Petrus up to 2x2x3, then deviates quite a bit;
Something like 42 is very close to Roux, except instead of LS+CMLL+EO you do corner+c-CMLL+EOedge.
So if someone who has done lots of Roux tries 42, they're going to get close to their Roux times with little practice, because they already have FB, the SB-square, and LSE down, and they know CMLL algs. But no matter which of the big four you main, you'll have considerable trouble getting close to your times with Heise, because you're doing something completely new at every step. Tripod falls somewhere in the middle.
So going by solve times alone is probably not a fair way to compare entries.
If there were a way to factor in a method's novelty relative to the big-4, for example by somehow separating the methods into 2-3 categories depending on how similar they are to a big-4 variant, the comparison would be considerably more fair and the results could potentially even be useful.
Not sure how practical this is though.
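To make the category idea concrete, here's a minimal sketch of what a category-adjusted ranking might look like. All of it is illustrative: the tier names, the handicap multipliers, and the example times are assumptions I made up, not anything proposed in the thread.

```python
# Hypothetical novelty tiers relative to the big-4, each with a handicap
# multiplier applied to the raw mean solve time (lower adjusted = better).
# The tier names and multiplier values are illustrative assumptions only.
NOVELTY_FACTOR = {
    "variant": 1.00,  # mostly shared steps with a big-4 method (e.g. 42)
    "hybrid": 0.90,   # partially novel (e.g. Tripod)
    "novel": 0.80,    # no shared steps with the big-4 (e.g. Heise)
}

def adjusted_time(mean_solve_time: float, category: str) -> float:
    """Scale a raw mean solve time by the method's novelty handicap."""
    return mean_solve_time * NOVELTY_FACTOR[category]

# Made-up example entries: (method, novelty category, mean time in seconds).
entries = [
    ("42", "variant", 14.0),
    ("Tripod", "hybrid", 16.5),
    ("Heise", "novel", 19.0),
]

# Rank by adjusted time rather than raw time.
ranked = sorted(entries, key=lambda e: adjusted_time(e[2], e[1]))
for name, cat, t in ranked:
    print(f"{name}: raw {t:.1f}s -> adjusted {adjusted_time(t, cat):.1f}s")
```

The hard part, of course, is not the arithmetic but agreeing on which category each method belongs to and how large the handicaps should be.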