Craig K. (canadianpuzzler) wrote,

First 2013 Mystery Hunt thoughts

I don't know whether or not I'm going to get the chance to put together a longer set of thoughts on this year's mystery hunt. Thus, this post, which is relatively brief and general.

I'm not going to mince words here: I think some tough love is in order, because I fundamentally disagree with the philosophies I believe were behind the hunt. Please be advised that I don't think the hunt itself was horrible per se - in fact, I think there are some things that the Manic Sages did very well - but I do think it is very important to talk about some of the things that were not done as well, because they are things that will help future teams, possibly even including the Sages themselves, run a better hunt.

Some points of disagreement:

* Interesting solves are better than longer solves by an order of magnitude. I fundamentally disagree with making puzzles long at the expense of interesting. Hunt puzzles which are long *instead of* interesting are a serious problem.

* A puzzle with a really good "aha!" is like an optical illusion. An aha experience in a puzzle should come from seeing what you have in a new light, not from guessing at something in the puzzle that you cannot see. This matters for two reasons: firstly, because it makes the solver feel really, really smart; and secondly, because it protects the puzzle from being too hard or too grindy. There were puzzles which required you to be an Iron Chef contestant without knowing what the required ingredients were, and without knowing that those ingredients were not even supplied in the kitchen, and that's a serious problem.

If the Manic Sages somehow remain adamant that puzzles in which you must supply something you are not given, and which you are not given the tools to identify, are acceptable, then the next time we win a hunt, I will propose a special guess-the-number-between-one-and-a-googol puzzle for Manic Sages and Manic Sages only, and man the call queue for that puzzle myself.

* Puzzles should be tested, not only to ensure that they are solvable -- that is, not broken, without unintended alternate solutions, and with a clear termination point where the answer is apparently the answer -- but also to ensure that the organizers have a handle on the difficulty level of the specific puzzles and have tuned them appropriately. It is clear from the hunt experience itself that the latter did not happen, and that tends to suggest that the former did not happen uniformly either; if that's the case, then that's a serious problem.

A few words on why this is important: because hunt puzzles so often involve inferential solving rather than conventional solving only, there must be an unspoken contract between the puzzle constructor and the solver that the constructor a) is always playing fair in the first place and b) is providing a puzzle whose reward is proportional to the effort involved. Testing for difficulty ensures the latter part of the contract is fulfilled, while testing for solvability and lack of brokenness ensures the former is. When the puzzle provider fails to fulfill the contract, the solver goes through progressive stages of disappointment and apathy, and eventually disengages from the solving process altogether.

More specific and hopefully equally useful information may be coming after I've survived travel home and completed my work week. Until then, I feel better for saying this.