January 21st, 2013
08:09 pm - First 2013 Mystery Hunt thoughts
I don't know whether I'm going to get the chance to put together a longer set of thoughts on this year's mystery hunt. Thus, this post, which is relatively brief and general.
I'm not going to mince words here; I think some tough love is in order, because I fundamentally disagree with the philosophies I believe were behind the hunt. Please be advised that I don't think the hunt itself was horrible per se - in fact, I think there are some things that the Manic Sages did very well - but I do think it is very important to talk about some of the things that were not done as well, because they are things that will help future teams, possibly even including the Sages themselves, run a better hunt.
Some points of disagreement:
* Interesting solves are better than long solves, by an order of magnitude. I fundamentally disagree with making puzzles long at the expense of making them interesting. Hunt puzzles which are long *instead of* interesting are a serious problem.
* A puzzle with a really good "aha!" is like an optical illusion. An aha experience in a puzzle should come from seeing what you have in a new light, not from guessing at what you cannot see. This matters for two reasons: firstly, because it makes the solver feel really, really smart; and secondly, because it protects the puzzle from being too hard or too grindy. There were puzzles which required you to be an Iron Chef contestant without knowing what the required ingredients were - or even that those ingredients were not supplied in the kitchen - and that's a serious problem.
If the Manic Sages are somehow adamant that puzzles in which you must supply something you are not given, and which you are not given the tools to identify, are acceptable, then the next time we win a hunt, I will propose a special guess-the-number-between-one-and-a-googol puzzle for Manic Sages and Manic Sages only, and man the call queue for that puzzle myself.
* Puzzles should be tested, not only to ensure that they are solvable -- that is, not broken, with no unintended alternate solutions, and with a clear termination point where the answer is apparently the answer -- but also to ensure that the organizers have a handle on the difficulty of the specific puzzles and have tuned them appropriately. It is clear from the hunt experience itself that the latter did not happen, which tends to suggest that the former did not happen uniformly either, and if that's the case, then that's a serious problem.
A few words on why this is important... because hunt puzzles so often involve inferential solving rather than conventional solving only, it is important to have an unspoken contract between the puzzle constructor and the solver: the constructor is a) always playing fair in the first place and b) providing a puzzle whose reward is proportional to the effort involved. Testing for difficulty ensures the latter part of the contract is fulfilled, while testing for solvability and lack of brokenness ensures the former. When the puzzle provider fails to fulfill the contract, the solver goes through progressive stages of disappointment and apathy and eventually disengages from the solving process altogether.
More specific and hopefully equally useful information may be coming after I've survived travel home and completed my work week. Until then, I feel better for saying this.
Date: January 22nd, 2013 03:18 am (UTC)
Well put, Craig. That the Mystery Hunt is a big, epic, hard challenge does not mean teams need to try to outdo the last big, epic, hard challenge simply for the sake of one-upmanship. The best puzzle designers know the solver will win the war, and should enjoy winning it - not sigh with relief after somehow scraping past it, or feel humbled by failing at it.
I'm reminded of my solving experience with the "Guinness Record Puzzle" from the Slovakian "WSC," where there was no logical route save for guessing. Certainly not fit for a competition, or even just as a puzzle outside of competition for people to enjoy. Yes, the people at that event were strong solvers. Yes, we could pound a sudoku answer onto a grid by guessing a whole lot. But getting through that experience by whatever means is neither what I want to do as a solver nor what I would ever - EVER - set up for other solvers as a puzzle constructor. That broke the unwritten contract you bring up, and it's a lesson all constructors should learn before diving so deep into such a large project.
Date: January 22nd, 2013 03:47 am (UTC)
I suppose it's all conjectural until we hear what on earth they were thinking. I can't begin to guess whether it's a fundamental difference in hunt philosophy, or editing gone absent, or test-solving by the hive mind not revealing how painful so many of these puzzles would be to an individual solver, or just plain old novice-level constructing. Or all of the above. It's painful to me because I'm friends with so many of the Sages, and I really hoped the community's low expectations for this hunt would be proved wrong. I suppose it's possible that their team is just so different from others that they've been experiencing an entirely different Hunt from the rest of us. I saw the first few minutes of the wrap-up on in-flight wifi, and it wasn't clear to me that they realized yet that the problem went deeper than just "too long", and was more like "too long and not very much fun". But maybe they just had brave faces on, or maybe I didn't see enough of the wrap-up.
Date: January 22nd, 2013 07:00 pm (UTC)
I was having similar thoughts as I watched the flow of errata e-mails from Manic Sages and as the hours ticked by. Given that each of the most recent "puzzle suites" I've seen prior to this weekend contained a surprising number of goofs, I've come to the conclusion that, while test-solvers are necessary, proofreaders/editors are even more necessary.
Date: January 23rd, 2013 07:14 am (UTC)
I heard a rumor (maybe more like a random theory) that Manic Sages consider themselves a below-average hunt team who just got lucky to win one, and so they wrote a hunt for a team better than themselves. Thus, if a testsolver couldn't solve a puzzle, they gave the puzzle to another testsolver until they found one who could solve it.
Perhaps I should reserve judgement until I see the solutions, but the principle I think they violated the most is that a long, hard-fought puzzle should have a straightforward readout at the end. We had too many puzzles where we spent lots of energy getting to what we thought was the obvious end point, only to meet a brick wall of stuckness when the seemingly obvious, satisfying last step didn't work. The almost complete lack of flavor text exacerbated this (testsolving is where you learn what subtle hints need to be dropped) and also contributed to a general lack of... flavor. I realize that no flavor text is better than overlong flavor text with lots of red herrings, but there should be a balance.
Date: January 23rd, 2013 07:18 am (UTC)
Also, I am fine with the hunt sometimes going into Monday morning (another rumor I heard was that Manic Sages were targeting that length because they were unhappy with the past few short hunts). I'm not fine with the hunt needing many, many handouts (including removing the requirement to find the coin!) in order to make it end by Monday morning.