A new preprint on arXiv by Radek Pelánek takes a look at human problem solving in Sudoku, and it has been picked up by various media sources (see Medium and Slashdot).
arXiv is primarily a preprint outlet for physics, mathematics, and computer science, and so these sources mistakenly claim that this approach is a new way of studying human problem solving. In fact, Pizlo's Journal of Problem Solving, which I have referenced here before, was founded specifically to publish research of this nature.
To understand what this research shows, you need to understand how easy it is to solve Sudoku on a computer, and why the computer's approach is nothing like what humans do. AI researcher Peter Norvig, from whose book I learned AI (and who is now at Google), described a simple solution here: http://norvig.com/sudoku.html
A typical computer solution works through depth-first search, using basically the same mechanism crossword constructors use to set the fill of a crossword. Essentially, once you encode the rules of Sudoku, you can check whether any partial solution is legal. If a partial solution is legal, you try one of the solutions you get by adding a single additional number, and check whether that one is legal. If not, you backtrack and try something else. Sometimes you have to backtrack several steps because all of the next steps are bad, but eventually you will find the right answer.
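To make the idea concrete, here is a minimal sketch of such a backtracking solver. This is my own illustration, not Norvig's code; it assumes `grid` is a 9×9 list of lists with 0 marking an empty cell.

```python
def legal(grid, r, c, v):
    """Check whether placing v at (r, c) violates any Sudoku rule."""
    if any(grid[r][j] == v for j in range(9)):
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)  # top-left corner of the 3x3 box
    return all(grid[br + i][bc + j] != v
               for i in range(3) for j in range(3))

def solve(grid):
    """Depth-first search with backtracking: fill the first empty cell
    with each legal value in turn, recursing; undo the move on failure."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if legal(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # backtrack
                return False  # no value fits here: backtrack further upstream
    return True  # no empty cells remain: solved
```

Calling `solve(grid)` mutates the grid in place and returns `True` once a complete legal fill is found.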
This naive method is of course very time-consuming; far too time-consuming for a computer (or a human) to ever solve Sudoku that way. Norvig estimates the unconstrained search space at around 4 × 10^38 possibilities, an astronomically large number. Instead, constraints are used to make good guesses about which options are feasible for each square. Norvig shows that simple puzzles can be solved deterministically by applying two rules, which are essentially similar to what humans use: (1) if a number already appears in the same row, column, or 3×3 box, it can be eliminated from the other cells of that unit; (2) if only one number can work in a cell, fill it in.
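As a rough sketch of how those two rules can be mechanized, one can keep a set of candidate values per cell and repeatedly eliminate. This is my own minimal rendering, not Norvig's implementation, which uses a more elaborate constraint-propagation scheme.

```python
def peers(r, c):
    """All cells sharing a row, column, or 3x3 box with (r, c)."""
    br, bc = 3 * (r // 3), 3 * (c // 3)
    ps = {(r, j) for j in range(9)} | {(i, c) for i in range(9)}
    ps |= {(br + i, bc + j) for i in range(3) for j in range(3)}
    ps.discard((r, c))
    return ps

def to_candidates(grid):
    """Initial candidate sets: a filled cell keeps only its value,
    an empty cell (0) starts with all nine possibilities."""
    return [[{grid[r][c]} if grid[r][c] else set(range(1, 10))
             for c in range(9)] for r in range(9)]

def propagate(cands):
    """Apply the two rules until nothing changes: a cell reduced to a
    single candidate counts as filled in (rule 2), and its value is
    eliminated from all of its peers (rule 1)."""
    changed = True
    while changed:
        changed = False
        for r in range(9):
            for c in range(9):
                if len(cands[r][c]) == 1:
                    v = next(iter(cands[r][c]))
                    for (i, j) in peers(r, c):
                        if v in cands[i][j]:
                            cands[i][j].discard(v)
                            changed = True
    return cands
```

On easy puzzles this loop alone can finish the grid; harder puzzles leave some cells with multiple candidates, which is where search would take over.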
So, by constraining the possible legal fill-ins with simple rules, we can reduce the problem to something a computer can solve in a manageable time (ultimately searching only a handful of possibilities). But this is far too tedious for people: we find it difficult to keep track of a search like this, and to test any candidate solution we need to fill it in, check it, and erase it. So instead, humans try to apply more logical rules.
This is a really interesting paradox: it turns out a computer can solve these puzzles using approximate search strategies, whereas people must use a completely logical strategy (because search is too hard for us). And anyone who has played Sudoku and tried a guessing strategy has probably failed, just as I have.
So, what did Pelánek, who wrote the arXiv paper, show? His goal was to find an improved way of evaluating the difficulty of Sudoku puzzles. Current rating schemes evaluate difficulty by measuring the number of steps needed to solve the puzzle. Pelánek tried to predict solution times for about 2,000 puzzles based on a set of metrics, including the number of logical rules applied, the number of search (or backtracking) steps, and so on. That is, to predict how hard a problem will be for a human, you have the computer solve it and count how many times it had to apply certain rules. Some of these metrics worked pretty well on their own, but he was able to predict even better when he combined them. This is not surprising, and it is a bit different from how the result is being reported.
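To see why combining metrics can only improve the fit on the data being modeled, here is a toy least-squares illustration. All the numbers are invented for the example; nothing here comes from Pelánek's actual data, features, or modeling choices.

```python
import numpy as np

# Hypothetical puzzles: two computer-derived difficulty metrics and a
# solve time to predict (all values made up for illustration).
rules = np.array([12, 20, 35, 50, 65, 80], float)    # logical-rule applications
backs = np.array([0, 1, 3, 8, 15, 2], float)         # backtracking steps
times = np.array([40, 58, 95, 150, 220, 150], float)  # solve time in seconds

def fit_rss(features):
    """Ordinary least squares with an intercept; return the residual
    sum of squares (lower means a better fit to these puzzles)."""
    A = np.column_stack(features + [np.ones_like(times)])
    coef, *_ = np.linalg.lstsq(A, times, rcond=None)
    resid = times - A @ coef
    return float(resid @ resid)

rss_rules_only = fit_rss([rules])
rss_combined = fit_rss([rules, backs])
# A model using both metrics can never fit the same data worse than the
# model using one of them, since the smaller model is a special case.
```

This is just the nested-model property of least squares; whether the combined model also *generalizes* better is an empirical question the paper addresses with its real data.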
The paper concludes by discussing other problems this approach could be applied to. Crossword puzzles aren't mentioned, and although the spirit of the approach could be used, the application is pretty remote. However, my lab is taking a similar approach by building models that solve crossword puzzles. Although we are not predicting the difficulty of entire puzzles (this would be possible), we do predict solution times for individual clues, and we do so fairly well. This work is still in progress, and I will blog about it sometime in the future, but it shares a common notion with the Sudoku work: we are using computational models to assess the difficulty of a problem in order to learn something about how humans solve problems.