Saturday, September 27, 2014
A couple of recent papers from a research group led by Fredric Walinsky have come out of a large study of cognitive training, asking whether a specific video-game-style intervention (in the image) still shows benefits a year after practice. They happened to compare their speed training with an 'attention control' that was actually a crossword puzzle, so I thought it would be interesting to review the results here.
The PLoS paper from 2013 on this should be accessible to anyone, although the JAH paper is paywalled. Both came out of the same study but looked at different outcomes: the PLoS paper examined a set of cognitive tests, whereas the JAH paper looked at activities of daily living and depression.
Sunday, September 21, 2014
Moxley et al. study of crossword experts
When it rains, it pours: in the last month or so, three studies using crossword experts have been published, including a study by Toma that I blogged about earlier, a study published by my own lab, and one that just appeared by Moxley et al. Remarkably, each of these studies looked at crossword players from a different year of the ACPT, dating back to 2005 for the Moxley paper. In fact, I think that Tuffiash, one of the authors, appeared in the Wordplay movie.
The Moxley paper is coauthored by Anders Ericsson, who is known for identifying the 10-year, 10,000-hour rule of expertise that now often gets attributed to Malcolm Gladwell. This paper looked in part at players' past experience in order to predict performance at the tournament. It turns out there is indeed a relatively strong correlation between tournament performance and past experience, although it accounted for only 19% of the total variance. There must be other factors that explain why some people are better than others, and one of these turned out to be how long it has been since someone took a 'break' or 'hiatus'.
Many players have taken time off from playing over their history. As Moxley et al. write, as far back as Ebbinghaus (basically the first modern memory researcher), people have been interested in how long it takes to relearn knowledge and how that impacts retention. It turns out that taking a hiatus from solving negatively impacted final standings at the ACPT. Furthermore, the time since that hiatus was also important: the more time that had passed since the hiatus, the better the performance. Hiatus predicted another 5% of the variance, which is about 20% of the total explained variability.
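The way those percentages combine can be seen in a small hierarchical-regression sketch. The data below are simulated purely for illustration (the effect sizes and variable names are hypothetical, not from Moxley's dataset): the incremental variance explained by hiatus is just the difference between the R-squared of the fuller model and that of the experience-only model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical predictors (standardized, simulated -- not the Moxley data)
experience = rng.normal(size=n)
hiatus = rng.normal(size=n)

# Simulated outcome: experience contributes most, hiatus adds a smaller effect
performance = 0.45 * experience - 0.25 * hiatus + rng.normal(scale=0.85, size=n)

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_exp = r_squared(experience[:, None], performance)
r2_full = r_squared(np.column_stack([experience, hiatus]), performance)
incremental = r2_full - r2_exp  # variance uniquely added by hiatus

print(f"experience alone: R^2 = {r2_exp:.2f}")
print(f"experience + hiatus: R^2 = {r2_full:.2f}")
print(f"hiatus increment: {incremental:.2f} "
      f"({incremental / r2_full:.0%} of explained variance)")
```

With Moxley's numbers, the same arithmetic gives 5% added variance against roughly 25% total explained, which is where the "about 20% of the explained variability" figure comes from.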
One of the remarkable aspects of this study is that the reported time spent playing each week is a relatively poor predictor of final performance. All history-based measures together accounted for about 20% of the variance, and one of the best predictors here was age (with older people doing relatively worse in the tournament).
I have a theory about this: the best players do not actually spend a lot of time playing puzzles! This is evident from the bloggers who report their solving times. The best players may solve 5-10 puzzles a day, but they do it in 30-60 minutes. Although that is on par with the roughly 3 hrs/week of solving that Moxley reports across participants, it may be that the time fans at every level spend solving is constrained by their free time, and they solve as many puzzles as they can fit into that time. If so, time spent would be a poor predictor even when the number of puzzles solved is not. Moxley did not ask how many puzzles were solved, only how much time was spent. I have collected this data in our most recent study, so we may have a good answer to this question before too long.
Anyway, the upshot of this study is that the amount of time you take off of any deliberate practice can hurt you, possibly for years afterwards.
Tuesday, September 16, 2014
On Toma, Halpern, & Berger (2014)
Two crossword-related papers were published in the last month: one by my student, which I will post about soon, and another that came out last month by Michael Toma, Diane Halpern, and Dale Berger.
The Toma study compared crossword and Scrabble experts, and looked at whether there were any substantial differences between experts in these two domains. What is probably most interesting about this study lies in the differences they did not find, rather than the differences they did.