% fortune -ae paul murphy

Comprehension and Retention

I need your help in doing a simple experiment - and yes, I'll report the results here.

The purpose of the thing is to see whether there are grounds for believing that a more controlled experiment would reveal systematic differences in information comprehension and retention depending on whether that information is conveyed on paper or on screen.

This is important because a difference in either comprehension or retention, or both, would mean that a wide range of significant endeavours, from stock market and related financial decision making to public education, has been more affected by the transition from paper to screen than we previously understood.

What I'd like you to do is recruit a few other people, have them read a document, and then have them answer some on-line questions about it.

The structure is simple:

  1. The document has about 3,000 words - at a careful reading pace of roughly 250 words per minute, that's about 12 minutes, so please allow your subjects at least that long to read it.

  2. About half the people should be asked to answer the questions immediately after reading the document; the others about 24 hours later.

  3. About half the people should be asked to read the document on paper; the others on screen, using a PDF reader of their (or your) choice.

I have set up a phpsurveyor page to ask the questions and collect responses, so all respondents should get the same questions in the same way.

Please be aware that we want to end up with roughly equal numbers of people in all four categories, but that most people will find it easier to provide on-screen/immediate responses than on-paper/delayed ones. In other words, please try to recruit some people for each category, with special emphasis on the latter (more difficult to get) groupings as listed below:

  1. on screen, immediate answers
  2. on screen, day later answers
  3. on paper, immediate answers
  4. on paper, day later answers

Obviously the more participants you get involved, the better. In a controlled experiment random assignment of subjects to categories would be a given, but I don't think randomization has much meaning here - just make sure that if you have several volunteers you don't accidentally group them into categories by obvious control variables like age. One simple way to do that is sketched below.
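For anyone who wants a mechanical way to handle this, here's a minimal Python sketch of one approach: shuffle the volunteer list, sort it by age, and deal it round-robin into the four categories so each grouping gets a similar spread of ages. The volunteers, names, and ages shown are made up for illustration - substitute your own list.

  import random

  # Hypothetical volunteers as (name, age) pairs - illustrative data only.
  volunteers = [("alice", 23), ("bob", 41), ("carol", 35), ("dave", 58),
                ("erin", 29), ("frank", 50), ("grace", 19), ("henry", 44)]

  categories = ["screen/immediate", "screen/delayed",
                "paper/immediate", "paper/delayed"]

  random.shuffle(volunteers)           # randomize arrival order first
  volunteers.sort(key=lambda v: v[1])  # then order by age (the sort is stable)

  # Deal round-robin down the age-ordered list so every category
  # receives a spread of ages instead of an age cluster.
  assignment = {c: [] for c in categories}
  for i, person in enumerate(volunteers):
      assignment[categories[i % len(categories)]].append(person)

  for category, people in assignment.items():
      print(category, [name for name, age in people])

The same dealing trick works for any other variable you're worried about - just change the sort key.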

I'm not trying to settle anything here - the goal is to see whether more controlled research is likely to prove warranted. That is, your work will ultimately support a grant proposal for somebody - not necessarily me.

Please download the test document (happy.pdf) from here and point people at the questions here.

The story, incidentally, is something I wrote for LinuxWorld a few years ago and reflects a rather bitter real-life experience.

Implicitly, the present experiment tests the hypothesis that it's possible to get interesting results in this way. In a more formal, i.e. controlled, setting my hypothesis would be that people reading the case on paper will do consistently better than people reading it on screen. I'd expect the greatest agreement between the two groups on immediate questions relating to pervasive emotional content, and the least on day-later factual questions whose source lies near the middle of the case document.

Please help - the results won't have value unless lots of people, hundreds of them, contribute. Feel free to contact me directly if you have questions or concerns that you don't want to raise in the discussion/comments pages here. That's murph at winface, .com, of course.


Paul Murphy wrote and published The Unix Guide to Defenestration. Murphy is a 25-year veteran of the I.T. consulting industry, specializing in Unix and Unix-related management issues.