
Too Early to Say that the Pen is Mightier than the Keyboard

Recently an article captured the attention of the popular press and of those who teach. A few months ago, The Atlantic trumpeted, “To Remember a Lecture Better, Take Notes by Hand.” Scientific American also got into the act with the article “A Learning Secret: Don’t Take Notes with Your Laptop.” Even the research article upon which these news reports were based had a catchy title, “The Pen Is Mightier than the Keyboard: Advantages of Longhand Over Laptop Note Taking.” Soon education listservs began to advocate banning the laptop from the classroom. What’s not to like about a finding that fits our sneaking suspicions about digital devices? There is much to admire about the Mueller and Oppenheimer (2014) study that found handwritten notes superior to laptop notes; it is a tightly constructed study. But based on the Mueller article, should educators be telling students not to use their laptops for notetaking?

Let’s step back for a moment. Isn’t it rash to conclude from a single experimental study that notetaking by hand is always better than notetaking on a digital device? Just because a research finding seems plausible or agreeable doesn’t mean that the finding is valid or generalizable.

What is the evidence that handwritten notes are superior to laptop notes? Mueller and Oppenheimer (2014) found that students who took notes by hand did better on tests of the lecture material than those who took notes on a laptop, that the laptop notes tended to be verbatim transcripts of the lecture, and that the handwritten notes maintained their superiority even after the laptop students were directed not to write verbatim notes. Is this study enough for us to advise students to write notes only by hand?

Rarely does a single study prove a hypothesis. This point is made by educational psychologists Stanovich and Stanovich (2003), who warn against:

[T]he mistaken view that a problem in science can be solved with a single, crucial experiment, or that a single critical insight can advance theory and overturn all previous knowledge. This view of scientific progress fits nicely with the operation of the news media, in which history is tracked by presenting separate, disconnected “events” in bite-sized units. This is a gross misunderstanding of scientific progress and, if taken too seriously, leads to misconceptions about how conclusions are reached about research-based practices.

One experiment rarely decides an issue, supporting one theory and ruling out all others. Issues are most often decided when the community of scientists gradually begins to agree that the preponderance of evidence supports one alternative theory rather than another. Scientists do not evaluate data from a single experiment that has finally been designed in a perfect way. They most often evaluate data from dozens of experiments, each containing some flaws but providing part of the answer. (Emphasis supplied.)

Thus, it’s too early to conclude that laptop notes are inherently inferior to handwritten notes. At present, the studies are far too narrow to generalize to broad types of notetaking from all kinds of lectures. The lectures in the Mueller study were based on TED talks, 15-20 minute lectures on “topics that would be interesting but not common knowledge.” By contrast, the lectures encountered in college and in law school synthesize readings that students have been assigned beforehand. It’s a big leap to argue that laptop notetaking is inferior for all types of lectures.

Even if the results from the Mueller study could be replicated, that doesn’t necessarily mean the laptop disadvantage is immutable. An alternative explanation for the Mueller results is that students find it easier to type on a laptop than to write by hand, and that this greater typing facility induces them to type exactly what they hear. If that is the case, laptop students would have to learn to process their notes, either in real time during the lecture or in the time after it.

The Mueller findings have to be replicated by other scientists, and the experiment has to be expanded to cover other types of lectures and situations. More studies need to be performed, involving different subjects, different source lectures, and lectures that resemble those actually delivered in higher-education classrooms. Until other studies confirm the Mueller findings and extend them to broader types of classroom lectures, scientists cannot yet conclude that laptop notetaking is inferior to notetaking by hand.

What can or should we tell students concerning laptop notetaking? We can say that in a single study, in which students listened to short 15-20 minute lectures on general topics, laptop notes tended to be more verbatim than handwritten notes. We can tell students that a laptop disadvantage is a viable yet unproven hypothesis. But it is too early to conclude that the disadvantage exists, too early to say exactly what its basis is, and too early to say whether it can be overcome by training.

Caution should rule the day. Recent history is populated with examples of the public, and even scientists, leaping to conclusions from a handful of studies. That was the case several years ago, when initial studies indicated that beta-carotene had cancer-fighting properties. After more studies were done, the scientific community rejected the initial finding.

It could very well be that laptop notetaking is inferior to longhand notetaking, but as it stands now, the body of evidence supporting that hypothesis does not yet exist.


Filed under Educational Psychology

Is There a Gender Gap in Performance on the New York Bar Exam?

As thousands of law graduates take the bar exam, women graduates may wonder whether an important section of the exam, the Multistate Bar Exam (MBE), is stacked against them. Evidence of this is tucked away in an obscure report of the New York Board of Bar Examiners (2006, Oct. 4), the agency responsible for administering the licensing exam. The numbers below show that men outperform women on the MBE[1] by about 27 points, while women do slightly better, by about 10 points, than men on the Essay[2] portion of the New York State bar exam.

Score Means, Standard Deviations and Standard Errors

Domestic-Educated First-Time Takers, Females and Males, July 2005**

Gender                           MBE Scaled     Essay Scaled   NYMC Scaled    Total NY Bar
                                 Score x 5      Score          Score          Score*
Female   Mean                    713.28         734.08         719.75         724.34
         (SD)                    (72.53)        (69.21)        (76.85)        (63.74)
         (n = 3,264; SEM = 1.2)
Male     Mean                    740.04         724.12         724.62         730.54
         (SD)                    (73.96)        (70.80)        (77.84)        (64.47)
         (n = 3,299; SEM = 1.30)
Total    Mean                    726.69         729.07         722.20         727.44
         (SD)                    (73.96)        (70.80)        (77.84)        (64.47)

*The total score is computed as the weighted average of the adjusted MBE scaled score (40%), the Essay score (50%), and the NYMC score (10%). New York Board of Bar Examiners (2006, Oct. 4).

**New York Board of Bar Examiners (2006, Oct. 4).
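As a quick arithmetic check of that weighting, here is a minimal sketch in Python (not part of the Board’s report) that reproduces the male total score from the table:

```python
# Reproduce the male Total NY Bar Score from the table's component means,
# using the weights in the note above: 40% MBE, 50% Essay, 10% NYMC.
mbe, essay, nymc = 740.04, 724.12, 724.62
total = 0.40 * mbe + 0.50 * essay + 0.10 * nymc
print(round(total, 2))  # 730.54, matching the reported male total
```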

On the MBE, men scored better than women by 26.76 adjusted points. This 27-point difference seems big and pervasive. The gap is consistent across ethnic/racial groups, across first-time versus repeat takers, and between domestically and foreign-educated graduates. But a closer analysis of the numbers tells us that the gap isn’t that alarming.

Presumably the 27-point gap meets the statistical test of significance, which tells us whether the averages from two different groups are statistically different or not. Typically a .05 probability threshold is set, which means that there is a 5% or smaller chance of observing a difference this large by pure dumb luck if the two groups really had the same average. Put differently, 5% represents the acceptable level of risk of being wrong when we conclude that the two averages are different.

But the fact that two means are statistically different doesn’t tell us whether that difference is big enough to have practical significance. To paraphrase Cummins (2014), it’s wrong-headed to make too much of the statistical significance, because it is largely a product of the enhanced sensitivity that comes with a sample size as large as the one here, over 6,000 people. Instead of relying on a test of statistical significance, we need an analysis of “effect size.”
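To see how a sample this large all but guarantees statistical significance, here is a minimal sketch using the summary statistics from the table. It assumes a standard two-sample (Welch’s) t test; the Board’s report does not say which test, if any, it applied:

```python
from scipy.stats import ttest_ind_from_stats

# Welch's two-sample t test computed from the table's summary statistics.
# Illustration only: the Board's report does not state which test it used.
t, p = ttest_ind_from_stats(
    mean1=740.04, std1=73.96, nobs1=3299,  # male MBE scaled score
    mean2=713.28, std2=72.53, nobs2=3264,  # female MBE scaled score
    equal_var=False,                       # Welch's variant
)
print(f"t = {t:.1f}, p = {p:.1e}")  # t is about 14.8; p is far below .05
```

With thousands of takers per group, even a modest gap produces a vanishingly small p value, which is exactly why significance alone is uninformative here.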

You can use a formula to determine effect size (Cohen’s d, the difference between the two means divided by the pooled standard deviation), and there are handy online calculators for it. Rounded off, the effect size of sex is .37, which by conventional benchmarks is a small-to-moderate effect on MBE performance. The other statistic, r squared = .03, tells us that about 3% of the variance in MBE scores is associated with sex.
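The arithmetic is easy to reproduce from the table. Below is a minimal sketch; the conversion from d to r uses the standard approximation, which assumes roughly equal group sizes (true here):

```python
import math

# Cohen's d from the table's MBE means and SDs: d = (M1 - M2) / pooled SD.
m_f, sd_f, n_f = 713.28, 72.53, 3264   # female
m_m, sd_m, n_m = 740.04, 73.96, 3299   # male

pooled_var = ((n_f - 1) * sd_f**2 + (n_m - 1) * sd_m**2) / (n_f + n_m - 2)
d = (m_m - m_f) / math.sqrt(pooled_var)

# Convert d to a point-biserial correlation, then square it to get the
# share of score variance associated with sex.
r = d / math.sqrt(d**2 + 4)

print(f"d = {d:.2f}, r^2 = {r**2:.2f}")  # d = 0.37, r^2 = 0.03
```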

In other words, sex has a small relationship to performance on the MBE: it accounts for only about 3% of the variance in scores, which means that the remaining 97% of the variability in performance is due to factors other than sex. Thus, it would be hard to claim that the MBE is biased against women examinees.

[1] The Multistate Bar Exam is a 6-hour test that requires examinees to answer 200 multiple-choice questions on the application of law in several doctrinal areas.

[2] The essay portion tests the ability of examinees to identify the applicable New York state law and write essays applying that law to fact patterns.

Sam Sue, Copyright 2014.


Filed under Educational Psychology, legal education