How to Make Test-Enhanced Learning Work in a Law School Classroom

In the public's mind, and the legal academy is part of that public, a growing chorus of criticism holds that there is too much testing, and that this over-evaluation leads to student anxiety and educational bean counting. So when reports of studies showing that frequent testing might actually be good for students began to appear, many heads turned.

What's important to understand is that these studies were about low-stakes practice tests designed to help learners, as opposed to "high stakes," one-time-only exams like finals, midterms, or standardized tests that have a huge impact on how a student is graded or labeled.

Frequent in-class testing can enhance learning in a law school classroom, but it needs structural supports to be effective. Several recent studies have established the benefits of test-enhanced learning; they include Pennebaker, J.W., Gosling, S.D., & Ferrell, J.D. (2013). Daily online testing in large classes: Boosting college performance while reducing achievement gaps. PLoS ONE, 8(11); and Roediger, H.L., Agarwal, P.K., McDaniel, M.A., & McDermott, K.B. (2011). Test-enhanced learning in the classroom: Long-term improvements from quizzing. Journal of Experimental Psychology: Applied, 17(4), 382-395.

Roediger et al. (2011) showed that middle school students in a social studies class performed better when they were given frequent low-stakes quizzes. In that study, students took multiple-choice quizzes two days after each lesson was taught. They used clickers to select an answer to a question presented on screen, and then the question stem and correct answer would appear on a large screen.

So if low-stakes exams aren’t really bad for you, how exactly are they beneficial to learning?

One possible explanation for the benefits of test-enhanced learning is extra exposure to the material: it's not the testing technique itself but the extra practice that students engage in because of an impending test. But this explanation can be eliminated, because Roediger's study showed that the frequently tested group did better than students who had been told to study the material more.

Another explanation is based on the "transfer appropriate processing" theory, which posits that benefits occur when the format of the practice matches the test on which the students are ultimately assessed. Mirroring the type of questions on the final exam facilitates "fluent reprocessing" of the information needed to answer those questions. For example, if the final test consists of multiple-choice questions, the practice tests should also be multiple-choice; likewise, if the final test calls for essay answers, the practice tests should also pose essay-type questions. Practicing with matching tests builds stronger neural connections that later help on the exam.

Techniques based on transfer appropriate processing are in wide use. Much of bar exam preparation, for example, is premised on it: bar prep consists of constant practice with multiple-choice questions in preparation for the Multistate Bar Exam, along with essay-writing practice. Likewise, law students typically prepare for final exams by practicing the type of questions expected on the final or midterm. In short, practicing is better than not practicing, but practicing with questions that match the format of the final exam questions is even better.

Practicing with matching questions is good, but practicing with feedback is superior, because the feedback tells learners what they need to fix. Practice does not make perfect if it doesn't lead to corrections in performance; without feedback, practice may simply reinforce imperfections. A first-year law student can spend hours practicing case briefing, but without feedback, the student will continue making the same mistakes.

As good as practice with feedback is, feedback is effective only if the student knows how to employ it and is motivated to use it. Motivation is what makes us humans and not robots. A self-correcting machine is programmed to take in feedback and make appropriate changes; a human has to feel motivated to use the feedback and take corrective action. Corrective feedback can be perceived as negative: if a student has a pre-existing low opinion of his or her ability and believes there is nothing he or she can do to improve, feedback may simply be taken as criticism, leading to a downward spiral of lower motivation, effort, and persistence, and ultimately to poor performance, which in turn inspires even lower confidence, motivation, and effort.

This means that educators have to make sure the right conditions exist for students to learn from their mistakes. In short, legal educators need to help students maintain relatively high self-efficacy. Psychologist Albert Bandura, who developed the concept, defined perceived self-efficacy as "beliefs in one's capabilities to organize and execute the courses of action required to produce given attainments." Self-efficacy shouldn't be confused with self-esteem: self-esteem is a wide-ranging perception of one's self-worth, whereas perceptions of self-efficacy are contextual and may differ from one task to another. I may have high self-efficacy when it comes to playing tennis, but low self-efficacy for playing chess.

There is abundant research showing that moderately high levels of self-efficacy are associated with the academic risk-taking, persistence, and effort needed to become a good learner. Teachers can promote self-efficacy through low-stakes practice tests (tests that have a small impact on a student's grade) and by emphasizing the mastery of skills and understandings rather than doing better than others. Psychologist Carol Dweck and others have long established that a "learning goal orientation," which focuses on increasing competence, promotes high persistence and effort, while a "performance goal orientation," which focuses on performance relative to others, fosters lower effort and persistence once students start to make mistakes.

Well-established research also tells us that having a learning goal orientation is especially important because of the attributions we make when we fail. With a learning goal orientation, we attribute failure to not yet being able to use the skills that are connected to the performance outcome; for example, a student with a learning goal orientation will attribute a poor outcome on a law school final to, say, not being able to fully apply the legal rule to a case. By contrast, a law student with a performance goal orientation will focus on the low grade on the final exam and on how much lower the grade is relative to everyone else's. That student is more likely to believe that her abilities are innate and fixed: "I'm just not any good at doing this lawyer stuff."

Law school instructors can build self-efficacy by establishing learning environments that focus on building specific skills and by emphasizing that there are different strategies one can use to improve performance.

In summary, frequent practice tests can enhance learning in the law classroom, and law school instructors can optimize the benefits of frequent testing by doing the following:

  • First, the practice tests need to be low stakes: there must be some stakes, because students need them to exert effort, but the stakes can't be too high, since that would undermine student self-efficacy.
  • Second, instructors need to give practice tests frequently, and the practice tests need to match the type of questions that will be asked on the final. Of course, a readily available bank of such tests is needed for this to be feasible.
  • Third, feedback needs to be immediate. Instructors can provide it by presenting the right answer immediately after students answer each question. Immediate feedback on multiple-choice questions is easy to administer, but feedback on essay-type answers is hard to generate; instructors may have to rely on collaborative learning techniques to administer feedback on answers to practice essay questions.
  • Fourth, students need to be trained to use the feedback to identify deficits in their performance, and they need to be trained in the strategies they can employ to improve. For instance, if a pattern of errors shows that a student is overgeneralizing the rules from a case decision, then the student needs to recalibrate her interpretation.
  • Fifth, students have to be motivated to fix the things they may be doing wrong. Learning environments must build self-efficacy so that students put in the effort and persist in the face of difficulties; to do this, teachers have to promote a learning goal orientation instead of a performance goal orientation.
  • Sixth, teachers need to be aware of whether other courses entail frequent testing. The Roediger study showed benefits when a single course used frequent testing, but it is doubtful that frequent testing in more than one course would be helpful; the stress of having to prepare that much more for a second, third, or fourth course with frequent tests seems counterproductive.

Law students, for their part, can do the following:

  • Students should seek out as many opportunities as possible to test themselves using practice exams that match the type of questions on a final or midterm exam. They should put in the effort to actually answer the questions, so that they truly test themselves and get an honest assessment of what they know and don't know. Multistate Bar Exam questions are a good source of multiple-choice questions, and many law faculty make available past exams containing essay questions.
  • Students should compare their answers to the answer keys available for Multistate Bar Exam questions.
  • Students should assess where their errors are occurring in order to make adjustments, and they should avail themselves of academic support staff in making these assessments.
  • Students should keep in mind that there is always room for improvement, and they should be realistic: small, incremental steps are what is most likely to improve achievement.

All in all, frequent testing can work in a law school classroom, but instructors and students need to create the right conditions: conditions that allow for feedback and for the motivation to use that feedback to improve performance.

Suggested books and articles for further reading

Bandura, A. (1997). Self-efficacy: The exercise of control. New York: W.H. Freeman.

Dweck, C.S. (1986). Motivational processes affecting learning. American Psychologist, 41, 1040-1048.

Zimmerman, B.J. (2000). Self-efficacy: An essential motive to learn. Contemporary Educational Psychology, 25, 82-91.

Mind Over Matter: Paper Books Are Better for Reading, Because We Believe It to Be So

The e-books are coming. The e-books are coming to the legal academy.

Yes, the digitalization of legal casebooks is gaining momentum; in fact, some predict that within a decade, on-screen legal texts will replace paper ones. There are signs of this major change throughout the legal publishing industry: last year, Apple entered the legal market while Thomson Reuters let go of West Academic Publishing.


So what's the big deal? Once law students have mastered the technology of ebooks, shouldn't there be a smooth transition from on-paper legal text to on-screen legal text? For the past 20 years or more, scientists have looked at whether there are differences in how readers process text in the two media, and research has indicated that digital texts are inferior to paper texts. Compared to readers of paper texts, readers of on-screen text read more slowly, less accurately, and with greater strain on working memory. But as Noyes and Garland (2008) note, these inferiorities seem more related to the backwardness of the digital technology available at the time the studies were undertaken. As digital books have become more "book-like," deficiencies in reading on-screen text have diminished or disappeared altogether. With the latest technology, readers now "turn" a page rather than scroll, and they can choose to read pages that are the equivalent length of the printed pages of the same book. Digital texts now mimic printed books in other ways as well: technologies like electrophoretic ink (e-ink) use ambient light just as a printed page does. Since studies have yet to compare these new ebooks with print (Noyes and Garland, 2008), we cannot conclude that comprehension or reading speed is much poorer with the newest digital ebooks.


While we await further research making head-to-head comparisons between the newest generation of ebooks and paper texts, two researchers, Ackerman and Goldsmith (2011), believe that the biggest obstacle to reading digital texts is ourselves. The reluctance to rely on digital texts for any serious reading is persistent and widespread. Emblematic of that reluctance are the remarks of Bill Buxton, a principal researcher at Microsoft, in Nature: "Our attachment to paper and books is wonderful, charming and quite understandable. I can't stand reading stuff on my computer."

The Ackerman and Goldsmith (2011) study sought to determine whether this widespread belief undermines the reading of on-screen texts. The researchers found that readers of digital text did worse on comprehension tests than readers of paper texts, but the paper advantage vanished once the researchers instituted a condition that controlled for self-regulation (they did so by specifying how much time the study participants could spend studying the texts).

The researchers concluded that metacognitive regulation, the monitoring of reading and the control of cognitive resources, was itself affected by the very belief that digital text is ill suited for serious reading. This distrust of digital texts takes on a life of its own and undermines the reader's allocation of cognitive resources to the task of reading digital text. The effect of the belief goes like this: "Because I don't trust reading from digital texts, I have no confidence in my monitoring and control of cognitive resources, and because of these self-referential beliefs, I will not exert much effort and control over my cognition, as there is very little I can do to improve my comprehension."

More research is needed to test the effect of the belief in the inferiority of ebooks, but there is a very large body of empirical research supporting the effect of self-referential beliefs on the self-regulation of academic performance. Self-efficacy, defined as "beliefs in one's capabilities to organize and execute the courses of action required to produce given attainments" (Bandura, 1997; Zimmerman and Schunk, 2001), has been shown to strongly affect motivation, cognitive control, exertion of effort, and persistence. On the strength of these studies, what should legal educators be concerned about regarding the use of digital texts in the law school classroom?

One, the study gives pause to replacing conventional paper texts with digital texts in law school. There is nothing inherently inferior about reading text on a computer screen, but differences in the metacognitive regulation of reading on screen versus reading on paper do undermine reading comprehension. Given the reluctance to rely on digital texts for anything beyond blogs, email messages, and news articles, we can expect students to resort to the simplest solution to the problem: printing the digital texts on paper.

Two, deficiencies in reading on-screen text are not inherent and insurmountable. Ackerman and Goldsmith (2011) suggest that cognitive strategies that force readers to engage in more effortful processing would overcome any disadvantages that digital texts have as reading material. (In fact, these very strategies would also improve reading of on-paper text.) More training would help change beliefs about the unreliability of digital texts, and that, in turn, would result in better self-regulation when reading digital texts.


Third, legal educators should be very careful about administering examination questions on computer screens, an innovation that is already on the horizon. The Law School Admission Council has already looked into a computer-based version of the LSAT. And it is not too hard to imagine a future in which bar examination questions are administered on screen, since many jurisdictions, such as New York, already allow examinees to use computer-based software to encode their responses to exams that are on paper. In fact, Pennebaker, Gosling, and Ferrell (2013) advocate daily online testing in large classes to boost academic performance. Much more research is needed before we move to fully computer-based testing systems.

Although the digitalization of reading material seems inevitable in the legal academy and beyond, Ackerman and Goldsmith’s (2011) study shows that we cannot simply assume that our reading skills, long accustomed to paper text, will be seamlessly applied to digital media.

The takeaway for legal educators? Test and read text on screen, but proceed with extreme caution until we've given students the tools to effectively comprehend digital materials.

Author's note: Many thanks to CUNY Law Professors Julie Lim and Jonathan Saxon for providing me with reports on the state of digital texts in law schools.

Data Offered to Support ABA Proposal to Change Bar Pass Requirement Raises More Questions than It Answers

About two weeks ago, a lawyer friend of mine asked me to look at the statistics submitted in the April 26-27, 2013 minutes of the ABA's Section of Legal Education and Admissions to the Bar Standards Review Committee ("Standards Committee"). The worn-out joke about lawyers is that they went to law school because they hate statistics or couldn't do them. But I am a lawyer, I like statistics, and a significant part of my doctoral training was in statistical models. When I reviewed the Standards Committee's presentation of the data, I became dazed and confused.

The statistics were offered in support of the Standards Committee's proposal to change the bar passage requirement for the accreditation of law schools. Under Proposed Standard 316, 80% of a law school's graduates must pass the bar within 2 years (5 tries) of graduation (the "look-back period"). Under the current rule, a 75% pass rate must be achieved within 5 years (10 attempts) of graduation. The change has stirred controversy because of its potential impact on non-traditional students, particularly students of color.

The proposed changes are based on a study of past examinees, and it's important to review that data before you make up your mind.

In support of the proposed changes, the Standards Committee musters a slender thread of evidence: the overall bar pass rate for the past five bar exams has ranged between 79% and 85%. Opponents have jumped all over this morsel of evidence; readers can see their objections expressed in a letter to the Standards Committee from the Chair of the ABA Council for Racial and Ethnic Diversity in the Educational Pipeline.

This posting takes a closer look at the data offered to support the ABA proposal to shorten the look-back period from the existing five years (10 tries) to two years (5 tries).

Figure 1 (shown below) was used to support the Standards Committee's proposal, and it warrants a careful look because it is very similar to the other graphs the Committee used to support its proposal. The figure is a line graph in which the x-axis represents the number of attempts at taking the Multistate Bar Exam (MBE) and the y-axis represents the percentage of examinees. The reader gets the general impression that there is a big drop in the number of repeaters after the first attempt; the descent then flattens out and approaches zero at the fifth and later attempts. This general impression seems to support Proposed Standard 316's shortening of the look-back period to two years (5 attempts), since the number of examinees after five attempts seems infinitesimal.

Figure 1. Percentage of Examinees Taking the MBE One or More Times (N=30,878; 1st attempt in July 2006). [Line graph; y-axis: percent taking; x-axis: number of times taking the MBE.]

But upon closer examination, it’s hard to make sense of what this figure is depicting.

It is said that a picture paints a thousand words, but this graph, like the other graphs the Standards Committee used in its report, raises a thousand questions.

  • Figure 1 refers to an N of 30,878. It is difficult, if not impossible, for the reader to tell what the 30,878 stands for.
    • Does 30,878 represent the number of people who took the exam for the first time in July 2006 plus those who re-took it during the July 2006-July 2011 period?
    • Where does the number 30,878 come from? Is it the total number of examinees from ABA law schools, or does it include examinees from non-ABA law schools?
    • Are the 30,878 a sample of a larger population of examinees? If so, what is the size of that population?
    • How was the sample chosen? What was the procedure for choosing it?
    • Is the sample an accurate depiction of the population of examinees? What confidence can we have that conclusions about this sample apply to the larger group?
  • Figure 1 refers to percentages: for example, 84% on the first try, 10% on the second, and so on.
    • What numbers do these percentages refer to? The 30,878? Different numbers of examinees who took the bar from July 2006 to July 2011? If so, what are those numbers?
  • There are also obvious questions that the Standards Committee did not ask:
    • Are the repeaters from a wide range of schools? Do the repeaters disproportionately come from schools with large populations of graduates of color?
    • What explains the low percentage of repeaters? Do the need to earn an income and the opportunity cost of preparing for the exam suppress re-taking the bar?
    • What is the behavior of the re-takers? Do they skip a year, or do they sit for the next administration of the bar exam?

 

If the Standards Committee answered these questions, we would at least get a basic understanding of the repeaters. It certainly isn't too much to ask.

Are there plausible explanations for the Standards Committee's numbers?

Yes, but they aren't in the Standards Committee's April 26-27, 2013 minutes.

 

Could the 30,878 refer to the actual number of examinees who took the bar exam in July 2006?

No. An analysis of the National Conference of Bar Examiners’ own data shows that the 30,878 has to be a sample, not the actual population of examinees who took the July 2006 exam.

According to the NCBE statistics, there were 47,793 first-time takers of the July 2006 bar exam, compared to the 30,878 examinees in Figure 1. Could the 30,878 be those from ABA law schools only? The NCBE did not publish a breakdown of examinees' law schools, so it's hard to compute an exact number of examinees from ABA schools. However, a pretty good estimate can be derived. Across the February and July 2006 bar exams combined, there were 9,056 examinees from non-ABA law schools, law schools outside the U.S., and law office study. If you assume that all 9,056 of these non-ABA examinees took the July 2006 bar exam and subtract that number from 47,793, you get a low-end estimate of how many July 2006 bar examinees were from ABA law schools: 38,737. Even with that generous estimate, there is a discrepancy of 7,859 (38,737 - 30,878), a difference of more than 25% relative to the Figure 1 number. And if the 30,878 includes students from both non-ABA and ABA law schools, the discrepancy is even bigger.
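For readers who want to check the arithmetic, here is a minimal sketch in Python using only the figures quoted above (the variable names are mine, not the NCBE's):

```python
# Low-end estimate of July 2006 ABA-school examinees, from the NCBE figures cited above.
first_time_july_2006 = 47_793  # NCBE: first-time takers, July 2006 bar exam
non_aba_2006 = 9_056           # NCBE: non-ABA/foreign/law-office examinees, Feb + July 2006 combined
figure1_n = 30_878             # N reported in the Standards Committee's Figure 1

# Generous assumption: every one of the 9,056 non-ABA examinees sat in July,
# which makes the ABA estimate below a floor, not an exact count.
aba_low_end = first_time_july_2006 - non_aba_2006  # 38,737

discrepancy = aba_low_end - figure1_n  # 7,859
print(f"Low-end ABA estimate: {aba_low_end:,}")
print(f"Discrepancy vs. Figure 1 N: {discrepancy:,} ({discrepancy / figure1_n:.1%})")
```

Running the sketch prints a discrepancy of 7,859, about 25.5% of the Figure 1 N, which is exactly the gap the Committee's minutes leave unexplained.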

 

The Takeaway

For a proposal as important as this, the Committee should have done a better job of providing detail on the data from its study and the methodology behind the study. That detail is needed for anyone to form an informed opinion about whether shortening the look-back period is good or bad policy. The Committee should also have shown the analysis that would give confidence that the sample is not systematically biased and is reflective of the actual population of examinees.

Moreover, the study should have gone further and provided detail on the sample of examinees it depicts. Greater detail is needed on the makeup of the repeaters. If the repeaters came from a wide variety of law schools, that would bolster the Committee's no-harm argument; if, however, the repeaters come from law schools with above-average percentages of students of color, then shortening the look-back period would seem to undermine those schools' ability to meet the proposed higher ultimate bar pass rate.

No one is saying that the numbers are wrong or cooked, but there simply isn't enough information to give us confidence in the conclusions. There are certainly plausible explanations that could make sense of the numbers, and it's very possible that further detail would allay the concerns laid out in this posting. With a proposed change as big as Standard 316, it's important to see what those explanations and details are.

 

Testing the Test of Legal Problem Solving

A student preparing for the Multistate Bar Exam could liken the exam to 200 Rubik's-cube-type questions. Okay, they're not as hard as solving really difficult puzzles, but examinees may sometimes feel that the MBE is that hard.

Bonner and D'Agostino (2012), in their research study on the Multistate Bar Exam (MBE), test the test by asking:
• How important is a test-taker’s knowledge of solving legal problems to performance on the MBE?
• To what degree is performance on the MBE dependent on general knowledge, which doesn’t have anything to do with the law or legal reasoning? For instance, are test-wise strategies and common-sense reasoning important to doing well on the MBE?

Background

With thousands of examinees taking this exam annually since 1972, you would think that the answer is a resounding yes to the first question and a tepid yes to the second. Why else would most of the states rely on a test if it were not proved valid? But surprisingly, there hasn't been any published evidence that this high-stakes exam does in fact "assess the extent to which an examinee can apply fundamental legal principles and legal reasoning to analyze a given pattern of facts," as one article characterized the MBE; in other words, evidence that the MBE tests skills in legal problem solving.

Validity is the measure of a test's worth, and it is not an all-purpose quality: a test is valid only with reference to its purpose. Establishing test validity is not uncommon in other fields such as medical licensure, where there are studies establishing the validity of tests of medical clinical competence, the internal medicine in-training exam, and the family physician's examination.

A finding that common-sense reasoning and general test-wise strategies are important to MBE performance would in fact indicate that the test lacks validity as a measure of legal problem solving. There is indirect support for the view that general common sense and test-wise strategies aren't what matter. In his review of the research, Klein (1993) refers to a study in which law students outperformed college students untrained in the law on the MBE. But we can't conclude from that study that legal training is what drove the difference, since maturation effects (just being older and smarter as a result) cannot be ruled out as a reason for the law students' superior performance.

The Study

In devising a study to measure the validity (construct and criterion) of the MBE, Bonner and D'Agostino (2012) drew on the very large body of research about novice-expert differences in problem solving. Expert-novice studies compare how people with different levels of experience in a particular field go about solving problems in that field; the research looks at the continuum of expertise suggested by these extremes and the space in between. By making such comparisons, scientists hope to see what beginners and intermediates need to do to reach the next level. Cognitive scientists have looked at these differences in many areas, including mathematics, physics, chess playing, and medical diagnosis, but legal problem solving hasn't gotten much attention.

So what does this research tell us? Experts draw on a wealth of substantive knowledge to solve problems in their area of expertise. They know more about the field and have a deeper understanding of its subtleties. An expert's knowledge is organized into complex schemas (abstract mental structures) that allow the expert to quickly home in on the relevant information. And an expert's advantage isn't limited to knowledge of the substance of the area (called declarative knowledge); experts have better executive control over the processes by which they go about solving a problem. They have better "software," with subroutines that weed out good from bad paths to a solution. In his article on legal reasoning skills, Krieger (2006) found that legal experts engage in "forward-looking processing."

By contrast, novices and intermediates are rookies with varying degrees of experience in the domain in question. Intermediates are a little better off than novices because they have more knowledge of the substance of a field, but their knowledge structures (schemas) aren't as complex or accurate as experts'. Intermediates are also slower in weeding out wrong solutions and less efficient in homing in on the right set of possible solutions.

Bonner and D'Agostino classified law graduates as intermediates and devised a study to peer into the processes involved in answering MBE questions. They anticipated that law school graduates would need an amalgam of general and domain-specific reasoning skills, plus thinking-about-thinking (metacognitive) skills, to do well on the MBE.

Bonner used a "think-aloud" procedure, verbalizations made during the very act of answering an MBE question, to peer into the students' mental processes. A transcript of the verbalizations was then laboriously coded into types which, in turn, fell into broader categories: legal reasoning (classifying issues, using legal principles, and drawing early conclusions), general problem solving (referencing facts and using common sense), and metacognition (statements noting self-awareness of learning acts). The data were then evaluated with statistical procedures to see which of the behaviors were associated with choosing the correct answer on MBE questions.

Findings

Here are the results of Bonner and D'Agostino's study:

• Using legal principles had a strong positive correlation (r = .66) with performance. When students organized their decisions by checking all the options and marking the ones that were most relevant and irrelevant, they were more likely to use the correct legal principles.
• Using common sense and test-wise strategies had a negative, though not statistically significant, correlation with performance. Test strategizing, too, had a negative relationship with performance: using "deductive elimination, guessing, and seeking test clues was associated with low performance on the selected items."
• Among the metacognitive skills, organizing (reading through and methodically going through all of the options) had a strong positive correlation (r = .74) with performance. (Note: to understand the meaning of a correlation, square the r; the result is the proportion of the variation in one variable explained by the other. In this example, about 55% of the variation in MBE performance is accounted for by organizing; a quick sketch of this arithmetic follows the list.) When students are self-aware and monitor what they are doing, they increase their chances of picking the right answers.
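To make the note above concrete, here is a minimal sketch in Python of the r-squared arithmetic, using the two correlations reported in the study (the labels are mine):

```python
# Variance explained (r squared) for the two correlations reported by Bonner and D'Agostino.
findings = {
    "using legal principles": 0.66,
    "organizing (methodically checking all options)": 0.74,
}

for behavior, r in findings.items():
    r_squared = r ** 2  # proportion of variation in MBE performance explained
    print(f"{behavior}: r = {r:.2f}, r^2 = {r_squared:.2f} ({r_squared:.0%})")

# organizing: 0.74 squared is 0.5476, i.e., roughly 55% of the variation
```

The same squaring applies to any reported r: the r of .66 for using legal principles corresponds to about 44% of the variation.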

The Takeaway

The bottom line is that the MBE does possess relevance to the construct of legal problem solving, at least with respect to the questions on which the think-aloud was performed. In short, Bonner and D'Agostino demonstrate that the MBE has validity for the purpose of testing legal problem solving.

Unsurprisingly, using the correct legal principle was strongly correlated with picking the right answer. This means that a thorough understanding and recall of the legal principles is critical to performance. That puts a premium not only on knowing the legal doctrine well but also on exposure to as many different fact patterns as possible, which helps students spot the legal rules and instantiate them in new facts. Research on analogical reasoning suggests that the more diverse the exposure to applications of the principles, the more readily a student's memory will retrieve the appropriate analogs that match the fact pattern of the question. Students should take a credit-bearing bar-related review course, and they should take the course seriously.

If you want to do well on the MBE, don't jump to conclusions and pick the first seemingly right answer. The first seemingly right answer could be a distractor, designed to trick impulsive test-takers, and because of the time constraints, examinees are especially susceptible to picking it. But the MBE punishes impulsivity and rewards thoroughness, so examinees should go through all the possible choices and mark the good and the bad ones. They shouldn't waste their time on test-wise strategies: eliminating choices based on pure common sense or deduction that has no reference to legal knowledge, or looking for clues in the stem of the question. Remember, test-wise strategies unconnected with legal knowledge had a negative relationship to MBE performance.

Although errors about the facts resulting from poor comprehension were not prevalent among the participants in Bonner and D'Agostino's study, that doesn't mean that good reading comprehension is unimportant. A student who has trouble with reading comprehension should start practicing more active engagement with fact patterns and answer options, as Elkin suggests in his guide.

Finally, more training in metacognitive skills should improve performance. Metacognition, which is often confused with cognitive skills such as study strategies, is practiced self-awareness. As Bonner and D'Agostino put it, "practice in self-monitoring, reviewing, and organizing decisions may help test-takers allocate resources effectively and avoid drawing conclusions without verification or reference to complete information."

Driven to Distraction?

From the back of any classroom in law school, you're likely to see students surfing the web, checking email, or texting while a lecture or class discussion is going on. A recent National Law Journal article reports a study's finding that 87% of the observed law students used their computers for apparently non-class-related purposes (playing solitaire, checking Facebook, looking at sports scores, and the like) for more than 5 minutes during class. The study, by St. John's Law Professor Jeff Sovern, was published in a recent edition of the Louisville Law Review. 2Ls and 3Ls, not 1Ls, were the groups most likely to be engaged in non-class-related activities.


Conventional wisdom and common sense tell us that media multitasking is bad for learning and instruction: for stretches of time, students aren't paying attention to what's happening in class. But it's unclear whether the lack of engagement stems from any perceived good reasons. Perhaps the class discussion or portion of the lecture was perceived as boring or as not relevant to what will be on a test.

Scientists have termed this student behavior "media multitasking," and it isn't confined to the classroom. Many students probably do it while studying. And media multitasking isn't just a student problem; we all do it. Psychologist Maria Konnikova, in her New Yorker blog, writes that the internet is like a "carnival barker" summoning us to take a look.

Konnikova points to a study that suggests a possible adverse side effect of heavy media multitasking. Ophir, Nass, and Wagner (2009) conducted an experiment asking: are there any cognitive advantages or disadvantages associated with being a heavy media multitasker? It is one of the few studies to look at the impact of multitasking on academic performance.

In the experiment, participants were classified and assigned to two groups, heavy media multitaskers and light media multitaskers. Participants in both groups had to make judgments about visual data such as colored shapes, letters, or numbers. Ophir et al. found that when both groups were presented with distractors, heavy multitaskers were significantly slower than light multitaskers to make judgments about the visual data and were more likely to make mistakes. Heavy multitaskers also had a more difficult time switching tasks when the distractor condition was presented. The researchers explained the results with the theory that light multitaskers exercised more top-down executive control in filtering in relevant information and thus were less prone to being distracted by irrelevant stimuli, whereas heavy multitaskers took a more bottom-up approach of taking in all stimuli and thus were more likely to be led off track by irrelevant stimuli.

In short, the researchers concluded that being a heavy media multitasker has adverse aftereffects. If you're a heavy multitasker, you're more likely to take in irrelevant stimuli than if you're a light one; heavy media multitasking promotes the trait of being over-attentive to everything, relevant or irrelevant. That could mean that if you're a heavy multitasker, your lecture or book notes are more likely to contain irrelevant information, irrespective of whether you were actually online when you took them. It's as though heavy media use has infected your working memory with a bad trait. Because the effects are on working memory, heavy media multitaskers should be advised to go back, deliberate over their notes, and revise accordingly.

But a note of caution: the study's conclusions are a long way from being applicable to what actually happens in academic life. The study involved exposure to relevant and irrelevant stimuli built from visual shapes and letter or number recognition. So it's unclear whether the heavy multitasker is equally distracted when presented with semantic information, that is, textual or oral information that has meaning. That's an important qualification of the study's conclusions, because semantic information constitutes the bulk of what we work on as students. As far as I can tell, there is no study in which participants are asked to make judgments about semantic information.

Also, the direction of causality is unclear. Does heavy media multitasking make a person a poor self-regulator of attention, or does heavy media multitasking happen because a person is already a poor self-regulator?

Although the research is far from conclusive, we shouldn’t wait to act on what exists. Here are a few suggestions on what students and instructors can do.

• First, manage your distractions. Common sense dictates that being distracted is bad for studying, whether the source of the distraction is the internet, romantic relationships, or television. Take a behavioral management approach: refrain from using the internet until you've spent some quality, undistracted time studying or listening to what's happening in class, and then reward yourself with a limited amount of internet time after class.

• Second, assess your internet use. Monitoring your use will by itself have a reactive effect: just being aware of what you're doing will cut down on the amount of irrelevant stimuli you're exposed to.

• Third, if you identify yourself as a moderate to heavy internet consumer of non-academic content while studying or in class, be more vigilant about the quality of your lecture and study notes. These notes are more likely to contain a lot of irrelevant material, and you’ll need to excise irrelevancies.

• Fourth, instructors should take note: if they see their students tuning out in class, they should re-evaluate their lesson plans and classroom management to more fully engage the majority of their students. Banning the internet from the lecture hall won't necessarily make students more engaged. If the class doesn't engage them, students will find other ways to tune out, with or without the internet.

George Miller’s Magical Number 7, Novice Law Students and Miller’s Real Legacy

Today's NY Times Science section featured a story entitled "Seven Isn't the Magic Number for Short-Term Memory," on psychologist George Miller's classic paper on the limits of human short-term memory. His theory, established in a 1956 article, is that there is a numerical limit of 7 items that humans can retain in short-term memory. The gist of the NYT story is that the limit is quaint but outdated. What's interesting is that the same point was made almost 21 years ago by the prominent theorist Alan Baddeley (1994) in "The Magical Number Seven: Still Magic After All These Years?" His point was that there isn't strictly a limit of 7 on short-term memory, and that there are many exceptions and qualifications to it.

Short-term memory, in contrast to long-term storage, is the workbench, or working memory, that we use in the active processing of information. Short-term memory limits have a big impact on novices in a new field or endeavor, such as novice learners of the law. Novice law students have greater difficulty reading case law and solving problems with it because they lack the automaticity in cognitive functioning that those with greater legal expertise have. Novice law students experience a kind of cognitive overload, as they must exert great effort on mentally intensive processes to read and interpret legal cases and to solve problems based on the cases they have read.

The NY Times blurb misses the point about Miller's legacy. The real legacy of his theory is that it was important in moving psychology toward information-processing models of human thinking. Miller's article was published in the mid-1950s, when the prevailing learning theory was behaviorism (learning based on stimulus and response). As Baddeley points out about Miller's theory,

“Miller pointed the way ahead for the information-processing approach to cognition.”

That’s the real enduring magical legacy of Miller.

The Evidence Behind Adding Civ Pro to the Multistate Bar Exam

In February 2013, the National Conference of Bar Examiners (NCBE) announced a major change to the Multistate Bar Exam (MBE). Starting with the February 2015 administration of the MBE, civil procedure will be added to the list of subject areas being tested; currently, constitutional law, contracts, criminal law/criminal procedure, evidence, real property, and torts are tested. Since 1972, the MBE has been a high-stakes licensing exam for attorneys and a large component of passing the bar in most U.S. jurisdictions. (Today, Louisiana and Puerto Rico are the only exceptions, as Washington State adopted the exam this year.) The new MBE will still consist of 200 multiple-choice items (of which 190 are actually scored), but there will be fewer items for each of the current subjects in order to make room for civil procedure.

The addition of civil procedure is of little surprise, for experimental civil-procedure-like questions had appeared in recent administrations of the MBE. But the justification for the change was not immediately apparent. A review of the NCBE's 2012 "job study" makes it clear why civil procedure was added. The heart of the job study was a survey of recent law graduates with 1 to 3 years of practice experience. Survey participants were asked to rate the significance of specified legal tasks, knowledge areas, and skills/abilities. The laws of civil procedure had the highest rating (3.08 on a scale from 1 to 4) among all the knowledge areas, and 86% of the survey participants said that civil procedure was both significant and frequently used in their work.

The NCBE had contracted with a private consultant to do the job study "to determine what new lawyers do, and what knowledge, skills, and abilities newly licensed lawyers believe that they need to carry out their work." The job study and its survey were part of the NCBE's effort to establish "content validity" for current and future versions of the MBE; in other words, to show that the content of the MBE reflects the actual knowledge and skills required of newly admitted attorneys.

Survey participants consisted of law graduates with 1 to 3 years of practice experience; the makeup of the participants reflected the racial composition of law graduates nationally (78% were non-minority and 22% minority), and the majority (about 52%) worked in private law firms. The survey methodology for the job study is sound, and law schools should think about using the NCBE's survey results and methodology to evaluate their own curricula. Take a look at the full study and the summary of the results. However, the job study is only a first step toward establishing the MBE's test validity; it establishes only the relevance of the knowledge areas being tested, in other words, content validity.

But the bigger issue is that the evidence for the criterion and construct validity of the MBE hasn't been strong, and these types of validity are essential qualities of a good test. Validity raises the following questions: How well does the MBE predict whether an individual will engage in a specified level of legal thinking? What exactly are the constructs that the MBE seeks to measure, and how well does it measure them? Educational psychologist Sarah Bonner (2012) points out that "the existing criterion and construct-related validity evidence in support of the argument that the MBE is a measure of domain-specific legal thinking is not strong, suggesting a need for an inquiry into processes underlying performance." Bonner then proceeded to conduct a study on whether the MBE measures domain-specific and domain-general skills. But more about that study in a future blog post.