
Is There a Gender Gap in Performance on the New York Bar Exam?

As thousands of law graduates take the bar exam, women graduates may wonder whether an important section of the bar exam—the Multistate Bar Exam (MBE)—is stacked against them. Evidence bearing on this is tucked away in an obscure report of the NY Board of Bar Examiners (2006, Oct. 4), the agency responsible for administering the licensing exam. The numbers below show that men outperform women on the MBE[1] by about 27 points, while women do slightly better (by about 10 points) than men on the Essay[2] portion of the NY State Bar exam.

Score Means, Standard Deviations and Standard Errors

Domestic-Educated First Time Takers, Females and Males, July 2005**

Gender         MBE Scaled     Essay Scaled   NYMC Scaled    Total NY Bar
               Score x 5      Score          Score          Score*
Female  Mean     713.28         734.08         719.75         724.34
        (SD)     (72.53)        (69.21)        (76.85)        (63.74)
        (n = 3,264; SEM = 1.20)
Male    Mean     740.04         724.12         724.62         730.54
        (SD)     (73.96)        (70.80)        (77.84)        (64.47)
        (n = 3,299; SEM = 1.30)
Total   Mean     726.69         729.07         722.20         727.44
        (SD)     (73.96)        (70.80)        (77.84)        (64.47)

*Total score is computed by taking the weighted average of the adjusted MBE scaled score (40%), Essay score (50%), and NYMC score (10%). New York Board of Bar Examiners (2006, Oct. 4).

**New York Board of Bar Examiners (2006, Oct. 4).

On the MBE, men scored better than women by 26.76 adjusted points. This 27-point difference seems big and pervasive. The gap is consistent across ethnic/racial groups, first-time versus repeat takers, and domestically versus foreign-schooled graduates. But a closer analysis of the numbers tells us that the gap isn’t that alarming.

Presumably the 27-point gap meets the statistical test of significance, which tells us whether two averages from two different groups are statistically distinguishable. Typically a .05 probability is set, which means there is a 5% or smaller chance that a difference this large would show up by pure dumb luck if the two groups were really the same. Put differently, 5% represents the acceptable level of risk of being wrong when we conclude that two averages from two groups are different.
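To make that test concrete, here is a minimal sketch in Python (using SciPy) of a two-sample t-test run on the MBE means, standard deviations, and group sizes from the table above. This is an illustrative reconstruction, not the Board of Bar Examiners’ actual computation:

```python
from scipy import stats

# Two-sample t-test from summary statistics (MBE scaled scores,
# domestic-educated first-time takers, July 2005, per the table above)
t, p = stats.ttest_ind_from_stats(
    mean1=740.04, std1=73.96, nobs1=3299,   # men
    mean2=713.28, std2=72.53, nobs2=3264,   # women
)
print(f"t = {t:.1f}, p = {p:.2g}")  # p lands far below .05
```

With more than 6,000 examinees, even a modest gap sails past the .05 threshold, which is exactly the point made next.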

But the fact that two means are statistically different doesn’t tell us whether that difference is big enough to have practical significance. To paraphrase Cummins (2014), it’s wrong-headed to make too much of statistical significance because it is largely a product of the enhanced sensitivity that comes with a sample size as large as the one here (over 6,000 people). Instead of relying on a test of statistical significance, we need an analysis of “effect size.”

You can use a formula to determine effect size (Cohen’s d, the difference between the two means divided by the pooled standard deviation), and there’s a handy website calculator for that. Rounded off, the effect size of sex is .37, which means sex has a moderate effect on MBE performance. The other statistic, effect size r squared = .03, tells us that 3% of the variation in MBE scores is accounted for by sex.

In other words, sex has a small relationship to performance on the MBE; the contribution of sex to performance is only 3%, which means the remaining 97% of the variability in performance is due to factors other than sex. Thus, it would be hard to claim that the MBE is biased against women examinees.
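For readers who would rather check the arithmetic than trust the website calculator, here is a minimal sketch of the effect-size computation, assuming the standard Cohen’s d formula (pooled standard deviation) and the usual d-to-r conversion:

```python
import math

# Summary statistics from the table above (MBE scaled scores)
mean_f, sd_f, n_f = 713.28, 72.53, 3264   # women
mean_m, sd_m, n_m = 740.04, 73.96, 3299   # men

# Pooled standard deviation across the two groups
sd_pooled = math.sqrt(((n_f - 1) * sd_f**2 + (n_m - 1) * sd_m**2)
                      / (n_f + n_m - 2))

# Cohen's d: the 26.76-point gap expressed in pooled-SD units
d = (mean_m - mean_f) / sd_pooled
print(f"d = {d:.2f}")       # ~0.37

# Convert d to r, then square it: share of score variance tied to sex
r = d / math.sqrt(d**2 + 4)
print(f"r^2 = {r**2:.2f}")  # ~0.03, i.e., about 3%
```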

[1] The Multistate Bar Exam is a six-hour test that requires examinees to answer 200 multiple-choice questions on the application of the law in several doctrinal areas.

[2] The essay portion tests the ability of examinees to identify the applicable New York state law and write essays on the application of that law to fact patterns.

Sam Sue, Copyright 2014.



Data Offered to Support ABA Proposal to Change Bar Pass Requirement Raises More Questions than It Answers

About two weeks ago, a lawyer friend of mine asked me to look at the statistics submitted in the April 26-27, 2013 minutes of the ABA’s Section of Legal Education and Admissions to the Bar Standards Review Committee (“Standards Committee”). The worn-out joke about lawyers is that they went to law school because they hate or couldn’t do statistics. But I am a lawyer, I like statistics, and a significant part of my doctoral training was in statistical models. When I reviewed the Standards Committee’s presentation of the data, I became dazed and confused.

The statistics were offered in support of the Standards Committee’s proposal to change the bar passage requirement for the accreditation of law schools. Under Proposed Standard 316, 80% of a law school’s graduates must pass the bar within 2 years (5 tries) (this is called the “look-back period”). Under the current rule, a 75% pass rate must be achieved within 5 years (10 attempts) of graduation. This change has stirred controversy because of its potential impact on non-traditional students, particularly students of color.

The proposed changes are based on a study of past examinees, and it’s important to review that data before you make up your mind.

In support of the proposed changes, the Standards Committee musters up a slender thread of evidence—that the overall bar pass rate for the past five bar exams has ranged between 79% and 85%. Opponents have jumped all over this morsel of evidence. Readers can see these objections expressed in a letter to the Standards Committee from the Chair of the ABA Council for Racial and Ethnic Diversity in the Educational Pipeline.

This posting takes a closer look at the data offered to support the ABA proposal to shorten the look-back period from the existing five years (10 tries) to two years (5 tries).

Figure 1 (shown below) was used to support the Standards Committee’s proposal, and it warrants a careful look because it is very similar to the other graphs the Committee used to support its proposal. The figure shows a line graph in which the x-axis stands for the number of attempts at taking the Multistate Bar Exam (MBE) and the y-axis represents the percentage of examinees. The reader gets the general impression that there is a big drop in the number of repeaters after the first attempt. The descent flattens out and starts to approach zero at the fifth and later attempts. This general impression seems to support Proposed Standard 316’s shortening of the look-back period to 2 years (5 attempts), since the number of examinees seems infinitesimal after five attempts.

Figure 1. Percentage of Examinees Taking the MBE One or More Times (N = 30,878; 1st attempt in July 2006). [Line graph; y-axis: percent taking; x-axis: number of times taking the MBE.]

But upon closer examination, it’s hard to make sense of what this figure is depicting.

It is said that a picture paints a thousand words, but this graph –like other graphs the Standards Committee used in its report–raises a thousand questions.

  • Figure 1 refers to 30,878. It is difficult, if not impossible, for the reader to tell what the 30,878 stands for.
    • Does 30,878 represent the number of people who took the exam for the first time in July 2006 plus those who re-took it during the July 2006-July 2011 period?
    • Where does the number 30,878 come from? Is it the total number of examinees from ABA law schools, or does it include examinees from non-ABA law schools?
    • Are the 30,878 a sample of a larger population of examinees? If so, what is the size of this population?
    • How was the sample chosen? What was the procedure for choosing the sample?
    • Is this sample an accurate depiction of the population of examinees? What confidence can we have that the conclusions about this sample apply to a larger group of people?
  • Figure 1 refers to percentages. For example, for the first try, 84%; for the second try, 10%; and so on.
    • What numbers do these percentages refer to? 30,878? Different numbers of examinees who took the bar from July 2006 to July 2011? If so, what are those numbers?
  • There are also obvious questions that the Standards Committee did not ask:
    • Are the repeaters from a wide range of schools? Do the repeaters disproportionately come from schools with large populations of graduates of color?
    • What explains the low percentage of repeaters? Do the need to earn an income and the opportunity cost of preparing for the exam suppress re-taking the bar?
    • What is the behavior of the re-takers? Do they skip a year or sit for the next bar exam administered?

 

If the Standards Committee answered these questions, we would at least get a basic understanding of the repeaters. It certainly isn’t too much to ask for this.

Are there plausible explanations for the Standards Committee’s numbers?

Yes, but they aren’t in the Standards Committee’s April 26-27, 2013 minutes.

 

Could the 30,878 refer to the actual number of examinees who took the bar exam in July 2006?

No. An analysis of the National Conference of Bar Examiners’ own data shows that the 30,878 has to be a sample, not the actual population of examinees who took the July 2006 exam.

According to the NCBE statistics, there were 47,793 first-time takers of the July 2006 bar exam, compared to 30,878 examinees in Figure 1. Could the 30,878 be only those from ABA law schools? The NCBE did not publish a breakdown of the examinees’ law schools, so it’s hard to compute an exact number of examinees from ABA schools. However, a pretty good estimate can be derived. For the February and July 2006 bar exams combined, there were 9,056 examinees from non-ABA law schools, law schools outside the U.S., and law office study. If you assumed that all of these 9,056 non-ABA examinees took the July 2006 bar exam and subtracted this number from 47,793, you would get a low-end estimate of how many July 2006 bar examinees were from ABA law schools—and that estimate is 38,737. Even with that generous estimate, there is a discrepancy of 7,859 examinees (38,737 - 30,878), more than a 25% difference between the two numbers. If the 30,878 includes students from both non-ABA and ABA law schools, the discrepancy is even bigger.
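For the skeptical reader, the arithmetic behind that estimate can be laid out step by step (all inputs are the NCBE figures cited above):

```python
# Low-end estimate of July 2006 examinees from ABA law schools
first_timers_july_2006 = 47_793   # all first-time takers, July 2006 (NCBE)
non_aba_2006 = 9_056              # non-ABA examinees, Feb. + July 2006 combined
figure1_n = 30_878                # N reported in the Committee's Figure 1

# Generous assumption: every non-ABA examinee sat for the July exam
aba_estimate = first_timers_july_2006 - non_aba_2006   # 38,737

discrepancy = aba_estimate - figure1_n                 # 7,859
print(discrepancy, f"({discrepancy / figure1_n:.1%} of Figure 1's N)")
```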

 

The Takeaway

For a proposal as important as this, the Committee should have done a better job of providing detail on the data from its study and the methodology behind it. That detail is needed for anyone to form an informed opinion about whether shortening the look-back period is good or bad policy. The Committee should have shown the analysis that would give confidence that the sample is not systematically biased and is reflective of the actual population of examinees.

Moreover, the study should have gone further and provided detail on the sample of examinees depicted in the study. Greater detail is needed on the make-up of the repeaters. If the repeaters came from a wide variety of law schools, that would bolster the Committee’s no-harm argument; however, if the repeaters come from law schools with above-average percentages of students of color, then shortening the look-back period would seem to undermine these schools’ ability to meet the proposed higher ultimate bar pass rate.

No one is saying that the numbers are wrong or cooked—but there simply isn’t enough information to give us confidence about the conclusions. There are certainly plausible explanations that could make sense of the numbers, and it’s very possible that further detail and explanation would allay the concerns laid out in this posting. With a proposed change as big as Standard 316, it’s important to see what those explanations and details are.

 



Testing the Test of Legal Problem Solving

A student preparing for the Multistate Bar Exam could liken the exam to 200 Rubik’s cube-type questions. Okay, they’re not as hard as really difficult puzzles, but examinees may sometimes feel that the MBE can be that difficult.

In their research study on the Multistate Bar Exam (MBE), Bonner and D’Agostino (2012) test the test by asking:
• How important is a test-taker’s knowledge of solving legal problems to performance on the MBE?
• To what degree is performance on the MBE dependent on general knowledge, which doesn’t have anything to do with the law or legal reasoning? For instance, are test-wise strategies and common-sense reasoning important to doing well on the MBE?

Background

With thousands of examinees taking this exam annually since 1972, you would think that the answer is a resounding yes to the first and a tepid yes to the second. Why else would most states rely on the test if it were not proved valid? But surprisingly, there hasn’t been any published evidence that this high-stakes exam does in fact “assess the extent to which an examinee can apply fundamental legal principles and legal reasoning to analyze a given pattern of facts,” as one article characterized the MBE—in other words, that the MBE tests skills in legal problem solving.

Validity is the measure of any test’s worth; it is not an all-purpose quality, for it has reference only to the test’s purpose. Establishing test validity is not uncommon in other fields such as medical licensure, where there are studies establishing the validity of measures of clinical competence, the internal medicine in-training exam, and the family physician’s examination.

A finding that common-sense reasoning and general test-wise strategies are important to MBE performance would in fact indicate that the test lacks validity as a measure of legal problem solving. There is indirect support that general common sense and test-wise strategies aren’t important. In his review of the research, Klein (1993) refers to a study in which law students outperformed college students untrained in the law on the MBE. But we can’t conclude from this alone that legal training drives MBE performance, since maturation effects (just being older and smarter as a result), not legal training, cannot be ruled out as the reason for the law students’ superior performance.

The Study

In devising a study to measure the validity (construct and criterion) of the MBE, Bonner and D’Agostino (2012) drew on the very large body of research about novice-expert differences in problem solving. This research looks at the continuum of expertise, from novice to expert and the spaces in between. Expert-novice studies compare how people with different levels of experience in a particular field go about solving problems in that field. By doing this, scientists hope to see what beginners and intermediates need to do to get to the next level. Cognitive scientists have looked at these differences in many different areas—mathematics, physics, chess playing and medical diagnosis—but legal problem solving hasn’t gotten much attention.

So what does this research tell us? Experts draw on a wealth of substantive knowledge to solve a problem in their area of expertise. They know more about the field and have a deeper understanding of its subtleties. An expert’s knowledge is organized into complex schemas (abstract mental structures) that allow the expert to quickly home in on the relevant information. Their knowledge isn’t limited to the substance of the area (called declarative knowledge); experts also have better executive control of the processes by which they go about solving a problem. They have better “software” that lets them run subroutines to sort good paths to a solution from bad ones. In his article on legal reasoning skills, Krieger (2006) found that legal experts engage in “forward-looking processing.”

By contrast, novices and intermediates are rookies of varying degrees of experience in the domain in question. Intermediates are a little better than novices because they have more knowledge of the substance of a field, but their knowledge structures—schemas—aren’t as complex or accurate. Compared with experts, intermediates are slower to weed out wrong solutions and less efficient at homing in on the right set of possible solutions.

Bonner and D’Agostino classified law graduates as intermediates and devised a study to peer into the processes involved in answering MBE questions. They anticipated that law school graduates needed an amalgam of general and domain-specific reasoning skills plus thinking-about-thinking (metacognitive) skills to do well on the MBE.

Bonner used a “think-aloud” procedure—verbalizations made online, during the very act of answering an MBE question—to peer into the students’ mental processes. A transcript of the verbalizations was then laboriously coded into types, which in turn fell into broader categories of legal reasoning (classifying issues, using legal principles and drawing early conclusions), general problem solving (referencing facts and using common sense), and metacognition (statements noting self-awareness of learning acts). The data was then evaluated with statistical procedures to see which of the behaviors were associated with choosing the correct answer on MBE questions.
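To illustrate that last analytic step, here is a minimal sketch with made-up numbers. It is not Bonner and D’Agostino’s data, and the point-biserial correlation used here is an assumption, one plausible way to relate a coded behavior count to a right/wrong outcome:

```python
import numpy as np
from scipy import stats

# Hypothetical coded data: per transcript, the count of statements coded
# "using legal principles," and whether the MBE item was answered
# correctly (1) or incorrectly (0)
principle_counts = np.array([0, 1, 3, 2, 4, 0, 2, 3, 1, 4])
correct          = np.array([0, 0, 1, 1, 1, 0, 1, 1, 0, 1])

# Point-biserial correlation: a continuous code count vs. a 1/0 outcome
r, p = stats.pointbiserialr(correct, principle_counts)
print(f"r = {r:.2f}, r^2 = {r**2:.2f}, p = {p:.3f}")
```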

Findings

Here are the results of Bonner and D’Agostino’s study:

• Using legal principles had a strong positive correlation (r = .66) with performance. When students organized their decisions by checking all options and marking the ones that were most relevant and irrelevant, they were more likely to use the correct legal principles.
• Using common sense and test-wise strategies had a negative, though not significant, correlation with performance. Using “deductive elimination, guessing, and seeking test clues was associated with low performance on the selected items.”
• Among the metacognitive skills, organizing (reading through and methodically going through all of the options) had a strong positive correlation (r = .74) with performance. (Note: to understand the meaning of a correlation, square the r; the resulting number is the degree to which the variation in one variable is explained by the other. In this example, 55% of the variation in MBE performance is accounted for by organizing.) When students are self-aware and monitor what they are doing, they increase their chances of picking the right answers.

The Takeaway

The bottom line is that the MBE does possess relevance to the construct of legal problem solving—at least with respect to the questions on which the think-aloud was performed. In short, Bonner and D’Agostino demonstrate that the MBE has validity for the purpose of testing legal problem solving.

Unsurprisingly, using the correct legal principle was strongly correlated with picking the right answer. This means a thorough understanding and recall of the legal principles are critical to performance. That puts a premium not only on knowing the legal doctrine well but also on exposure to as many different fact patterns as possible, which will help students spot the issues and instantiate the legal rules in new facts. Analogical reasoning research suggests that more diverse exposure to applications of the principles should help trigger a student’s memory and retrieval of the appropriate analogs that match the fact pattern of the question. Students should take a credit-bearing, bar-related review course, and they should take this course seriously.

If you want to do well on the MBE, don’t jump to conclusions and pick what seems to be the first right answer. The first seemingly right answer could be a distractor, designed to trick impulsive test-takers. Because of the time constraints, examinees are especially susceptible to picking the first seemingly right answer. But the MBE punishes impulsivity and rewards thoroughness, so examinees should go through all the possible choices and mark the good and bad ones. They shouldn’t waste their time on test-wise strategies of eliminating choices based on pure common sense, on deduction that has no reference to legal knowledge, or on looking for clues in the stem of the question. Remember, test-wise strategies unconnected with legal knowledge had a negative relationship to MBE performance.

Although factual errors resulting from poor comprehension were not prevalent among the participants in Bonner and D’Agostino’s study, that doesn’t mean that good reading comprehension is unimportant. A student who has trouble with reading comprehension should practice more active engagement with fact patterns and answer options, as Elkin suggests in his guide.

Finally, more training in metacognitive skills should improve performance. Metacognition, which is often confused with cognitive skills such as study strategies, is practiced self-awareness. As Bonner and D’Agostino put it, “practice in self-monitoring, reviewing, and organizing decisions may help test-takers allocate resources effectively and avoid drawing conclusions without verification or reference to complete information.”



Driven to Distraction?

From the back of any classroom in law school, you’re likely to see students surfing the web, checking email, or texting while a lecture or class discussion is going on. A recent National Law Journal article reports a study’s finding that 87% of the observed law students were using their computers for apparently non-class-related purposes—playing solitaire, checking Facebook, looking at sports scores, and the like—for more than 5 minutes during class. The study, by St. John’s Law Professor Jeff Sovern, was published in a recent edition of the Louisville Law Review. 2Ls and 3Ls, not 1Ls, were the groups most likely to be engaged in non-class-related activities.


Conventional wisdom and common sense tell us that media multitasking is bad for learning and instruction. For stretches of time, students aren’t paying attention to what’s happening in class. But it’s unclear whether the lack of engagement is for perceived good reasons. Perhaps the class discussion or portion of the lecture was perceived as boring or irrelevant to what will be on a test.

Scientists have termed this student behavior “media multitasking,” and it isn’t confined to the classroom. Many students probably do this while studying. And media multitasking isn’t just a student problem—we all do it. Psychologist Maria Konnikova writes in her New Yorker blog that the internet is like a “carnival barker” summoning us to take a look.

Konnikova points to a study that suggests a possible adverse side effect of heavy media multitasking. Ophir, Nass, and Wagner (2009) conducted an experiment where they asked the question: Are there any cognitive advantages or disadvantages associated with being a heavy media multitasker? This is one of the few studies that look at the impact of multitasking on academic performance.

In the experiment, participants were classified and assigned to two groups—heavy media multitaskers or light media multitaskers. Participants in both groups had to make judgments about visual data such as colored shapes, letters, or numbers. Ophir et al. found that when both groups were presented with distractors, heavy users were significantly slower to make judgments about the visual data than light users, and heavy users were more likely to make mistakes. Heavy users also had a more difficult time switching tasks than light users under the distractor condition. The researchers explained the results with the theory that light users exercised more top-down executive control in filtering in relevant information and thus were less prone to distraction by irrelevant stimuli. By contrast, heavy users took a more bottom-up approach of taking in all stimuli and thus were more likely to be led off track by irrelevant stimuli.

In short, the researchers concluded that being a heavy media multitasker has adverse aftereffects. If you’re a heavy user, you’re more likely to be taking in irrelevant stimuli than if you are a light user. In other words, heavy media multitasking promotes the trait of being over-attentive to everything, relevant or irrelevant. That could mean that if you’re a heavy user, your lecture or book notes are more likely to contain irrelevant information – irrespective of whether you’re actually online when you were taking notes. It’s as though heavy media use has infected your working memory with a bad trait. Because the effects are on working memory, heavy media multitaskers should be advised to go back and deliberate over their notes, and revise accordingly.

But a note of caution. The study’s conclusions are a long way from being applicable to what actually happens in academic life. The study involved exposures to relevant and irrelevant visual and letter or number recognition stimuli. So it’s unclear whether the heavy user is equally distracted when she or he is presented with semantic information—textual or oral information that has meaning. And that’s an important qualification of the study’s conclusions because semantic information constitutes the bulk of what we work on as students. As far as I can tell, there is no such study where participants are asked to make judgments about semantic information.

Also the direction of causality is unclear. Does heavy multitasking make a person a poor self-regulator of attention? Or does heavy media multitasking happen because a person is a poor self-regulator?

Although the research is far from conclusive, we shouldn’t wait to act on what exists. Here are a few suggestions on what students and instructors can do.

• First, manage your distractions. Common sense dictates that being distracted is bad for studying—whether the source of the distraction is the internet, romantic relationships, or television. Take a behavioral management approach: refrain from using the internet until you’ve spent some quality undistracted time studying or listening to what’s happening in class, and then reward yourself with a limited amount of internet time after class.

• Second, assess your internet use. Monitoring your use alone will have a reactive effect. Just being aware of what you’re doing will cut down on the amount of irrelevant stimuli you’re exposed to.

• Third, if you identify yourself as a moderate to heavy internet consumer of non-academic content while studying or in class, be more vigilant about the quality of your lecture and study notes. These notes are more likely to contain a lot of irrelevant material, and you’ll need to excise irrelevancies.

• Fourth, instructors should take note: if students are tuning out in class, re-evaluate lesson plans and classroom management to more fully engage the majority of students. Banning the internet from the lecture halls won’t necessarily make students more engaged. If the class doesn’t engage them, they will find other ways to tune out—with or without the internet.

