Introduction

This blog is about medical education in the US and around the world. My interest is in education research and the process of medical education.



The lawyers have asked that I add a disclaimer making it clear that these are my personal opinions and do not represent the position of any university with which I am affiliated, including the American University of the Caribbean, the University of Kansas, the KU School of Medicine, Florida International University, or the FIU School of Medicine. Nor does any of this represent any position of the Northeast Georgia Medical Center or Northeast Georgia Health System.



Monday, January 17, 2011

Criteria for selecting students in the Match

We are fast approaching a very important day in the academic calendar. On February 23, 2011, residency programs around the country must enter their Rank Order Lists. This day is the culmination of a lot of work by each program's director, faculty, and residents, and by the students applying to that program. The results of the Match are released about a month later, on March 17, but the work is all done once the lists are in on February 23.
If you are not familiar with this process, let me walk you through it. At some point during the third year of medical school, students decide what specialty they want to apply to. Over the next several months they gather letters of recommendation from faculty. Frequently, they will do a fourth-year elective rotation in their specialty of interest, and they have to decide whether to do an “away” rotation at another school. Most students are obligated to enter the National Resident Matching Program; a fourth-year student at an allopathic medical school in the US has no choice but to enter the Match.
In October, students begin their job interviews. We call these residency interviews, but honestly the students are trying to land a job as a resident in a particular program. Students will have anywhere from 10 to 40 interviews, depending on the competitiveness of the specialty they are applying to enter. These interviews may be anywhere across the country, but they are mostly in larger cities (that is where the teaching hospitals are located).
So, on February 23 the students enter their Rank Order Lists. The program they like best is Number 1; their least favorite is last. Residency programs do the same, ranking the students they interviewed from 1 down to however many they want to rank. Programs don't have to rank every student they interview, and students don't have to rank every program. But the Match is a binding contract:1 if a student or program ranks the other, both are legally bound by the resulting match.
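What happens to those lists? The NRMP feeds them into a stable-matching algorithm, a variant of applicant-proposing deferred acceptance (the Roth–Peranson algorithm). Here is a minimal sketch of the core idea in Python; the names, preferences, and one-slot quotas are hypothetical, and the real algorithm also handles couples matching, program quotas of varying sizes, and other wrinkles this sketch ignores.

```python
# A minimal sketch of applicant-proposing deferred acceptance, the idea
# behind the Match. All names and rank lists below are hypothetical.

def deferred_acceptance(applicant_prefs, program_prefs, quotas):
    """Match applicants to programs; applicants propose in rank order."""
    # Position of each applicant on each program's rank order list.
    rank = {p: {a: i for i, a in enumerate(lst)} for p, lst in program_prefs.items()}
    next_choice = {a: 0 for a in applicant_prefs}   # next list entry each applicant will try
    holds = {p: [] for p in program_prefs}          # tentative matches per program
    free = list(applicant_prefs)                    # applicants not currently held anywhere

    while free:
        a = free.pop()
        prefs = applicant_prefs[a]
        if next_choice[a] >= len(prefs):
            continue                                # list exhausted: applicant goes unmatched
        p = prefs[next_choice[a]]
        next_choice[a] += 1
        if a not in rank[p]:
            free.append(a)                          # program did not rank this applicant
            continue
        holds[p].append(a)
        holds[p].sort(key=lambda x: rank[p][x])     # best-ranked applicants held first
        if len(holds[p]) > quotas[p]:
            free.append(holds[p].pop())             # bump the lowest-ranked hold
    return holds

# Hypothetical example: two programs with one slot each, three applicants.
applicants = {"Ann": ["Mercy", "City"], "Bob": ["City", "Mercy"], "Cam": ["City"]}
programs = {"Mercy": ["Bob", "Ann"], "City": ["Cam", "Ann", "Bob"]}
print(deferred_acceptance(applicants, programs, {"Mercy": 1, "City": 1}))
# {'Mercy': ['Bob'], 'City': ['Cam']} -- Ann goes unmatched in this toy example
```

One nice property of the applicant-proposing design (putting aside couples matching) is that a student cannot do better by gaming the list: ranking programs in true order of preference is the best strategy.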
The question for today is how do programs decide how to rank the students that they interview? There are several ways, some good, some really bad! Let’s start with the good ways.
Letters of recommendation can be very helpful if they are written by honest faculty physicians who know the student and have personally worked with the student. These letters can be a great assessment of a student's global performance. A 1987 study by Keynan et al.2 compared subjective global faculty ratings to other types of assessment: a multiple-choice question (MCQ) test and an oral examination. They found that “the 'subjective' expert assessment of performance through global rating scales is comparable to that of 'objective' evaluation through written MCQ.” They also found, using a stepwise regression analysis, that the ratings of 'reliability', 'knowledge', 'organization', 'diligence', and 'case presentation' were the most predictive of the overall global rating. Chair's letters, which are often written by the chair of a department (who probably does not know the student very well), are generally not much help.
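For readers curious about the method, stepwise regression adds predictors one at a time, keeping the ones that most improve the model's fit to the outcome. Below is a minimal sketch of forward selection in Python; the data are synthetic stand-ins, not the study's data, and the subscale names are just borrowed from the list above.

```python
# A minimal sketch of forward stepwise selection, the kind of analysis
# Keynan et al. used to find which subscale ratings best predict the
# overall global rating. All data here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(0)
subscales = ["reliability", "knowledge", "organization", "diligence", "presentation"]
X = rng.normal(size=(120, len(subscales)))          # 120 hypothetical students
# Synthetic global rating driven mostly by the first two subscales, plus noise.
y = 0.6 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(scale=0.5, size=120)

selector = SequentialFeatureSelector(
    LinearRegression(), n_features_to_select=2, direction="forward"
).fit(X, y)
chosen = [s for s, keep in zip(subscales, selector.get_support()) if keep]
print("Most predictive subscales:", chosen)         # expect reliability, knowledge
```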
Another good way to rank students is through an interview. Skilled interviewers can pick up on many communication and personality issues that probably don't show up on a paper application. Maybe the applicant is very introverted and has difficulty talking during the interview. Or maybe they are a jerk or a racist or a sexist. A personal interview can pick up these problems (not always, but often).
Unfortunately, there are also some bad ways to rank students. The most common are grades and board scores; frequently, medical school grades and USMLE scores are the screens that decide whether a program even invites a student to interview.
I want to focus on USMLE scores. Grades are quite variable from school to school: some schools use an A-to-F scale, some use Pass/Fail, and others use Satisfactory to Superior. Preclinical grades don't have much predictive value for clinical grades, and neither is very predictive of performance in residency.
Board scores are just as bad. They seem to be an objective way to compare students: everyone across the country takes the same test. But there is one big problem. The USMLE is designed to measure knowledge and the application of knowledge, and it was created to give state licensing agencies a common evaluation for licensure. There are statistical problems with interpreting the numeric scores from a test built around a pass/fail decision; such an exam is designed to measure most precisely near the passing standard, not to make fine distinctions among high scorers. And several studies all show basically the same thing: performance on the boards does not correlate with performance as a physician.
In 2005, Rifkin and Rifkin3 compared the performance of all the first-year Internal Medicine residents at a large academic medical center on standardized patient encounters to their scores on USMLE Steps 1 and 2. They found very low correlations: for Step 1, the correlation was 0.2 (df=32, p=0.27), and for Step 2 it was 0.09 (df=30, p=0.61). Remember, a correlation closer to 1 means the two measures are more strongly related; these values are close to zero and not statistically significant.
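If you want to see what such a statistic looks like in practice, here is a minimal sketch of the Pearson correlation Rifkin and Rifkin report, computed in Python. The scores and ratings below are hypothetical stand-ins, not the study's data.

```python
# A minimal sketch of a Pearson correlation between board scores and
# standardized-patient performance. All numbers are hypothetical.
from scipy.stats import pearsonr

step1_scores = [198, 215, 221, 230, 207, 240, 212, 225]   # hypothetical residents
sp_ratings   = [3.1, 2.8, 3.5, 2.9, 3.3, 3.0, 2.7, 3.4]   # hypothetical SP ratings

r, p = pearsonr(step1_scores, sp_ratings)
# An r near 0 with a large p-value means little evidence of a linear relationship.
print(f"r = {r:.2f}, p = {p:.2f}")
```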
A more recent study is very critical of the use of USMLE scores for the selection of residents. This study, by McGaghie and colleagues,4 was a research synthesis using a critical review approach.5 They collected and reported correlations between USMLE Step 1 and 2 scores and several reliable measures of clinical skills, including auscultation of the heart, performance of ACLS (Advanced Cardiac Life Support), communication with patients, thoracentesis, and central line placement. They found correlations ranging from -0.05 to 0.29 for Step 1 and from -0.16 to 0.24 for Step 2.
Their conclusion sums it all up. "Use of these scores for other purposes, especially postgraduate residency selection, is not grounded in a validity argument that is structured, coherent, and evidence based. Continued use of USMLE Step 1 and 2 scores for postgraduate medical residency selection decisions is discouraged."
I couldn't agree more. If I need a neurosurgeon to operate on my brain, I want to know that he has a very steady hand, not the highest board score. If I need a radiologist, I want to know that her visual pattern recognition is outstanding, not that she scored well on a multiple-choice question test. And if I need a family doctor, I want to know that his clinical reasoning and communication skills are excellent, not that he scored well on the boards.
References
1. http://www.nrmp.org/res_match/policies/map_main.html
2. Keynan A, Friedman M, Benbassat J. Reliability of global rating scales in the assessment of clinical competence of medical students. Med Educ. 1987;21(6):477-81.

3. Rifkin WD, Rifkin A. Correlation between house staff performance on the United States Medical Licensing Examination and standardized patient encounters. Mt Sinai J Med. 2005;72(1):47-9.

4. McGaghie WC, Cohen ER, Wayne DB. Are United States Medical Licensing Exam Step 1 and 2 scores valid measures for postgraduate medical residency selection decisions? Acad Med. 2011;86(1):48-52.

5. Eva KW. On the limits of systematicity. Med Educ. 2008;42:852–853.
