CBME: the wave of the future in #meded! One of the claims of this medical education overhaul is that it will allow residents in difficulty to be identified earlier in their training … but will it? The hosts review a study that examined this assumption. (Bonus: get a peek into Jon’s hometown’s history).
KeyLIME Session 219
Listen to the podcast.
Ross S et al., Association of a Competency-Based Assessment System With Identification of and Support for Medical Residents in Difficulty. JAMA Netw Open. 2018 Nov 2;1(7):e184581.
Jonathan Sherbino (@sherbino)
Random personal fact. I grew up in Kapuskasing. That’s in the North. Northern Canada. If you have any preconceived ideas about Canadians… frozen tundra, snowball fights in July… amplify them 100X and then you have a sense of what my hometown was like. We had dual claims to fame. The first was a World War 2 POW camp without a fence. Good luck navigating the 1000s of kilometres of dense woods inhabited only by moose and blackflies. The second was that we were home to an early warning radar system – part of the NORAD (North American air defence) Cold War apparatus. What does this have to do with #meded?
A long-standing complaint of clinician teachers and program directors is the late identification of a trainee in trouble. I suspect that many #KeyLIMErs recall a time in their career when a learner was identified late in training as not ready for independent practice. When I have polled program directors during numerous talks, there has yet to be a room where at least one program director did not admit that the “solution” to this problem was to send the learner into their final high-stakes knowledge exam, squint their eyes closed, and hope for the best.
One of the promises of CBME is that programmatic assessment (systematic, longitudinal, multi-sampled performance with group summary judgment) will remedy this problem. Learners in difficulty will be identified earlier in training with an opportunity for effective and less intensive remediation. But is this assumption true? Enter Ross et al.
“we addressed one of the core assumptions of CBME by examining the extent to which use of competency-based assessment is associated with a change in rates of identification of residents in difficulty compared with traditional assessment”
Key Points on the Methods
This was a retrospective, observational cohort study that adhered to STROBE guidelines.
Urban family medicine resident assessment data from a single site were compared across two periods: 2006-2008 (traditional assessment) and 2010-2014 (programmatic assessment).
Files were reviewed for: a flag (below-average performance on an end-of-rotation or summative report, or unprofessional behaviour); whether the flag was addressed by a supervisor or the program; and whether a resident was labelled as in difficulty based on program criteria (the number and types of flags).
Traditional assessment consisted of typical end-of-rotation reports: expert judgement captured via rating scales and checklists.
Programmatic assessment required narrative description of directly observed resident performance, with tags linking each narrative to high-level family medicine competencies (professionalism, communication, procedural skill, clinical reasoning, etc.). An emphasis on assessment for learning (formative assessment) was part of the design.
The number of flags was compared before and after implementation of programmatic assessment.
Key Points on the Results

n = 458, with roughly similar demographics before and after programmatic assessment implementation, with the exception of the proportion of international medical graduates (IMGs): 35% pre versus 18% post. A secondary analysis removing IMG data did not show any difference in the overall findings, and a logistic regression did not show a significant association between IMG status and being flagged.
| | Traditional (2006-2008) | Programmatic (2010-2014) |
|---|---|---|
| At least 1 flag per cohort year | 45-51% | 16-27% |
| 5 or more flags per cohort year | 16-27% | 0-11% |
- Reduction in the proportion of residents receiving at least 1 flag after implementation of programmatic assessment: difference, 0.38 (95% CI, 0.377-0.383)
- Decrease in the proportion of residents in difficulty after implementation of programmatic assessment: from 0.17 (95% CI, 0.168-0.172) to 0.13 (95% CI, 0.128-0.132)
- For residents who had 1 or more flags on assessments, increase in documentation that the flag was discussed after implementation of programmatic assessment: difference, 0.18 (95% CI, 0.178-0.183)
The authors conclude…
“The findings from this multiyear comparison of implementation of competency-based assessment and traditional assessment support a proof of concept for CBME. Changing the focus of assessment to an emphasis on direct observation, increased documentation, and assessment for learning may be associated with improved identification of learners who are deficient in 1 or more competency and with how those deficiencies are addressed.”
Spare Keys – other take home points for clinician educators
This is an important paper because it tests the assumptions of “the-next-great-thing” in #meded. It’s a reminder of the importance of education scientists challenging the assumptions that undergird the innovations of clinician educators.
Access KeyLIME podcast archives here