Today’s guest post is from Deepak Dath (@drddath), a hepatobiliary surgeon and CE at McMaster University.
A hallmark of competency-based medical education (CBME) is frequent, direct observation of performance. Learners need direct observation for both formative and summative assessment. The basis of their assessment is the activity they perform right in front of our eyes or the results of their work when they present it to us.
Numerous direct observation of performance (DOP) instruments are widely available, and some have been validated for particular situations. But for DOPs to be used effectively, five conditions need to be met.
1. There must be an automatic triggering of the DOP assessment. Triggering the completion of a DOP is most easily done via the learner. However, faculty need to trigger an assessment when performance is below expectations or when major lapses occur – situations in which the learner has a conflict of interest in triggering an assessment.
2. The DOP instrument must be easy to use. There are a few options – one example is the O-SCORE.
3. The DOP instrument must be readily available. Existing technology can address this criterion – imagine having the assessment instrument on your hand-held device, with demographics automatically populated. Technology cues the process and focuses the assessment on the individual learner.
4. There must be an infrastructure to collate assessments and organize the results to provide meaningful feedback to both the learner and the program. Both quantitative and qualitative data can be collated and organized via existing technology.
5. Finally, there needs to be institutional support for a culture of assessment. The power of DOPs is found in the multiple biopsies from multiple assessors that occur over time. This process requires support from the program for both learners and assessors.
CBME is backstopped by frequent, direct observation. Technology makes the process feasible.
Image copyright of The Royal College of Physicians and Surgeons of Canada