Today’s post is from Jennifer Kogan, a thought leader on workplace-based assessment and direct observation instruments. Her original work on the frames of reference (FoR) of observers (e.g. faculty) demonstrated that an observer uses an internal, idiosyncratic FoR, rather than a normative FoR based on the cohort of the “observee” (the individual being observed). In other words, when assessing a trainee, a faculty member scores performance against their own (the faculty member’s) personal competence, not (as expected) against how the trainee compares with their classmates. See here for more details.
More recently, Jen has argued that research that explores variability in direct observation should include patient outcomes as a metric of trainee performance. See here for her argument.
– Jonathan (@sherbino)
Direct observation of clinical skills (faculty or inter-professional team members observing residents performing real clinical work) is essential to assess trainees and provide them with informative feedback that fosters their progress from novice to competent (and beyond). Direct observation, which assesses “does” on Miller’s pyramid, is particularly relevant in competency-based medical education. I believe we must increase the quantity and quality of direct observation, particularly as it relates to observing residents providing care to patients (i.e. history taking, physical examination, and counseling). For too long, we have relied excessively on proxy measures, such as the ability to present, answer questions on rounds, or write notes, to assess these core clinical skills.
Typically, direct observation of trainees with patients for the purpose of feedback and assessment occurs infrequently. When you ask observers why this is the case, the first reason universally offered is “there isn’t enough time.” Therefore, to increase direct observation, we must ensure, first and foremost, that our education and patient care systems support, value, and reward it. We can also help assessors identify ways to embed “snapshots” of observation and feedback in their work activities so that it is feasible, useful, and valued by the learner while simultaneously improving patient care.
Workplace-based assessment has been plagued by poor reliability and validity. To realize the benefits of direct observation, we must improve the quality of the assessments. This requires investing in the professional development of faculty and members of the inter-professional team who will be observing residents in the workplace.

What might this look like? In our era of competency-based medical education, we need to train assessors to use a similar, criterion-referenced standard to assess performance. The goal of residency training is to ensure that residents can be entrusted to practice unsupervised upon completing training. Therefore, we need to define “satisfactory” as “competent to provide safe, effective, patient-centered care unsupervised.” This means that observers must have a “shared mental model” of what “competent” looks like. Assessors must be familiar with the best practices related to the skills they are assessing so that their observation and feedback promote high quality care.

Professional development strategies such as performance dimension training can be helpful in this regard. In performance dimension training, participants work together to define the key components of competence (knowledge, behaviors, and attitudes) for specific skills (for example, participants identify the knowledge, behaviors, and skills that comprise counseling). This should be informed by evidence-based best practices. Assessors then have a “shared mental model” that can guide observation and provide a vocabulary for feedback and assessment. Although workplace-based assessment will always require multiple observers making multiple observations in multiple contexts over time to achieve validity, we still have tremendous opportunity to “raise the floor” of our assessment quality, recognizing, in particular, that there are not limitless ways to provide high quality care.
Featured image: Miller’s pyramid, from: Miller G. The assessment of clinical skills/competence/performance. Acad Med 1990; 65: 63–67.