In this week’s episode, Jason discusses ITERs, or In-Training Evaluation Reports, a fundamental building block of health professions education. These assessment tools combine Likert scales and narrative text to retrospectively document a trainee’s performance on a defined clinical experience. However, ITERs are often misused. A group of Assessment Avengers, led by Rose Hatala from UBC, have assembled to show us the value of narrative comments in ITERs. They sought to determine existing validity evidence for or against the use of ITER narrative comments to assess clinical trainees, and to identify any gaps in evidence.
Curious what the hosts thought of this systematic review? Check out the podcast here (or on iTunes!) to find out!
KeyLIME Session 154:
Listen to the podcast
Read the episode abstract here.
Hatala R, Sawatsky AP, Dudek N, Ginsburg S, Cook DA. Using In-Training Evaluation Report (ITER) Qualitative Comments to Assess Medical Students and Residents: A Systematic Review. Acad Med. 2017 Jun;92(6):868-879.
Reviewer: Jason R. Frank (@drjfrank)
ITERs, or In-Training Evaluation Reports, are a fundamental building block of health professions education. These assessment tools combine Likert scales and narrative text to retrospectively document a trainee’s performance on a defined clinical experience.
However, ITERs have been much maligned in the literature of the last two decades, having been saddled with low inter-rater reliability, numerous threats to validity, administrative overgrowth, and that dreaded disease of meded: nonplastic ITERitis checklistica. In short, ITERs are often horribly misused. They are deployed because institutions need to document that a trainee was present with a pulse, and that some competencies were probably achieved. “Death to ITERs!” has been a recent rallying cry. Enter the “supergroup” of meded researchers, this time led by Rose Hatala from UBC: they reorient our view of the loathsome ITER with a systematic reminder of the power of documented narratives.
These Assessment Avengers assembled to show us the value of narrative comments in ITERs. They sought to determine the existing validity evidence for or against the use of ITER narrative comments to assess clinical trainees, and to identify any gaps in that evidence.
Type of Paper
Research: Systematic review
Key Points on Methods
This is a wonderful systematic review with gold standard methods. Note the clear definitions, the documented steps and inter-rater stats, the use of PRISMA, and the grounding in previously published frameworks. They nicely describe Kane’s validity framework adapted for qualitative assessments (see Table 1). They also appraised the quality of studies using Popay’s criteria for evaluating qualitative research. Truly a tour de force.
My only quibble is their pragmatic search: restricted to English-language publications only, it meant the included studies were weighted toward the US and Canada.
To be included, papers had to be original research on the qualitative assessment of clinical trainees. Narratives had to be interpreted to make judgements about trainees, not analyzed as research data in their own right.
Their description of Kane’s framework, adapted for qualitative assessments, is reproduced in Table 1 of the paper.
The authors identified 777 candidate studies and selected 22 for data extraction. Inter-rater reliability measures were moderate to high (ICC 0.73; kappa > 0.41). Papers varied widely in their reporting elements and in their qualitative quality scores using Popay’s criteria.
Scoring inference was supported by studies showing that rich narratives about constructs of importance can be harvested, that these narratives vary by setting and context, and that they change with different prompts.
Generalization, or the ability to synthesize data into a useful and accurate overall interpretation, was supported by studies that showed thematic saturation and consistent analysis.
Extrapolation was supported by correlations with other assessment data, including numeric scores, as well as evidence that narratives reflect constructs of importance.
Evidence for implications was not readily found.
The authors conclude that the use of narratives in ITERs is supported by validity evidence, though future studies are needed to address the implications and decisions inferences.
Spare Keys – other take-home points for Clinician Educators
1. This is a gold star meded systematic review with model methods for anyone considering an SR (though this one is limited to English-language studies).
2. ITERs are not dead, contrary to recent literature. This paper makes the case that narratives can be rich and powerful.
3. This is truly an all-star team of meded researchers. Look up their research for other outstanding papers.
Access KeyLIME podcast archives here
Check us out on iTunes!