On KeyLIME this week, Jason discusses how the authors of this paper, from Tufts and the Maine Medical Center, set out to “gather validity evidence” for their FEEDME feedback evaluation instruments. He poses the question: wouldn’t it be good if we had some kind of validated instrument to give us evaluation information on feedback encounters or feedback in the environment? Intrigued? Read on, and check out the podcast here (or on iTunes!)
KeyLIME Session 160:
Listen to the podcast
Read the episode abstract here.
Bing-You R, Ramesh S, Hayes V, Varaklis K, Ward D & Blanco M. Trainees’ Perceptions of Feedback: Validity Evidence for Two FEEDME (Feedback in Medical Education) Instruments. Teaching and Learning in Medicine. 2017 Dec 14.
Reviewer: Jason Frank (@drjfrank)
All clinician educators can readily agree on the critical role effective feedback plays in the developmental arc of every learner. Galaxies of papers have been written about feedback in its various incarnations (we’re all about coaching feedback right now), including some that we reviewed on #KeyLIMEpodcast Episode 139. However, what do we really know about the quality of feedback given in our clinical environments? Wouldn’t it be good if we had some kind of validated instrument to give us evaluation information on feedback encounters or feedback in the environment?
Key Points on Method
This study has elaborate methods of the kind Jon & Linda hate me choosing in papers: complex, convoluted, difficult to remember, needing a summary flow diagram, and clocking in at over 1000 words of journal text.
Briefly, the team of authors used a multi-step process:
1. Reviewed the lit on feedback
2. Created a working model of feedback in the clinical setting
3. Created a 54-item feedback evaluation instrument
4. Conducted cognitive interviews with 35 med students, 11 residents, and 20 more med students in meded who happened to be nearby
5. Refined the instrument into 2 different instruments (1 for a feedback encounter, and 1 for feedback culture)
6. Identified 12 meded experts and 19 local faculty to perform a Delphi on the items of the instruments
7. Recruited 31 more different trainees to pilot the instruments
8. Recruited ~140 more trainees to complete the instruments & perform a factor analysis
9. Then they rested
The study was exempt from research ethics review.
The authors used each step in the process to refine their instruments based on feedback and use, resulting in 18 items in FEEDME-Culture and 17 items in FEEDME-Provider.
Learners said they liked them.
The authors concluded that they demonstrated empirical evidence of validity of these two novel feedback instruments.
Spare Keys – other take home points for clinician educators
1. An instrument that gives us valid information on feedback in the clinical environment is an attractive idea, but we’re not sure FEEDME is there yet.
2. The authors used a framework for demonstrating validity evidence of an instrument that is not contemporary. We recommend David Cook’s version of Messick’s validity model and Kane’s approach to assessment validity.
3. The authors’ use of convenience samples of local trainees on multiple occasions is a potential threat to validity.
4. Shout-out to Bob Bing-You: we have been fans of your work for years.
5. Shout-out to the journal Teaching and Learning in Medicine. While not as widely read as some of the other top meded journals, it often contains great content and has some brilliant educators on its editorial board (though the board could use some geographic diversity). Shout-out to Larry Hurtubise, organizer extraordinaire, John Mahan, and the editor-in-chief Anna Cianciolo, who have contributed to KeyLIME in the past.
Access KeyLIME podcast archives here
Check us out on iTunes