In today’s episode, gadget-loving Jon selects a qualitative study that seeks to identify enablers of and barriers to engagement with an EPA app. We have the technology – but are we using it to its full benefit?
KeyLIME Session 271
Young et al. A mobile app to capture EPA assessment data: Utilizing the consolidated framework for implementation research to identify enablers and barriers to engagement. Perspect Med Educ. June 2020.
Jon Sherbino (@sherbino)
Tech is never a solution for poor educational design. But good educational design can fail because of logistics. For example, the current emphasis in (P)GME on programmatic assessment has a strong theoretical rationale, but handling large amounts of numeric and narrative assessment data is a problem that causes good design to fail. In this case, tech is a solution. When good design and good tech meet, HPE innovation happens.
KeyLIMERs know that I love gadgets. Not surprisingly, this paper is about an education app. The meta message from this paper is around implementation science and contemporary ways to understand how to evaluate educational interventions.
“If EPA apps are to be successfully incorporated into programmatic assessment, a better understanding of how they are experienced by the end-users will be necessary. The authors conducted a qualitative study using the Consolidated Framework for Implementation Research (CFIR) to identify enablers and barriers to engagement with an EPA app.”
Key Points on the Methods
This study was set in a single outpatient psychiatry continuity clinic affiliated with a large academic teaching hospital.
Prior to the introduction of the EPA app, assessment used a paper-based 27-item checklist with a global rating scale, plus reinforcing and corrective narrative comments.
The iOS app required a faculty member to select the relevant EPA, assign an entrustment score, and provide a corrective comment.
Faculty and learner dyads were monitored by the study authors via a dashboard to promote regular use of the app. Onboarding to the app included written instructions and a 30-minute one-on-one meeting.
Structured interviews of faculty and residents were performed using a modified CFIR interview guide, with 26 constructs across 5 domains.
Transcripts were independently coded using a directed content analysis, with differences resolved by consensus. Codes were constructed into themes that mapped to the CFIR domains.
The authors’ reflexivity statement included their close relationship with the residency program and an a priori belief in the app’s efficiency for workflows. They also noted concern that the app would limit the quantity and quality of narrative comments.
Theoretical sufficiency was achieved after 8 faculty and 10 resident interviews. Key themes included:
- fast and frequent feedback
- force function of corrective feedback valued
- force function of distilling feedback into essential elements
- loss of detail and nuance of feedback
- misunderstanding of entrustment scale
- positive emotional response
- conflict with use in front of patients
- no follow-up on the feedback after initially reading it
- the app was not used when competing clinical demands exceeded the available time
- aligned with institutional values and norms
The authors conclude…
“The findings support ease of use and utility but also highlight important barriers such as competing demands, variable faculty understanding of the assessment framework, lack of resident use of the feedback beyond initial receipt, and salient tradeoffs when comparing comments generated by the app versus longer, more detailed paper-forms. Educators should utilize app development guidelines that optimize the user interface.”
Access KeyLIME podcast archives here
The views and opinions expressed in this post and podcast episode are those of the host(s) and do not necessarily reflect the official policy or position of The Royal College of Physicians and Surgeons of Canada. For more details on our site disclaimers, please see our ‘About’ page.