Simulation Education – Too much emphasis on the toys and not enough on the theory?

By Jonathan Sherbino (@sherbino)

Over the last two decades, one of the prominent debates in simulation education has been around fidelity – the accuracy with which a simulation represents reality.  For thousands (!!) of dollars, technology companies can provide a lot of bells and whistles, from partial task trainers to immersive environments. (Remember the fantasies promised by VR?)


The unspoken assumption was that the degree of fidelity correlated with the efficacy of learning.  One of my favourite studies at the forefront of challenging this unspoken (and expensive) assumption comes from a colleague at McMaster University. (Here’s the reference: Matsumoto ED, Hamstra SJ, Radomski SB, Cusimano MD. 2002. The effect of bench model fidelity on endourological skills: a randomized controlled study. J Urol. 167(3):1243-7.)

Essentially, the authors demonstrated that a Styrofoam cup and two McDonald’s straws are equivalent to a high-fidelity bench trainer for teaching (and assessing) ureteroscopy and stone removal.


Many additional studies and commentaries have subsequently argued both sides of this issue.  However, a recent commentary from Stan Hamstra and colleagues articulates the issue in a novel and important way. (Bias alert… I’ve published two manuscripts with Dave Cook, the senior author of this commentary, using data they reference in the commentary.) Here’s the reference: Hamstra SJ, Brydges R, Hatala R, Zendejas B, Cook DA. 2014. Reconsidering fidelity in simulation-based training. Acad Med. 89(3):387-92.

The biggest challenge in the high- vs. low-fidelity argument is defining fidelity itself. Structural elements (how the simulator appears / physically represents reality)? Functional elements (what the simulator does)? Educational effectiveness (how the simulator promotes learning)?  Clearly, fidelity is a multifaceted concept that is poorly served by a dichotomous high-vs.-low argument.

Hamstra and colleagues make three recommendations, paraphrased here:

  1. The HPE community should abandon the term fidelity.  Continuing to use a term that is inconsistently applied and defined in the education literature is hindering the advancement of simulation-based training.
  2. Educators should focus on how a simulator represents/reproduces the function/task that is being taught or assessed.  The historical attention in simulator design to physically mimicking clinical conditions is misguided.  Instead, simulators should mimic the functions/tasks that are addressed in the learning objectives (i.e. functional task alignment).
  3. The goal of simulation should be to promote learning. Attention should be given to learning objectives, learner orientation, and learner engagement (and not the sophistication of the hardware). For example, too many simulation programs fail to incorporate established education best practices such as mastery learning or deliberate practice.

What would the simulation director in your program think about these recommendations?

Max Headroom image courtesy of Wikipedia.