Trickery or training? Deception in healthcare simulation

By Victoria Brazil (@SocraticEM)

I remember being deceived in a simulation. Our team was prepared for the arrival of a trauma patient, coming in after a motor vehicle accident. We had clear roles and a plan. The patient arrived, and then more (unexpected) patients arrived from the same accident, and soon we discovered the accident involved a truck carrying toxic gas. Our staff (some of whom were confederates) started falling on the floor unwell. Our plan fell apart. Later, in the debrief, we discussed how well the scenario, and our response, illustrated points about situational awareness and leadership strategy.

But I still felt …well…tricked. It hadn’t been ‘fair’, and I wondered what more deception the rest of the session might involve.

So when is deception ok in healthcare simulation?

It's complex.

Some argue that all simulation involves deception; that the fundamental fakeness of simulation – the monitors aren't real, the patient might be a plastic manikin or actor – puts it in the category of deception, albeit benevolent. I'm not sure I agree with that. Ideally the ground rules are clear in advance – the 'fiction contract' in simulation requires facilitators to be completely transparent about what's real and what's not. There's a longer discussion of this issue here on Simulcast.

At the other end of the spectrum, there are proponents of trickery (e.g. equipment not working, unhelpful confederates) simply to increase the challenge for simulation participants; to ‘toughen them up’ and prepare them for uncertainty. I am also not a fan of this approach. There is more than enough extraneous cognitive load in most simulation experiences, and this approach sets up mistrust and heightens threats to psychological safety. As LeBlanc and Posner explain – we manipulate emotions in simulation at our peril.

But there is a grey zone. A recent study involved deceiving participants about the role of a senior doctor participating in a simulation, in order to explore 'speaking up' behaviour when there are hierarchy gradients in a team. The authors claim such deception was necessary to achieve sociologic fidelity (i.e. a realistic experience with respect to power, hierarchy, professional boundaries, and gender within teams). This more nuanced situation underlines the need for guidance in our deception decisions as simulation facilitators. It raises philosophical issues that date back to Socrates' 'noble lie' in Plato's Republic – the idea that some deceptions are necessary for our own good. But there may be harms – perhaps the Easter Bunny myth is less innocent than most parents realise…?

Calhoun and colleagues offer us a framework for considering deception in simulation in their 2015 commentary. They warn of “an apparent disconnect between current practice and existing empiric research on this subject.” Their framework encourages consideration of learner and faculty backgrounds, educational intent, scenario structure, session goals and institutional environment.

Based on this work and others', I suggest that simulation facilitators should:

  1. Carefully consider the risk and benefits of any planned deception. Trust between faculty and learners is hard to regain once breached.
  2. Remember that we manipulate emotions at our peril, especially without warning.
  3. Reflect on how intent matters. (I’m still ok with the Easter Bunny 😊)
  4. Mitigate the impacts of deception. If possible, be clear that deception is going to occur, even if its exact nature can't be revealed. Bring participants along on the justification.

I'd love to hear of others' experiences.

Happy simulating


The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The Royal College of Physicians and Surgeons of Canada. For more details on our site disclaimers, please see our 'About' page.