By Ronish Gupta
My group sat around the restaurant table at our most recent journal club, discussing the latest randomized controlled trial (RCT) in pediatric critical care. As the evening progressed, I could not help but notice that the discussion persistently gravitated towards our individual experiences with relevant cases, even though the article’s statistical analyses were not particularly complex. As usual, our conclusion was that the RCT could not change routine practice, and that the results should be considered on a case-by-case basis.
A recent study in Critical Care Medicine1 suggests that over half of the patients admitted to the adult intensive care unit are not eligible for any of the top 15 most highly cited RCTs in critical care. So, either we are not studying commonly encountered clinical problems, or, more likely, many patients simply do not fit the rigid RCT participant mold. In a commentary responding to this article2, Lanspa & Morris eloquently attribute this apparent irony to RCTs prioritizing internal validity over external validity.
What does this mean for the clinician at the bedside?
It means that in an effort to minimize noise and maximize the chances of finding a significant result, RCTs often hold participant populations, interventions, and assessments to unrealistic standards. There is usually someone who pipes up during rounds to explain why a particular patient would not have been eligible for a given study, and therefore why the findings cannot be applied directly. It means that despite the enormous effort that goes into planning and executing these RCTs, we are often left unsure of, and skeptical about, how to apply the findings to our own practice.
While large-scale population or group comparison studies may be helpful in developing public health strategies or broad clinical practice guidelines, by the time the evidence filters down to the bedside of an individual patient, its application often becomes far less clear. However, I don’t think we should throw the baby out with the bath water. I once heard someone recommend the practice of evidence-informed medicine, as opposed to evidence-based medicine. It is a subtle but important distinction. The latter suggests a more rigid, prescriptive style of practice, while the former promotes knowing and understanding the existing literature but integrating that evidence with the rest of the clinical picture in a more holistic way. It is a phrase I have pitched to learners from time to time, and it has been met with some agreement and understanding.
What does this mean for the teacher in the journal club?
Although there will always be a role for reviewing and being familiar with the RCT literature within our respective fields, the evening got me wondering whether limiting journal club to these types of studies means losing other learning opportunities. Could there be a role for introducing the occasional case report into the divisional journal club roster? Despite their obvious shortcomings, case reports still hold a place in even the most prominent scholarly journals, and for good reason. A well-constructed case report has the potential to draw attention and stimulate abstract analytical thinking. In a group environment, case reports open the door to sharing experiences and hypotheses. Paradoxically, they can often seem more directly applicable to our day-to-day practice, because they link evidence, pathophysiology, and contextual factors together. A critical appraisal tool for case reports is available from the Joanna Briggs Institute.3 Although it is intended for evaluating case reports for inclusion in systematic reviews, it highlights several important features to consider when reading a case report for any purpose.
Case reports for journal club: give it a try and let me know how it goes.
- Ivie RMJ, Vail EA, Wunsch H, Goldklang MP, Fowler R, Moitra VK. Patient Eligibility for Randomized Controlled Trials in Critical Care Medicine: An International Two-Center Observational Study. Crit Care Med. 2017;45(2):216-224. doi:10.1097/CCM.0000000000002061.
- Lanspa MJ, Morris AH. Why So Few Randomized Trials Are Useful. Crit Care Med. 2017;45(2):372-373. doi:10.1097/CCM.0000000000002115.
- The Joanna Briggs Institute. Critical Appraisal Tools. http://joannabriggs.org/research/critical-appraisal-tools.html. Published 2016. Accessed March 2, 2017.
Featured image via Wikimedia Commons
Image 2 via Flickr
The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The Royal College of Physicians and Surgeons of Canada. For more details on our site disclaimers, please see our ‘About’ page.