Understanding the interaction of the ‘truths’ of validity in #MedEd studies [Part 2]

Read part one here

By Damian Roland (@Damian_Roland)

The validity of a clinical study is determined by a number of key, well-defined factors. While it is possible to have varying qualities of validity, there is generally a clear delineation of how a particular intervention is related to an outcome, or of how potential confounders are controlled for. In medical education or improvement science, validity is a much more challenging concept. Part one highlighted some of these problems. In this second part, a specific example of the challenge is demonstrated through the utilisation of patient video cases in medical education.

Patient video cases are a specific educational intervention that demonstrates key features of clinical signs through an audio-visual medium [12]. They act as a surrogate ‘bedside’ teacher through which an expert can guide the learner and demonstrate features of illness that can only readily be shown via audiovisual stimulus. There is a growing evidence base for their use; however, the exact mechanism of their action has yet to be determined [13].

If patient video cases are to be a useful adjunct to medical and clinical educators’ current teaching resources, especially given that they are potentially time consuming to create, edit and store, then their effectiveness must be demonstrated. A review of patient video cases highlighted the large heterogeneity in current published work. It included 18 studies, which were assessed for the objectivity of their construct and internal validity. To do this, an amended version of a checklist originally proposed by Farrington [14] was used, and the following was applied to each of the studies:

  1. Selection: Does the outcome measure allow for control between groups?
  2. History: Does the outcome measure allow for the effects caused by some event occurring at the same time as the intervention?
  3. Maturation: Does the outcome measure allow for natural progression in learning and knowledge?
  4. Instrumentation: Is the outcome measure reproducible?
  5. Testing: Does the outcome measure itself affect the results?
  6. Differential attrition: Can the outcome measure control for differing numbers of participants in control or experimental groups (if present), or for large drop-out rates?

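The six criteria above amount to a simple per-study checklist. A minimal sketch of how such an assessment might be recorded follows; the classification thresholds and example data are invented for illustration and are not the figures reported in the review:

```python
# Hypothetical sketch: applying the six Farrington-derived criteria
# to a single study. Thresholds and example data are invented for
# illustration; they are not the results of the review itself.

CRITERIA = [
    "selection",
    "history",
    "maturation",
    "instrumentation",
    "testing",
    "differential_attrition",
]

def assess(study: dict) -> str:
    """Return a verdict given {criterion: satisfied?} for one study."""
    unmet = [c for c in CRITERIA if not study.get(c, False)]
    if not unmet:
        return "adequate"        # all six criteria satisfied
    if len(unmet) == 1:
        return "minor concerns"  # a single criterion unmet
    return "inadequate"

# Invented example: one criterion ('history') unmet
example = dict.fromkeys(CRITERIA, True)
example["history"] = False
print(assess(example))  # minor concerns
```

Making the checklist explicit in this way forces each criterion to be scored for every study, rather than validity being asserted in the round.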

Only two of the studies [15,16] satisfied the criteria to ensure adequate internal and construct validity, with three other papers [17-19] raising minor concerns. The review examined internal and construct validity in isolation, as only the outcome measures themselves were being evaluated. It recommended that researchers pay greater attention to highlighting the validity of any outcome measure used.

Theoretically, the relatively discrete and objective nature of all four domains, or ‘truths’, of validity described should enable this information to be included in any study protocol. In practice, there are particular issues with these ‘truths’ in an educational context, as ensuring validity for one ‘truth’ may falsify another. This effect is demonstrated by examining the interactions, or ‘clashes’, between the four truths for each of the studies in the review (figure 2). The Venn diagram is not complete, in the sense that external validity and internal validity tend not to impact on each other (once a study has been performed and internal validity deemed sufficient, this should not be a confounder in an external validation), and nor do statistical validity and construct validity. Not all interventions/outcome measures investigated for patient video cases had challenges with aspects of validity clashing. However, the greatest problems arose from the interplay of the ‘truths’ when the assessment of an individual or group was being examined.

Figure 2
There are various implications of these interactions for performing research on assessment in the context of medical education. Validity is not necessarily a binary phenomenon, and the persistent pursuit of a psychometric property may be neither desirable nor feasible; it is not simply a case of being valid or not valid [20]. However, there are some predictable complications which could be mitigated or controlled for, provided they are considered in the study design. Visualising the interplay of the forms of validity in medical education studies may assist researchers in delivering high-quality methodological approaches.


  12. Roland D, Balslev T. Patient video cases in medical education. Arch Dis Child Educ Pract Ed 2015;100:210-214.
  13. Roland D, Coats T, Matheson D. Towards a conceptual framework demonstrating the effectiveness of audiovisual patient descriptions (patient video cases): a review of the current literature. BMC Medical Education 2012;12(1):125.
  14. Farrington DP. Methodological quality standards for evaluation research. Ann Am Acad Pol Soc Sci 2003;587(1):49-68.
  15. Kamin C, O’Sullivan P, Deterding R, Younger M. A comparison of critical thinking in groups of third-year medical students in text, video, and virtual PBL case modalities. Acad Med 2003;78(2):204-211.
  16. Balslev T, Jarodzka H, Holmqvist K, de Grave W, Muijtjens AM, Eika B, et al. Visual expertise in paediatric neurology. Eur J Paediatr Neurol 2012;16(2):161-166.
  17. Raijmakers PG, Cabezas MC, Smal JA, van Gijn J. Teaching the plantar reflex. Clin Neurol Neurosurg 1991;93(3):201-204.
  18. Balslev T, de Grave W, Muijtjens AMM, Eika B, Scherpbier AJJA. The development of shared cognition in paediatric residents analysing a patient video versus a paper patient case. Adv Health Sci Educ 2009;14(4):557-565.
  19. Wood S, Cummings JL, Schnelle B, Stephens M. A videotape-based training method for improving the detection of depression in residents of long-term care facilities. Gerontologist 2002;42(1):114-121.
  20. Fuller R. Personal communication. Association of Medical Education in Europe Conference 2014, Milan, Italy (session #3b Validity).