The Key Literature In Medical Education podcast tackles clinical reasoning (i.e. decision making, case management) this week. The author builds on Pangaro’s RIME framework to help Clinician Educators classify a learner’s case presentation along a scale from vague – structured – organized – pertinent (VSOP). Check out the abstract (below) or the podcast here to see if this innovation is something you want to adopt into your practice.
KeyLIME Session 132 – Article under review:
View/download the abstract here.
Onishi, H. Assessment of Clinical Reasoning by Listening to Case Presentations: VSOP Method for Better Feedback. Journal of Medical Education and Curricular Development. 2016 (3): 125-131.
Reviewer: Linda Snell (@LindaSMedEd)
The author proposes that there is a link between the quality of case presentations (CP) by learners and the learners’ diagnostic reasoning (DR) ability, that preparing for the former will, through reflection, improve the latter, and that better DR will result in better CP.
Socratic methods such as the One-minute preceptor or SNAPPS allow a clinical teacher to provide feedback on CP and clinical management, but the author states that these methods do not explicitly contain an assessment component for DR (although in my opinion all of the discussion in both methods provides formative feedback).
The author proposes that categorizing CP by levels of DR ability could become the basis for using CP to assess clinical reasoning ability. However, given diverse perspectives of trainers, a global rating scale might help standardization.
Two examples of rating scales are provided: RIME (reporter, interpreter, manager, and educator), which has 4 easy-to-remember descriptors and is used in varied contexts, but is very general; and the Mini-CEX, which includes diagnostic reasoning ability as “clinical judgment,” yet does not explain how clinical judgment is assessed and is not specific for DR.
The author previously published a tool in Japanese for assessing DR (see below) derived from RIME.
This scale is best used on undiagnosed patients seeking initial consultation in outpatient clinics or emergency departments – less useful for complex cases and already diagnosed cases (such as ‘admitted patients’).
Original global rating scale for diagnostic reasoning.

| Level | Condition of presentation | Level of the presenter | Feedback from the trainer to the presenter |
| --- | --- | --- | --- |
| 1 | Lack of essential information about the case, or inappropriate definition or reliability of information about S/S. | Lacks basic clinical skills for H&P or the information required for case presentation. | Ask the presenter about inappropriate terms or essential information of the case presentation. Offer one-to-one practice to the presenter. |
| 2 | Insufficient or unordered information. | Unable to capture each piece of information or to organize information for the case. | Point out what is missing in the case presentation. Ask the presenter to practice case presentation. |
| 3 | Essential information is well covered but KDDs are not well listed. | Able to report the case but unable to interpret the patient’s problems. | Give positive feedback for complete information. Ask the presenter to summarize the presentation and the KDDs. |
| 4 | KDDs are covered but pertinent positive and negative S/S are insufficient. | During H&P, S/S relevant to the KDDs are not obtained. | Give positive feedback for the KDDs and specific feedback on pertinent positive and negative S/S. |
| 5 | Pertinent positive and negative S/S relevant to the KDDs are covered. | Through H&P, the whole picture of the case and its KDDs is clearly described. | Give positive feedback for a good presentation. Ask the presenter to specify the lessons learned from the case. |
The purposes of the study were:
(1) to explore the underlying theory for this assessment tool;
(2) to change the original 5-point scale model into a 4-point scale model (? to simplify the model).
Type of Paper
Research: validation or reconstruction of rating scale?
Tool development, Pilot study.
Key Points on Methods
Faculty development of trainers was done using a standard role play of a trainee–trainer interaction.
17 senior medical students on clinical clerkship presented 84 CPs to 10 faculty supervisors in an outpatient setting. Teachers assessed the CPs using the 5-point scale above.
Descriptive and univariate statistics were done.
Feedback from faculty teachers was obtained.
‘Validation’ was done by comparison with ‘principles of work-based assessment’.
Key Outcomes
1. No student scored 1; mean scores were 3–4. There was high inter-faculty variability in scoring, and two-thirds of CPs scored 3.
2. Faculty feedback suggested adding a descriptor for each level of the GRS, to make it easier to understand and remember the different levels.
3. Based on the results, the scale was changed to 4 points, as no student scored 1 (see below). Recommendations to the trainee became more specific, integrating whether key differential diagnoses were used.
Revised global rating scale for diagnostic reasoning.

| Level | Condition of presentation | Level of the presenter | Feedback from the trainer to the presenter |
| --- | --- | --- | --- |
| Vague | Vague presentation due to insufficient or unordered information or poor expression of the contents. | Gathering or organization of information is not systematic. Practice in case presentation is needed. | Point out what is missing in the case presentation. Tell the trainee to have one-to-one practice. |
| Structured | Case presentation is structured with routine H&P information, but KDDs are still lacking. | Able to report the case information but unable to list all the KDDs by interpreting the patient’s problems. | Give positive feedback for complete information. Ask the presenter to think of any more KDDs; if there are none, recommend that the presenter narrow down the extent of the differential diagnoses. |
| Organized | Most KDDs are listed and organized with H&P information in the case presentation, but pertinent positive and negative S/S are insufficient. | During H&P, S/S relevant to the KDDs are not obtained. | Give positive feedback for the KDDs and corrective feedback on pertinent positive and negative S/S. |
| Pertinent | Pertinent positive and negative S/S relevant to the KDDs are covered. | Through H&P, the whole picture of the case and its KDDs is clearly described. | Give positive feedback for a good presentation. Ask the presenter to specify the lessons learned from the case. |

Abbreviations: H&P, history taking and physical examination; S/S, signs and symptoms; KDD, key differential diagnoses.
4. The author states that ‘validity was checked’ using Crossley’s 4 general principles of WBA, but it is not clear how this was done other than matching the points.
Key Conclusions
There is a very short discussion (primarily focusing on future directions), although part of the results section could be considered a discussion.
It is unclear whether there was really a link with underlying theory, or whether WBA principles were truly used for validation.
The author concludes “This model (VSOP) is the first WBA for diagnostic reasoning based on trainers listening to case presentations. Validity was checked using Crossley’s four general principles of WBA, but there are many issues for further study”.
Spare Keys – other take-home points for clinician educators
Despite methodological problems in the paper, there is much face validity in the revised scale and I could see using it in my teaching practice. So what does a CE do when ‘things make sense’ yet they are not adequately ‘proven’?
Use the term validation with caution!
Hints for writing papers: alignment of items across all sections is essential; and a clear question that is later discussed in depth, plus a strong conclusion, makes a better paper!
Onishi-sensei and the other faculty at IRCME.
Access KeyLIME podcast archives here
Check us out on iTunes!