#KEYLIMEPODCAST 356: The Curve Balls with Learning Curves

Today’s article looks at learning curves and how they are used in simulation-based interventions in Health Professions Education. After noting that studies using learning curves often omit important reporting details, the authors conducted a systematic review of reporting quality to identify areas for improvement.

Listen to the episode to hear their conclusions.


KeyLIME Session 356

Listen to the podcast


Howard et al., Learning Curves in Health Professions Education Simulation Research: A Systematic Review. Simul Healthc. 2021 Apr 1;16(2):128-135.


Jon Sherbino (@sherbino)


I’ll confess. Last decade (ok, a few years ago) it was hard for me to talk #MedEd without throwing in learning curves. The promise of CBME was going to be displayed via dashboards that showed individualized performance on the pathway to achieving competence. Much like mapping a chicane in Formula One, I anticipated that educators could similarly plot ability versus time, producing lovely learner S curves. Then I did a study (with Teresa Chan and Matt Mercuri), and my educational chicane whimpered to a relatively straight line. Sure, the lines told a story and demonstrated individualized performance. But gone was the more nuanced explanation of the acquisition of ability promised by a learning curve.

What is a learning curve? (From the authors:)
A typical learning curve includes:
• a y-intercept which represents the learners’ baseline skill;
• a usually nonlinear slope which represents the learning efficiency in terms of performance improvement with practice;
• an inflection point, where the slope flattens out, indicating the need for progressively greater effort to achieve continued learning gains (ie, “diminishing returns”); and finally,
• an asymptote which is the theoretical maximum performance possible within the learning system and which often represents expert levels of performance.
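The four features above can be made concrete with a small sketch. This is my own illustrative idealization, not code from the paper: a logistic (S-shaped) curve, where `baseline`, `asymptote`, `midpoint`, and `steepness` are assumed parameter names chosen for this example, and the y-intercept only approximately equals the baseline in this functional form.

```python
import math

def learning_curve(t, baseline=0.30, asymptote=0.95, midpoint=15, steepness=0.3):
    """Logistic (S-shaped) learning curve, one common idealization.

    baseline  -- approximate y-intercept: skill before any practice
    asymptote -- theoretical maximum performance within the learning system
    midpoint  -- trial number at the inflection point (diminishing returns beyond)
    steepness -- learning efficiency: performance gain per unit of practice
    """
    return baseline + (asymptote - baseline) / (1 + math.exp(-steepness * (t - midpoint)))

perf = [learning_curve(t) for t in range(41)]

print(f"y-intercept (t=0): {perf[0]:.2f}")      # ~baseline, prints 0.31
print(f"inflection (t=15): {perf[15]:.2f}")     # steepest slope, then flattening
print(f"late practice (t=40): {perf[40]:.2f}")  # approaching the asymptote, prints 0.95
```

Plotting `perf` against trial number gives exactly the “lovely learner S curve” described above: a flat start near baseline, a steep middle, and a plateau near the asymptote.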

So… I haven’t talked much about learning curves in the intervening years. To me, it’s time to revisit this previously big topic of #MedEd conversation. Enter Howard, Cook, Hatala and Pusic – giants in the world of learning analytics AND systematic reviews. They tackle learning curves in the context best suited for deliberate practice – simulation.


“…we systematically reviewed the use and reporting of learning curves in simulation-based interventions in HPE…”

Key Points on the Methods

The authors evaluated articles from a systematic search covering 1969–2011 (see Episode 44… yeah, I know, we’re going deep into the archives). Using the same search terms, a subsequent search through 2016 identified roughly 25% as many additional articles for comparison with the original data set. Seven common databases were searched, and two reviewers independently screened manuscripts for inclusion; interrater agreement was moderate to high (κ = 0.88 for the 2011 search, 0.57 for the 2016 search). Two reviewers then independently abstracted data using a previously published protocol of learning curve elements that was iteratively refined by the authors.

Key Outcomes

From 13,719 (10K + 2.8K) articles, 230 were included in the review. Nearly all (203) involved a psychomotor procedure, across a diversity of disciplines and spanning UME, PGME, CME, and other health professions.

Y Axis (Performance Variable – Achievement)
Only ~50% of articles reported the Y-axis scale, and 25% of these used a truncated range (which can exaggerate small differences in performance).

X Axis (Repetition – Effort)
Data points ranged from 3 to 360 (median = 20). 30% of included articles failed to report the spacing of data points (e.g., once a week vs. several times over 3 months), and 17% allowed unlimited practice; both elements impair determination of learning effort. 12% reported a forgetting curve (washout after the educational intervention).

Statistical Linking Function
22% used a simple linking function comparing only beginning and end points (e.g., a t test).
65% used a linking function across every data point but did not account for relationships within/between individuals (e.g., via multilevel modeling).

10% of studies reported a tabular (non-graphic) learning curve.
94% showed average curves, but only 50% reported statistical variance.
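To see why the choice of linking function matters, here is a toy illustration (my own, not the authors’ analysis) using synthetic scores from a single hypothetical learner generated by a negative-exponential curve. A pre/post comparison (the t-test style above) collapses the trajectory into one gain score, while fitting all data points — here via a deliberately simple grid search rather than multilevel modeling — recovers the learning rate itself.

```python
import math

# Synthetic per-trial scores from a known negative-exponential curve
# (illustrative values only, not data from the reviewed studies).
BASELINE, ASYMPTOTE, TRUE_RATE = 0.30, 0.95, 0.12
scores = [ASYMPTOTE - (ASYMPTOTE - BASELINE) * math.exp(-TRUE_RATE * t)
          for t in range(20)]

# "Simple" linking function: compare only beginning and end points.
pre_post_gain = scores[-1] - scores[0]

# Curve-style linking function: use every data point, here via a coarse
# grid search for the learning rate that minimizes squared error.
def sse(rate):
    return sum((s - (ASYMPTOTE - (ASYMPTOTE - BASELINE) * math.exp(-rate * t))) ** 2
               for t, s in enumerate(scores))

best_rate = min((r / 1000 for r in range(1, 501)), key=sse)

print(f"pre/post gain: {pre_post_gain:.2f}")       # one number, no shape information
print(f"recovered learning rate: {best_rate:.3f}")  # prints 0.120
```

With real data from multiple learners, the per-learner and group-level parameters would instead be estimated jointly (e.g., multilevel modeling, as the review notes); the grid search here is only to keep the sketch dependency-free.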

Boundary (Mastery/Remediation) Lines
Only one study reported a theoretical maximum performance (asymptote); 3% reported a remediation line and 23% a mastery line.


Key Conclusions

The authors conclude…
“Learning curves are a powerful conceptual and statistical basis for competency-based instructional designs… we found that studies often incompletely report the properties of learning curves and underutilize their desirable properties.”

Spare Keys – Other take home points for Clinician Educators

This systematic review is notable for the RAPID expansion of a completely new literature in HPE. There is a “meta” theme here worth exploring in future scholarship.

BEME guides are the “Cochrane reviews” of HPE. Started by Ron Harden and Ian Hart (shout out to a Canadian), these guides are structured with an initial registration followed by publication (a full-length version and a condensed version in Medical Teacher). The BEME Collaboration is overseen by an executive, which includes an author of the current paper, Michelle Daniel. Interestingly for HPE, BEME reviews are specifically positivist in the types of review methodologies endorsed/accepted. Currently there are 70 BEME Reviews, with an additional 22 registered and in process. The review we discussed here is unusual in that it is an update of a previous review, addressing one of the gaps in the BEME process.

Access KeyLIME podcast archives here

The views and opinions expressed in this post and podcast episode are those of the host(s) and do not necessarily reflect the official policy or position of The Royal College of Physicians and Surgeons of Canada. For more details on our site disclaimers, please see our ‘About’ page.