The Academic Life in Emergency Medicine (ALiEM) Faculty Incubator was hard at work during the pandemic to bring you the fifth volume of the Education Theory Made Practical series. This series strives to make theory accessible to educators by distilling the background and key literature of each theory and grounding them in practical education scenarios.
The Faculty Incubator is a year-long professional development course for medical educators centered around a virtual community of practice (a concept we have all come to appreciate during quarantine). Teams of 2-3 participants from around the world authored primers on education theories, and different teams offered a first round of peer review on each post. As in prior years, the primers will be serialized on the ICE Blog for review and comment. You can learn more here.
They have published three e-book compendia of this blog series (Volume 1, Volume 2, Volume 3), and you can find the Volume 4 posts here (the e-book is in progress!). As with previous iterations, final versions of each primer will be compiled into a free eBook to be shared with the health professions education community.
Your Mission if you Choose to Accept it:
We would like to invite the ICE Blog community to peer review each post. Your comments will be used to refine each primer prior to publication in the final eBook. No suggestion is too big or small – we want to know what has been missed, misrepresented, or misconstrued. Everything from grammatical corrections to new scenarios for practical application or new citations is welcome. (Note: The blog posts themselves will remain unchanged.)
This is the seventh post of Volume 5! You can find the previous posts here: Banking Theory; Constructive Alignment; IDEO’s Design Thinking Framework; R2C2 Feedback Model; Feminist Theory; and Sociomaterialism.
Logic Model of Program Evaluation
Authors: Kathryn Fisher MD MS (@KatieFisherEM); Jeanne Macleod; Sarah Kennedy
EDITOR: Benjamin Schnapp, MD MS (@schnappadap)
Main Authors or Originators: Edward A. Suchman
Other important authors or works: Joseph S. Wholey, J.A. McLaughlin, and G.B. Jordan
Part 1: The Hook
Sarah has just finished her ultrasound fellowship and is working in the Emergency Department at her new hospital. She has discovered that many of her new partners are not familiar with or comfortable using bedside ultrasound in clinical practice. When she inquires about this, many of her coworkers mention that they trained prior to 2006, when ultrasound was incorporated into the required residency curriculum.
Sarah would like to design a training program for her colleagues to help them become more comfortable with performing and interpreting ultrasound in clinical practice. Initially, she would like to develop a curriculum to teach her colleagues how to perform and interpret basic bedside ultrasound studies and then expand to other imaging applications. What activities can be planned and what outcomes could be measured to ensure success of her program?
Part 2: The Meat
The logic model is a conceptual tool that can be used for program planning, implementation, and evaluation. This tool is designed to examine a program’s resources, planned activities, and proposed changes or goals in an organized fashion. It describes the linkages between resources, activities, outputs, outcomes, and their impact on the program as a whole. It provides a model of how a program’s component parts might function together. 
The logic model is represented visually in four main sequential components: inputs, activities, outputs, and outcomes. These comprise two main domains: planned work and intended outcomes. Planned work includes inputs and activities while intended outcomes reflect the outputs and outcomes. Outcomes can be measured as immediate, intermediate, and long-term. Some sources also suggest a fifth component, a measurement of impact, at the end of the model in lieu of, or in addition to, long-term outcomes.  In business applications, many logic models also show external influences as arrows into each of the components to show how each of these external factors affects each of the steps of the model.
This model can be used when designing and planning a new program, or when evaluating and restructuring an existing one. It provides a structured framework to systematically evaluate program components, facilitates communication with team members, and helps guide the acquisition of the information necessary for decision making. It is particularly useful in determining evaluation and management strategies for medical education programs.
Figure 1: Logic model components, including inputs, activities, outputs, and outcomes. From AMEE 67 
The early foundations for the logic model were first set out in a 1967 book by Edward A. Suchman about evaluative research. The concept has also appeared in other structures and under different names, including “Chains of Reasoning,” “Theory of Action,” and “Performance Framework.” Bickman (1987) introduced logic models as a tool for program evaluation that emphasized program theory, the idea that interventions contribute to a downstream chain of results and outcomes.
The first publication using the term “logic model” was by Joseph S. Wholey (1983). The earliest developers of the logic model came from the business, public, and international non-profit sectors. Logic models did not become widely used until United Way published Measuring Program Outcomes in 1996. This publication was important in establishing the terms and structure used today for developing logic models.
The W.K. Kellogg Foundation published a widely available Logic Model Development Guide, which has been used for public policy and healthcare planning. Over the last decade, logic models have also been used for medical education program evaluation.
The logic model contains four key components. First, inputs can be seen as the resources necessary to operate the program.  Inputs include resources dedicated to or consumed by the program and can include financial resources and funding, protected time for faculty or staff, expertise of faculty and staff, administrative support, and physical resources such as facilities and equipment. For example, a department running an EKG curriculum for residents could list talented EM and cardiology faculty, experienced ED techs, and reserved time in the departmental conference room among its inputs.
Inputs are then used to operate planned activities. Activities represent what the program does with the inputs to fulfill its mission. Activities may include any combination of needs assessments, teaching, curriculum design, planning of sessions, faculty development, development of systems, or performance evaluations. These activities are dependent on the program’s mission. For the EKG curriculum, the activities might include didactic sessions reading EKGs and time in the department placing EKG leads and interpreting them in real-time.
If the planned activities are accomplished, then the program will create outputs.  The outputs describe the direct, measurable products of the activities. This can include demographics such as number of participants in a program’s activities, number who completed a certain curriculum, or program metrics such as number of programs, time in existence, or number of graduates of a program. In our EKG example, the outputs might be reaching 24 residents, or conducting 12 sessions in a year.
Outcomes look at the benefits for participants during and after program activities. Outcomes can be measured immediately, as well as in the intermediate and long-term time frames. These can include increased knowledge or skill, satisfaction with quality of activities, or improvement in course evaluations. Depending on the program, clinical, teaching, or academic success may be appropriate outcomes, including awards, or productivity. For the EKG curriculum, outcomes might include residents attaining a higher score on a test of EKG proficiency, or self-reported improved confidence with EKG interpretation.
Finally, if the benefits of the program are achieved, then the activities implemented as part of the program will have an impact beyond the program itself, on an organization or system. This can relate to effects on the community, environment, or infrastructure. For the EKG example, this might translate to improved rates of appropriate recognition of ST-elevation myocardial infarctions, or improved functional outcomes for patients.
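For readers who think in code, the four sequential components can be sketched as a simple data structure. The class below is purely illustrative (it is not part of any published logic model tool, and all names are our own) and encodes the EKG curriculum example described above:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """Four sequential components of a logic model:
    inputs -> activities -> outputs -> outcomes."""
    inputs: list[str] = field(default_factory=list)      # resources dedicated to or consumed by the program
    activities: list[str] = field(default_factory=list)  # what the program does with the inputs
    outputs: list[str] = field(default_factory=list)     # direct, measurable products of the activities
    outcomes: list[str] = field(default_factory=list)    # benefits for participants during and after the program

    def summary(self) -> str:
        """Render the planned-work -> intended-results chain."""
        return " -> ".join(
            f"{name}: {len(items)} item(s)"
            for name, items in [
                ("inputs", self.inputs),
                ("activities", self.activities),
                ("outputs", self.outputs),
                ("outcomes", self.outcomes),
            ]
        )

# The EKG curriculum from the text, mapped onto the four components
ekg_curriculum = LogicModel(
    inputs=["EM and cardiology faculty", "experienced ED techs", "conference room time"],
    activities=["didactic EKG-reading sessions", "real-time EKG interpretation in the ED"],
    outputs=["24 residents reached", "12 sessions per year"],
    outcomes=["higher EKG proficiency scores", "improved self-reported confidence"],
)
print(ekg_curriculum.summary())
```

The point of the sketch is the sequential linkage: each component feeds the next, mirroring the left-to-right flow of Figure 1.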
Modern takes or advances
The logic model is a tool that can facilitate communication and idea sharing by identifying the components critical to the successful execution of a program. It can help reveal implausible linkages among program elements or redundant pieces. Benefits of the logic model include building a common understanding of resources, their allocation, and expected results. In the past few decades, the logic model has found numerous applications in medical education and healthcare.
Program managers use the logic model to argue how or why the program is meeting a specific customer need, whether that customer is a private-sector client or a medical learner. The logic model is also used to facilitate thinking through faculty development and other large-scale education initiatives. It has been adopted on a larger scale for healthcare systems innovations, and in the public health workforce to support communication between divisions and ongoing program planning and evaluation. Portfolio evaluation, or the evaluation of multiple projects with a common purpose, also benefits from the use of logic modeling as a visual tool.
Logic models have been used to create consensus among leaders or with stakeholders, as all parties can examine both the required inputs and the desired outcomes and how they will be measured. This approach can also be applied to medical education in the setting of institutional self-review. The World Federation for Medical Education applied a logic model to further define and evaluate each of its accreditation standards. In this case, the logic model was used both for standard setting and consensus building around those standards, and as an evaluative tool.
Other examples of where this theory might apply in both the classroom & clinical setting
Van Melle et al proposed the use of the logic model for program evaluation for competency-based medical education (CBME). [13,14,15] The authors provide an outline of how to use a logic model to focus CBME program evaluation in a residency program, how to make program evaluation scholarly, and how to build capacity for program evaluation. In this case, the desired proximal outcome is enhanced resident readiness for independent practice and the desired distal outcome is improved patient care. 
The logic model has also been utilized by medical education programs as a tool for planning initiatives. Campbell et al described the model as a tool to guide the creation of a Fellows’ College, or a centralized educational program for all pediatric subspecialty fellows at a single institution.  They describe the inputs such as leadership, funding, individual subspecialty program structure, and existing curricula. They discuss activities such as an expanded curriculum, a mentorship program, scholarship program, and networking. The outputs were the number of fellows and clinical educators engaged in the Fellows’ College. Finally, their outcomes measure success of the intervention and subsequent program modifications.
Annotated Bibliography of Key Papers
McLaughlin, J.A. and G.B. Jordan. Logic models: a tool for telling your program's performance story. Evaluation and Program Planning, 1999. 22(1): p. 65-72.
This paper was one of the first to outline in detail the practical applications of the logic model. It was intended to explain to program managers in the public and private sectors how to measure and evaluate a business program and how to use that knowledge to improve a program’s effectiveness. By utilizing clearly outlined figures and tables, these authors provided a detailed explanation on how to build a logic model for business managers.
Logic Model Development Guide, ed. W.K. Kellogg Foundation. 2004.
The Kellogg Foundation document outlined important definitions of the logic model as a method of program evaluation, defining it as “a picture of how your organization does its work – the theory and assumptions underlying the program. A program logic model links outcomes (both short and long term) with program activities/processes and the theoretical assumptions/principles of the program.”
Otto, A.K., K. Novielli, and P.S. Morahan. Implementing the logic model for measuring the value of faculty affairs activities. Acad Med, 2006. 81(3): p. 280-5.
Otto et al. (2006) published the first article in Academic Medicine suggesting the use of logic models in medical education. They proposed logic models for measuring how faculty development offices contribute to the recruitment, retention, and development of teaching faculty, and gave a visual example with a comprehensive list of components in each category. Use of the logic model is suggested to facilitate thinking through the entire faculty development process.
Frye, A.W. and P.A. Hemmer. Program evaluation models and related theories: AMEE guide no. 67. Med Teach, 2012. 34(5): p. e288-99.
This paper takes a broader view of program evaluation and is useful because it compares several different models. In addition to the logic model, it discusses the experimental/quasi-experimental model, the Context/Input/Process/Product (CIPP) model, and the Kirkpatrick Model, delving into the strengths and weaknesses of each and how they can be applied to medical education. The authors provide a good description and analysis of each of the four essential elements of the logic model, along with medical education-centered examples of each.
Limitations
The logic model’s main limitation is that it may lead to over-simplification. Medical education programs are often complex and don’t always follow a linear path, and outcomes may not match what was initially predicted. To overcome this limitation, the logic model must be well designed. Its creators should have a thorough understanding of how change works in the educational program being evaluated. Both intended and unintended outcomes should be anticipated, and feedback loops should be incorporated into the model to address these complexities.
Complexity can be further built into the logic model with the addition of multiple tiers, accounting for the various layers of an intervention. Mills et al attempt to address this problem in their 2019 article, proposing a typology that categorizes logic models into four types, ranging from simple (type 1) to most complex (type 4). Type 4 logic models attempt to provide more insight into the interactions between interventions and context (the social, political, or cultural factors in the environment where the program exists).
The logic model design needs to be flexible and dynamic to integrate unexpected complexities. Educators and researchers need to be prepared to revise the model as the program is being implemented. Therefore, the development and revision of a logic model can be a time-consuming process.
Finding the right balance between precision (which may require many data points) and clarity (emphasizing concise, easy to understand points) in your logic model can be a significant challenge, especially when getting started.
Part 3: The Denouement
Sarah uses the logic model to plan a curriculum for her coworkers to acquire core ultrasound skills in a stepwise fashion. She creates a pre-survey and pre-test to assess baseline attitudes and knowledge, then designs educational sessions to teach her coworkers. A post-survey and post-test evaluate how behaviors, knowledge, and attitudes have changed over the course of a year, along with how the number of ultrasounds performed in the department during clinical practice has changed. Sarah sets an initial goal of certifying 75% of her colleagues in core ultrasound applications, and then plans to revisit her logic model and expand her program to include other imaging applications. The logic model she produced is here.
Don’t miss the eighth post in the series, coming out Tuesday, September 21, 2021!
PLEASE ADD YOUR PEER REVIEW IN THE COMMENTS SECTION BELOW
1. McLaughlin JA and Jordan GB. Logic models: a tool for telling your program's performance story. Evaluation and Program Planning, 1999. 22(1): p. 65-72.
2. Developing a Logic Model or Theory of Change. [cited 2020 May 22]; Available from: https://ctb.ku.edu/en/table-of-contents/overview/models-for-community-health-and-development/logic-model-development/main.
3. Otto AK, K Novielli, and PS Morahan. Implementing the logic model for measuring the value of faculty affairs activities. Acad Med, 2006. 81(3): p. 280-5.
4. Frye AW and PA Hemmer, Program evaluation models and related theories: AMEE guide no. 67. Med Teach, 2012. 34(5): p. e288-99.
5. Suchman EA. Evaluative Research: Principles and Practice in Public Service and Social Action Programs. 1967, New York: Russell Sage Foundation.
6. Bickman L. The functions of program theory. New Directions for Program Evaluation, 1987. 33: p. 5-18.
7. Wholey JS. Evaluation and Effective Public Management. 1983, Little, Brown.
8. Measuring Program Outcomes: A Practical Approach, ed. United Way of America. 1996.
9. Logic Model Development Guide, ed. W.K. Kellogg Foundation. 2004.
10. Glynn MK et al. Strategic Development of the Public Health Workforce: A Unified Logic Model for a Multifaceted Program at the Centers for Disease Control and Prevention. Journal of Public Health Management and Practice, 2019.
11. Wu H et al. Using logic model and visualization to conduct portfolio evaluation. Evaluation and Program Planning, 2019. 74: p. 69-75.
12. Tackett S, J Grant, and K Mmari. Designing an evaluation framework for WFME basic standards for medical education. Med Teach, 2016. 38(3): p. 291-6.
13. Railer J et al., Using outcome harvesting: Assessing the efficacy of CBME implementation. J Eval Clin Pract, 2020.
14. Van Melle E et al. A Core Components Framework for Evaluating Implementation of Competency-Based Medical Education Programs. Acad Med. 2019. 94(7): p. 1002-1009.
15. Van Melle E. Using a Logic Model to Assist in the Planning, Implementation, and Evaluation of Educational Programs. Acad Med, 2016. 91(10): p. 1464.
16. Van Melle E. Program Evaluation for CBME: Are we making a difference? in The International Conference on Residency Education. 2016. Niagara Falls, Ontario, Canada.
17. Campbell JR et al. Building Bridges Between Silos: An Outcomes-Logic Model for a Multidisciplinary, Subspecialty Fellowship Education Program. Acad Pediatr, 2015. 15(6): p. 584-7.
The views and opinions expressed in this post are those of the author(s) and do not necessarily reflect the official policy or position of The Royal College of Physicians and Surgeons of Canada. For more details on our site disclaimers, please see our ‘About’ page