Will the Future of #MedEd Be Learning From Machines or Teaching an Artificial Intelligence?

By Daniel Cabrera

(If you’re interested in complex adaptive systems and/or stigmergy, or if you want the background needed to make sense of this post, check out Part 1 here.)

Within medical education there is a (distant) move from an individual competency model to a collective competence construct, where the outcomes of education and healthcare are defined not by the isolated performance of an individual but by the complex interconnections of multiple agents. At the same time, we need to start considering how to integrate collective clinical competence with dataism and artificial intelligence (AI).

The idea of collective competence, developed by Lorelei Lingard and Brian Hodges, mirrors the concepts of stigmergy and the rhizomatic organization of networks. Traditionally we have focused on hyper-specialization, data reductionism and individual performance. However, a more decentralized architecture calls for multipotentiality, contextuality, interconnection, data augmentation and network/community performance. As Lingard proposes, this can mainly be achieved through technological affordances and constraints.

We have those technological affordances and constraints now. We are witnessing the arrival of soft artificial intelligence (AI) in our lives, from preemptive recommendations on what we want to buy to predictions of who is pregnant. This type of AI is becoming ubiquitous in clinical practice, particularly in the domains of pharmacotherapeutics and decision support. Currently, decision support is nothing more than a cognitive crutch, but it is becoming increasingly intrusive in all aspects of clinical care. As clinicians and educators, we have not given enough attention to the question of how we interact with soft AI. (e.g., if pharmacy decision support becomes universal, why should new learners know anything about it? Can we shorten training? Can we just focus on diagnosis and decision-making?) I feel that this particular train has already left the station: many of our learners are using these tools without understanding the key concepts behind them.

Although soft AI is raising a lot of questions, the tectonic change will come with the advent of strong AI. This event, the emergence of an efficient, supra-human intelligence capable of massive data management, will redefine what we do. The day a strong AI tells us that our diagnosis is wrong and our treatment recommendations are faulty is not far in the future; I’m certain it will happen during my lifetime.


We need to start thinking about and planning our roles for a future where AI will make most of the important decisions with little input from humans. Will the training of future doctors be restricted to learning empathy? How do we teach students to learn from a digital intelligence? How do we teach digital beings? Do we actually have to do it? Is medical education going to become nothing more than learning how to interact with AIs?

Many experts think that data can’t self-organize (following Claude Shannon’s concept of entropy). However, strong AI will almost certainly behave in a way that assures instrumental goal achievement, self-preservation and resource acquisition. A strong AI will relentlessly pursue the objective it is programmed for, even if it is not aligned with human priorities. We have to be very careful in deciding and programming what those goals are.

We are entering the age of dataism, where authority and truth emanate not from human self-determination but from data analysis. If we don’t pay attention to the changes around us, we are threatened with becoming nothing more than biological data-entry agents for a supra-human mind; we will become the machine of the Ghost in the Machine problem. The ultimate challenge is to create a framework for strong AI that guarantees that the prime directive of the system is to achieve what is good for the patient, what is good for the patient’s life, and what is good according to the patient’s self-determination and value structure, not necessarily what is good according to the AI’s optimal solution. As educators, we need to start thinking about how to teach these digital beings what it is to be human, and how medicine is about helping, comforting and accompanying our patients, not only optimizing diagnosis and treatment. Finally, we need to start thinking about how we are going to learn from non-human teachers.

References and further reading

———————————————————————BONUS TRACK

“Answer” by Fredric Brown (1954)

Dwar Ev ceremoniously soldered the final connection with gold. The eyes of a dozen television cameras watched him and the subether bore throughout the universe a dozen pictures of what he was doing.

He straightened and nodded to Dwar Reyn, then moved to a position beside the switch that would complete the contact when he threw it. The switch that would connect, all at once, all of the monster computing machines of all the populated planets in the universe — ninety-six billion planets — into the supercircuit that would connect them all into one supercalculator, one cybernetics machine that would combine all the knowledge of all the galaxies.

Dwar Reyn spoke briefly to the watching and listening trillions. Then after a moment’s silence he said, “Now, Dwar Ev.”

Dwar Ev threw the switch. There was a mighty hum, the surge of power from ninety-six billion planets. Lights flashed and quieted along the miles-long panel.

Dwar Ev stepped back and drew a deep breath. “The honor of asking the first question is yours, Dwar Reyn.”

“Thank you,” said Dwar Reyn. “It shall be a question which no single cybernetics machine has been able to answer.”

He turned to face the machine. “Is there a God?”

The mighty voice answered without hesitation, without the clicking of a single relay.

“Yes, now there is a God.”

Sudden fear flashed on the face of Dwar Ev. He leaped to grab the switch.

A bolt of lightning from the cloudless sky struck him down and fused the switch shut.

Image 1 from Pixabay used under Creative Commons License 1.0

Image 2 courtesy of Dalle Molle Institute for Artificial Intelligence Research, via Wikimedia Commons

#KeyLIMEpodcast 122: Using Words to Assess Learners in #Meded

The Key Literature In Medical Education podcast this week reviews a great commentary from some thought leaders in medical education. The topic is the use of qualitative assessment data and the framing of validity arguments with this type of data. If you are a regular listener, you know that modern validity arguments are one of my soapboxes. (Sorry… I’ll keep my diatribe to a minimum.)

So, click your way to the actual podcast or read the abstract below for more details.

– Jonathan

————————————————————————–

KeyLIME Session 122 – Article under review:

Listen to the podcast

Download the abstract: keylime-episode-122


Cook DA, Kuper A, Hatala R, Ginsburg S. When Assessment Data Are Words: Validity Evidence for Qualitative Educational Assessments. Academic Medicine. 2016 Apr 5. [Epub ahead of print]

Reviewer: Jonathan Sherbino (@sherbino)

Background

Which comment is more formative for a learner? For a program director?

  • “You are a 3.4 out of 5.”
  • “You used technical terms without explanation; however, your nonverbal skills engaged the patient to make them feel comfortable.”

Both comments are assessments about patient–physician communication.

Quantitative data has long reigned supreme in medical education.  The translation of a complex judgment into a standardized representation of that judgment allows for ease of aggregation of many judgments and statistical manipulation of the data to determine trends about the learner and the raters.  While statistics can seem magical, there are lots of issues around the “truth” behind these numbers.  (See KeyLIME episodes 49, 59, 78, 86… and 63 for a counterpoint.)

As qualitative research methodologies from the social sciences have influenced medical education, the adoption of narrative within assessment programs has also increased.  (See KeyLIME 91 for a partial description of the field notes instrument used by family medicine training programs in Canada.) So, how can the rigor of qualitative research inform the increasingly complex needs of programmatic assessment in this new era of medical education?  That’s the issue this paper tackles.

Purpose

“The purpose of this article is to:

  • articulate the role of qualitative assessment as part of a comprehensive program of assessment,
  • translate the concept and language of validity …, and
  • elaborate on principles … that relate the validity argument to both quantitative and qualitative assessment”

Type of Paper

Commentary

Key Points on Methods

Best practices from qualitative research are aligned with both Messick’s (source of evidence to support the construct being assessed) and Kane’s (process and assumptions in collecting evidence, i.e. “the argument”) validity frameworks.  This is a theory paper (appropriately) without a methods section.

Key Outcomes

Qualitative research is considered rigorous if it demonstrates:

  • a theoretical frame
  • an explicit question
  • reflexivity (influence of assessors’/analysts’ background and their relationship with learners)
  • responsiveness in data collection
  • purposive sampling
  • thick description
  • triangulation of data sources
  • transparent, defensible analysis
  • transferability
  • relevance

Kane’s Inferences Framework & Qualitative Assessment

For each domain in Kane’s framework, selected evidence of rigor:

Scoring (how observations become a narrative)

  • Questions / prompts stimulate rich responses
  • Observer credibility
  • Varied reflexivity of assessors

Generalization (aggregated data accurately reflect performance when observed)

  • Analysts are credible
  • “Auditable” analysis
  • Iterative/responsive/meaningful analysis (e.g. seeks counterexamples)
  • Triangulation of data
  • Sufficiency of data

Extrapolation (generalization of the judgment extends to new “real-life” contexts)

  • Authentic / real-life data (and process for data collection)
  • Member check (stakeholders agree with the final interpretation)
  • Analysis consistent with other external data

Implication (acting on the judgment leads to meaningful decisions and minimal negative downstream effects)

  • Interpretation leads to appropriate advancement / remediation
  • Unintended consequences of assessment are favourable

For a superior review of evidence of rigor in qualitative assessments check out Tables 1 to 3 in the manuscript.

The authors acknowledge that their theory paper is limited by hypothetical examples of evidence of rigor, not supported by a systematic search of the literature. However, this is the first description of a framework to defend the rigor of qualitative assessments, so the limitation is overstated.

Finally, while narrative provides rich data, operational issues preclude its appropriate use in “all” scenarios. Programmatic assessment is complex and benefits from an integration of quantitative and qualitative data, an argument the authors make.

Key Conclusions

The authors conclude…

 “We vigorously oppose the segregation of quantitative and qualitative assessment methods. Rather, we advocate a “methods-neutral” approach, in which a clearly stated purpose determines the nature of and approach to data collection and analysis … we urge the use of a contemporary validity framework when evaluating any assessment, quantitative or qualitative… What matters most in validation is that evidence is strategically sought to inform a coherent argument that evaluates the defensibility of intended decisions.”

Spare Keys – other take home points for clinician educator

Language is complex and nuanced. A shout out to the authors for their constructivist approach that suggests that “reality” is a social phenomenon, interpreted through the shared meaning of words, words that evolve in definition and hold different interpretations between people.

Access KeyLIME podcast archives here

We are nothing but insects. Organizing, teaching, coexisting and learning from data. Part 1.

By: Daniel Cabrera  (@CabreraERDR)


The following post is a mutatis mutandis version of my talk at the 2016 ICE Summit: Niagara

Since the emergence of modern natural philosophy, the structures governing information have been based on a foundational myth in which a central authority or force defines the goals, paradigms, structures, distribution channels and beneficiaries of knowledge and its wealth. This creates a very concentrated, pyramidal constitution, where creation and management are restricted to a few societal groups ruling over the correct paradigms, creation methods and channels for dissemination. In other words, a centralized, hierarchical, authoritarian system.

In contrast to the centralized model designed by humans using creational myths, Complex Adaptive Systems (CAS) are being recognized as the model explaining how nature, biological beings and data are organized. CASs are defined by the ability to self-organize, to adapt to changes in internal and external conditions, and to provide a survival advantage to the organism (community) as a whole. Many things that surround us are CASs: social media networks, cities, wolf packs, swarms of insects and memes.

When knowledge and ideas are organized as CASs, they are based on the concept of stigmergy, where cues created by individuals (nodes) influence the behavior of other members of the community (network), changing the overall output of the group. A classic example is ants building a colony. This stigmergic collaboration requires communication, social negotiation and a creative output. Humans have associated this way for thousands of years, from hunting mammoths to building cathedrals. But as information became more complex, creative outputs became more difficult to socially negotiate and stigmergy faded into the background.
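To make the stigmergy mechanism concrete, here is a minimal toy simulation (my own sketch, not from the original talk; the path names, deposit and evaporation rates are arbitrary assumptions). No agent communicates with or directs any other; each one only reads and modifies a shared environment, yet the group reliably converges on a single path, the way an ant trail forms.

```python
# Toy stigmergy model: agents coordinate only through cues ("pheromone")
# left in a shared environment, never through direct communication.
import random

pheromone = {"path_a": 1.0, "path_b": 1.0}  # start with no preference
EVAPORATION = 0.02  # old cues fade over time
DEPOSIT = 0.5       # cue strength each agent leaves behind

def step():
    total = pheromone["path_a"] + pheromone["path_b"]
    # each agent reads the environment, not the other agents
    choice = "path_a" if random.random() < pheromone["path_a"] / total else "path_b"
    pheromone[choice] += DEPOSIT           # leave a cue for later agents
    for path in pheromone:                 # evaporation: the network "forgets"
        pheromone[path] *= 1 - EVAPORATION
    return choice

for _ in range(2000):
    step()

print(pheromone)  # one path now dominates, though nobody was in charge
```

Run it a few times: which path wins cannot be predicted from any single agent, which is exactly the rhizomatic property described next.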

Gilles Deleuze and Félix Guattari rediscovered and described the concept of rhizomatic organization (a form of stigmergic collaborative network), where the network is non-hierarchical, self-governed, distributed, maximally connected, multi-domain and semiotic, and where behavior and outputs cannot be predicted from the characteristics of the nodes, as the nodes change when they communicate with each other. We have written about this before on this blog.

For centuries, and even after the advent of the Flexnerian era, medical education has been based on the centralized, hierarchical and authoritarian paradigm of information and knowledge management. Despite quantum leaps in recent decades, such as competency-based education, the overall framework remains founded on a stratified model, where some members of the group are directors, others are teachers and others are students, with a unidirectional flow of information.

Our world is changing rapidly in the way we manage data and knowledge. For most practical purposes the average individual now has access to an almost incomprehensible amount of information, and this includes medical science and education. Users of the information, in this case our learners, want to turn it into knowledge without necessarily having a preceptor telling them what is right and what is wrong. What learners want is a community to give contextual meaning to the information, in order to create their own personal learning networks and educational artifacts. This partially explains the eruption and success of the Free Open Access Medical Education movement.

Centralized, authoritarian and hierarchical structures are by definition inefficient and non-resilient, as they can’t manage problems with unbounded data and are unable to react nimbly to changes in conditions. After hundreds of years, the way we teach medicine remains bound to these structures. On the other side, CASs are a core part of our lives: the way we share news with friends and family, shop for items on the internet, how the traffic lights are organized on our commute, and how our insurance premiums are calculated. This is the time to move medical education to a new social constructivist paradigm based on CASs, rhizomatics and open knowledge. This new construct is predicated on an engaged community, robust knowledge exchange and self-governance, where collaboration is encouraged and facilitated, curators are enablers but not authorities, and the system is controlled by multiple iterations of social negotiation, as in an evolutionary algorithm. David Cormier describes this best as “the community is the curriculum.”

(Stay tuned for Part 2 next Tuesday, when artificial intelligence makes an appearance as a process to facilitate non-hierarchical learning in medicine.)

References and further reading

Image Bernard Goldbach via flickr under CC BY 2.0

#KeyLIMEpodcast 121: ACGME Accreditation Reveals Key Themes for #MedEd

The Key Literature in Medical Education podcast tackles accreditation this week. It’s an important topic because the accreditation literature is quite limited. This paper is a synthesis of a massive undertaking by the Accreditation Council for Graduate Medical Education (aka the American organization responsible for >10k residency programs and 130k residents… yep… take a look at those numbers!)

Accreditation is undergoing an important (and long overdue) transition from a process orientation (e.g. how many teaching faculty do you have) to an outcome orientation (e.g. demonstrate that your residents are learning about ‘X content’ relevant to their discipline).  This report is long, but the podcast identifies some key themes relevant to all medical education jurisdictions.  While you might debate some of the methods (if you listen to the podcast, you’ll hear about my pet peeves) there are some really important themes that you may want to consider addressing in the curricula you oversee.

So… download the podcast and get the abstract below.

– Jonathan

————————————————————————–

KeyLIME Session 121 – Article under review:

Listen to the podcast

Download the abstract: keylime-episode-121


Wagner R, Koh NJ, Patow C, Newton R, Casey BR, Weiss KB, on behalf of the CLER Program. Detailed Findings from the CLER National Report of Findings 2016. Journal of Graduate Medical Education. 2016 May;8(2 Suppl 1):35-54.

Reviewer: Jason Frank (@drjfrank)

Background

Accreditation, the enterprise and process to judge and enhance the quality of an educational program via comparison to third-party standards, is undergoing a quiet revolution. Accreditation is evolving away from an overwhelming emphasis on process measures to also focus on other important aspects of health professions training, such as program outcomes and learning environment. However, accreditation reform suffers from a few challenges. Two I will mention here: 1) a lack of bold innovation, and 2) few published papers to build on. Enter the ACGME’s CLER (Clinical Learning Environment Review) initiative.

Purpose

The authors of this paper describe the ACGME’s CLER initiative and the first national report on the patterns found in 6 areas in US residency clinical learning environments, namely:

  1. Patient safety
  2. Health care quality
  3. Care transitions (aka handovers)
  4. Supervision
  5. Fatigue management & duty hours
  6. Professionalism

Type of Paper

Program Evaluation

Key Points on Methods

The ACGME (Accreditation Council for Graduate Medical Education) is the accrediting body for residency education (GME aka PGME) in the US. CLER emerged as part of an evolving package of reforms intended to enhance American residency training programs and shift the accreditation emphasis to outcomes (and away from process measures).

The ACGME moved to mandatory reporting of resident progress on competency-based milestones a few years ago and at the same time decreased the number of on-site surveys of programs. They also added a new type of survey of institutions focused on features of the clinical learning environment, and CLER was born.

This report was generated from the aggregate findings of reviews of 297 meded institutions overseeing 8,878 residency programs (3 to 148 per site) between 2012 and 2015. This covered 111,482 trainees (range 8 to 2,216; median 241).

Survey teams used an accreditation technique sometimes called a tracer, in which surveyors interview groups and then go on walking rounds to seek validity evidence for the patterns that were suggested. Multiple lines of evidence are combined to provide a greater picture. CLER teams interacted with a wide variety of officials and professionals, including 1,000 executives, 8,755 residents, 7,730 faculty and 5,599 program directors, as well as nurses, pharmacists, social workers, etc. Data collection was via discussions, surveys, interviews and anonymous audience response systems. Quantitative scores were compared across groups using simple stats.

Key Outcomes

Looking at the patterns from these 297 institutions, here are some highlights the authors found:

  1. Patient Safety: large variations in all aspects of patient safety; 96.8% of residents reported having some patient safety education; there was usually a method for reporting incidents; 95.5% of trainees reported a safe environment for reporting; but fewer than 20% had ever filed a report themselves;
  2. Quality: about three quarters of interviewed populations reported any knowledge of QI priorities; few trainees were familiar with even basic QI terminology (e.g. PDSA); about three quarters of trainees said they did a QI project; only about half of participants reported knowing about the priorities to improve health care disparities;
  3. Handovers: 82% of trainees reported that handovers were a priority area for improvement; ~84% reported using some kind of standardized process for inpatients, and 90% for end-of-shift handovers;
  4. Supervision: more than 90% of trainees reported feeling confident in their scope of activity without direct supervision; 47% of PDs reported managing issues related to supervision and patient safety;
  5. Fatigue & Duty Hours: 95.5% of trainees reported receiving education on fatigue management (mainly in the first week of orientation), while only 67% of faculty reported the same; 8% of PDs had discussed patient safety incidents related to fatigue; faculty expressed concern about a “shiftwork mentality” after duty hour reforms;
  6. Professionalism: 66.4% of executives reported incidents relating to professionalism; 92.8% of residents reported some education related to professionalism; 16% of residents felt they had been asked to compromise their integrity by an authority.

These findings reflect a program evaluation methodology, and there are a number of threats to validity.

Key Conclusions

The authors conclude that the CLER visits have provided rich data on 6 important aspects of the learning environment in the US that can be used by system and institutional leaders and others to act.

Spare Keys – other take home points for clinician educator

1. This paper is in a rare category of meded papers: data from an accreditation study. We need more to inform accreditation practices and measures of educational outcomes.

2. We also need data on clinical learning environments. There is a lot of rich material here to inspire future interventions by clinician educators.

3. As we’ve said before, JGME is a great new meded journal. Check it out.

Shout out

Kudos to Tom Nasca and his team at ACGME for committing to innovations and sharing results with the community.

Access KeyLIME podcast archives here

The Flipped Ward Round

(From the E-i-C: For a quasi-related topic on making efficient use of teaching time, check out the Flipped Classroom here)

By: Anthony Lewellyn

I’d like to introduce to the world a prioritization technique or tip that I have used for some time. I call it the “Flipped Ward Round.”

As a psychiatrist predominantly in administrative or educational roles, my clinical time has often been limited. Over the years I have found it particularly helpful to briefly fill in for my colleagues during their periods of leave, rather than carrying a regular clinical load myself. This has been highly popular with my colleagues, with the added benefit of giving me greater exposure to a range of services.

In doing these intra-service locums I inherit established processes for the review of patients, whether a patient list or a ward round. In general these are very static, linear processes (i.e., the ward round discussion starts with Mr Jones in Bed 1, then Mrs Smith in Bed 2, and so on down to Ms Brown in Bed 24). Readers familiar with the pitfalls of meetings where no time is allotted to agenda items will recognize the same problem on ward rounds: an overgenerous discussion of the cases at the top of the list and inadequate time for the patients at the bottom. This can of course lead to issues not being properly addressed for those patients, errors of omission and unnecessarily lengthy stays.

So, my approach to this problem is simple. As the “intra-service locum” consultant I found it fairly easy to convince the rest of the team to indulge me in a simple experiment: “What if we start at the bottom of the list this time?” It would often lead to some interesting discussions about patient problems that had been overlooked up until that point.

I wonder if there is something in this for us as medical educators as well. Do we get hooked on to-do lists? Do we tend to dwell too much at the top of them and neglect important issues at the bottom? When we design new courses, do we find our ideas flagging toward the end? Do the topics at the end of a seminar series get covered as well as those at the start? Maybe we should #fliptheorder?

Flashback Friday: It’s all fun and games until someone learns, then it’s education

Want to inject a little F-U-N in your teaching? It’s flashback Friday and we are throwing it back to a post on gamification by editorial board member Daniel Cabrera.

————————————————————————–

Originally posted December 18, 2015


The gamification of medical education has been a trending idea but unfulfilled promise for some time. The concept of using elements from ludic games has been adopted by primary and secondary educators for several decades; artifacts like badges of merit, prize-oriented tasks and increasing complexity of objectives (i.e. leveling up) are commonplace in many non-medical learning settings. (A caution: while these techniques may be commonplace, they are often poorly understood.)

Gaming, really.

The idea of gaming as a reputable activity became more attractive with the emergence of generations of teachers and learners adept at digital gaming, ranging from classic consoles (Atari or Nintendo) to massively multiplayer online role-playing games (World of Warcraft). Although the archetype of gamers as geeks drinking Mountain Dew, eating pizza and living in their parents’ basements is present and pervasive, the truth is that most gamers are in their late 30s, with disposable income and (questionable) disposable time invested in these alternative worlds: young professionals and blue-collar workers with daytime jobs and families, but devoted to this activity.

Video games represent a 100-billion-US-dollar industry, with each American spending around 21 minutes per day playing. The industry not only generates large revenue but also careers in game development, as well as scholarly activity and game-related science. In case you didn’t know, you can get a PhD in “video games” or have a lofty life as a game developer or engineer. Digital games are a legit and respectable area of knowledge.

Increasing understanding of the core neuroscience that underpins gaming, along with research into related ideas (e.g., non-linear gameplay), allows us to leverage digital games for teaching and training. The availability of personal devices (e.g., phones) with access to gaming platforms (e.g., Steam) permits basic anywhere-anytime access to games. Sectors such as the military are harnessing this potential at a very rapid rate, off-loading classic face-to-face, face-to-blackboard and face-to-simulator time onto these new learning platforms.

Level up, key concepts

Gaming encompasses a spectrum of activities that involve competition, a set of rules and a defined reward. From a constructivist perspective, games inhabit the Kolb model of learning: concrete experience, reflection, conceptualization and application (experimentation).


Gaming constitutes a model of constant deliberate practice: there is a clear objective, and the observed outcomes of the last action are rapidly analyzed, incorporated into the mental script for the task and implemented in the next round of actions. A game needs to be easy enough to start playing but difficult enough to stay interesting. This explains why most games are arranged around discrete tasks (destroy the alien ship), a series of levels (destroy the current alien fleet) and a long-term objective (stop the invasion of Earth). From a theoretical perspective, the parallel between gaming and learning is very attractive.

In Medical Education

Current evidence does not support the use of gaming in medical education. More precisely, there are no robust data showing that gaming is better than other learning techniques. But it doesn’t have to be. I don’t think anybody would suggest gaming as the sole learning method for medicine. Gaming is probably helpful in some scenarios but quite inadequate in others.

The big promise of gaming in #MedEd is the facilitation of learner-directed, stealth and asynchronous spaces of learning. Stealth refers to the principle of delivering knowledge and skills within the framework of a game in a manner that is perceived not as an industrial instructional method but as a communal and personal journey of growth and enjoyment. The learner is in the driver’s seat, determining the route and the speed. This creates a sense of empowerment and control that is key for gaming and advisable for learning.

A few systematic reviews have looked into didactic instruments that use gaming techniques and their impact on outcomes. The source material is not of strong quality, usually small, single-center interventions aimed at psychomotor skills (e.g., surgical techniques) or raw medical knowledge (e.g., game-show-style quizzes). Most instruments reviewed did not impact behavioural outcomes in learners.

There are some practical and conceptual obstacles to using gaming techniques in medical education. The difficulty of creating a game that is engaging, enjoyable, entertaining, educational and scalable is enormous; we also still don’t have a common language or standard gametrics to assess impact, engagement or even enjoyment. Matching the game to the curriculum requires robust knowledge of games, learners and content.

What we get wrong and how it dooms gaming

I think the problem with gaming in medical education is that we put the label of “game” on a well-crafted, goal-oriented, multi-level tool with a clear set of rules and rewards. The pejorative “game” disguises a good instructional tool.

Games must be epic; games need themes, history and cultural cues to make them identifiable and relevant. Part of the attractiveness of games is their function as a cultural myth, informing the personal identity and narrative that imbues the learner’s life. It is no fun to be the best intern in the Name-Causes-of-Hypomagnesemia game. What people and gamers want is to be Jessica-the-Magnesium-Dragon-Slayer in a hero’s-journey narrative.

Jane McGonigal describes gamers as super-empowered, hopeful individuals. For a game to fully function as an instructional method, it needs to affirm the player’s self-aspiring image and create a community for engagement, social interaction and validation, where the activity contained in the game has a group effect; most importantly, the game needs to provide a relevant, even transcendental, meaning. It is lame to be the best at magnesium, but it is profound to help friends solve difficult cases.

What a good medical education game looks like

It is an epic journey over a solid road

It is an epic journey

  • The game offers the prize of achieving something incredible, worthy of my journey
  • The game provides a meaningful narrative arc and origin myth
  • The game provides a platform for self-affirmation or creation of an identity
  • The game provides a community of practice (community, domain, practice)
  • The actions of the game have a clear, measurable impact, and I get pride/satisfaction from it

Over a solid road

  • The game needs to be grounded in a clear curriculum of knowledge and skills
  • The game requires a set of rules, clear goal, scoring system and reward system
  • The game needs to be fun, entertaining and engaging
  • The game requires instantaneous, short, mid and long feedback loops
  • The game allows for a trainee-player model; the learner is in control of the experience
  • The game has a clear and explicit evaluation system
  • The game allows for infinite play (continuous deliberate practice) until expert level is achieved
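Pulling the two lists together, here is a minimal sketch of what such a structure might look like in code (the quiz content, point values and level thresholds are hypothetical placeholders, not a real curriculum): discrete tasks, explicit rules and scoring, an instantaneous feedback loop, learner control, and replay until an “expert” level is reached.

```python
# Sketch of a game skeleton implementing the "solid road" elements above.
import random

QUESTIONS = [  # the explicit curriculum layer (hypothetical content)
    ("First-line treatment for torsades de pointes?", "magnesium"),
    ("Most common ECG finding in pulmonary embolism?", "sinus tachycardia"),
]
EXPERT_LEVEL = 3  # leveling up = increasing complexity of objectives

def play_round(level: int, score: int) -> int:
    question, answer = random.choice(QUESTIONS)
    guess = input(f"[Level {level}] {question} > ").strip().lower()
    if guess == answer:                    # instantaneous feedback loop
        print("Correct! +10 points")
        return score + 10
    print(f"Not quite; the answer was '{answer}'.")
    return score

def play():
    level, score = 1, 0
    while level < EXPERT_LEVEL:            # infinite play until expert level
        score = play_round(level, score)
        if score >= level * 30:            # clear, explicit scoring rule
            level += 1
            print(f"Level up! Welcome to level {level}.")
        if input("Keep playing? (y/n) > ").strip().lower() != "y":
            break                          # the learner controls the experience
    print(f"Final score: {score}, level {level}")

if __name__ == "__main__":
    play()
```

What this skeleton cannot supply is the “epic journey” layer: the narrative arc, identity and community have to be designed around it.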

References and Further Reading

Image 1 from JD Hancock via flickr CC-BY-2.0

Image 2 from G.B. Kitchen, J. Humphreys, Trends in Anaesthesia and Critical Care 4 (2014) 63–66

Time Management and #MedEd

By Anthony Lewellyn

I have recently been reviewing a number of Leadership and Management modules produced by the Royal College of Psychiatrists in the United Kingdom.

I was reminded during one of the modules that it was Dwight Eisenhower who developed this famous decision matrix, which you may have seen once or twice in your life:

[Image: the Eisenhower decision matrix]

The “Eisenhower Box” apparently enabled Dwight to sustain high levels of productivity over significant periods of time.

I often think that knowing how and what to prioritize is a key challenge in any transition in a medical career. One can see this challenge, for example, when advanced trainees (i.e. senior residents) take on a (junior) consultant role. Suddenly the trainee is responsible for significantly more patients, with other trainees and other health professionals reporting to him or her about these patients.

The tendency is to retreat to the pure medical expert role, reviewing all patients as if one were still the trainee, rather than working through the other members of the team and intervening more judiciously. In organizational psychology this phenomenon is referred to as the Peter Principle. Ken Blanchard of “One Minute Manager” fame wrote a book about this problem called “The One Minute Manager Meets the Monkey” (which I highly recommend to readers!)

Eisenhower of course led a very busy life, having been the Supreme Commander of the Allied Forces in Europe in WWII and then the 34th President of the United States. In between these roles he served as President of Columbia University and Supreme Commander of NATO.

On leaving office in 1961, after 8 years as President, he famously warned of the rise of the military-industrial complex.


Peter Drucker followed on from Eisenhower in “The Effective Executive,” describing effective prioritization as requiring rules about delegating, developing action plans, running efficient meetings and asking “what is it that only I can do?”

Many of us cope with our workload by generating to-do lists. However, the problem with a to-do list (particularly an unstructured one) is that tasks rarely diminish over time; while items are completed, more are added, and the list tends to get longer. The list itself does not guarantee task completion, and the visual presence of an ever-increasing list can increase stress levels.

To-do lists are okay, but I’d recommend adding a prioritization or ranking process like the Eisenhower Box, as sketched below. By being proactive and applying a regular discipline of prioritization, you will be amazed at how much more in control of your work you will feel. You will probably also notice that you are scheduling time for non-urgent but important activities, as well as time to delegate effectively. You may even cancel a few unimportant activities from your diary.
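As a concrete illustration, here is a minimal sketch of an Eisenhower-style prioritization pass over a to-do list (the tasks and the wording of the quadrant actions are my own illustrative assumptions, not from the College modules):

```python
# Classify to-do items into the four Eisenhower quadrants.
TASKS = [  # (task, urgent, important) -- hypothetical examples
    ("Review a resident's failed assessment", True, True),
    ("Plan next year's curriculum", False, True),
    ("Reply to a routine scheduling email", True, False),
    ("Scroll the conference hashtag", False, False),
]

def quadrant(urgent: bool, important: bool) -> str:
    if urgent and important:
        return "Do it now"
    if important:
        return "Schedule time for it"   # non-urgent but important
    if urgent:
        return "Delegate it"
    return "Consider dropping it"       # cancel it from your diary

for task, urgent, important in TASKS:
    print(f"{quadrant(urgent, important):>22}: {task}")
```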