One of the challenges we face as educators is aligning competing goals. High stakes assessment is a good example. Is the goal of an assessment to promote learning or to determine whether learning has occurred? There are good rationales (and evidence) to support both goals. Diving deeper into the argument, should a clinician educator (CE) provide specific high stakes exam data to a learner to help them identify their “lacunae” (and develop a remedial plan)? What about the considerable design costs of reproducing (de novo) a reliable high stakes exam?
This KeyLIME podcast can’t answer the questions above. However, it does suggest some preliminary answers about the implied assumption – that learners will USE the feedback they receive from a high stakes assessment.
To learn more, LISTEN to the podcast here or READ the podcast summary below.
– Jonathan (@sherbino)
KeyLIME Session 74 – Article under review:
View/download the abstract here.
Harrison CJ, Könings KD, Molyneux A, Schuwirth LW, Wass V, van der Vleuten CP. Web-based feedback after summative assessment: how do students engage? Medical Education, Jul 2013, 47(7): 734-44
Reviewer: Jonathan Sherbino (@sherbino)
The 2010 Ottawa conference consensus statement on assessment (PMID: 21345060) argued that assessment serves a number of purposes, including a catalytic role (“The assessment provides results and feedback in a fashion that creates, enhances, and supports education; it drives future learning forward.”).
The challenge with summative assessment is providing meaningful feedback to learners that:
• doesn’t impair the security/operations of a recurring summative exam;
• is specific enough to the learner to influence future learning; and
• is provided in a timely fashion to learners as they embark on a new curriculum.
As a result, feedback in summative assessments is rarely studied/reported in the literature.
This paper sought to determine the impact of:
• goal orientation (learning v. performance);
• motivation (intrinsic – internal desire to succeed v. extrinsic – external markers of success);
• control of learning (personal v. external); and
• self-efficacy (high v low personal belief in ability to succeed) on the use of feedback post summative assessment.
Type of paper
Key Points on the Methods
• Third year (out of 5; transition to clerkship) medical students
• Participants completed a 51-item questionnaire measuring learning characteristics/attitudes
– Likert scale
– Completed 2 weeks in advance of OSCE
• 12 station OSCE
• A website was developed that provided feedback on performance
– Station-by-station pass/fail marks
– Global scores with comparison to cohort
– Sub-analysis of skill performance within a station
– “next step” pages that provided learning plans to meet deficits
– data available up to 2 months post OSCE
• Analysis via
– Latent class analysis (a form of cluster analysis)
Key Outcomes
• 82% (n=113) completed questionnaire
– good internal reliability of scales
• Mean scores suggest participants:
– want to master the curriculum;
– avoid the perception of struggling with the curriculum;
– do not value the perception from peers of excelling in the curriculum;
– do not want to hide their accomplishments from peers;
– are intrinsically motivated to learn;
– are able to control their learning;
– have high self-efficacy;
– value feedback, especially frequent feedback; and
– are willing to risk exposing lack of knowledge when seeking feedback.
• 96% (n=132) visited website
– 87% (n=115) first day available
• Mean number of visits = 1.9 (range 1-5)
• Mean number of pages viewed = 123 (range 2-377)
– 130 unique pages available
– Comprehensive users valued feedback more
– Minimal users had more extrinsic motivation
• Excellent students (no failed OSCE stations)
– Visited website more frequently
– Viewed more pages
– Used “next step” pages less frequently
The authors conclude: “Higher performing students appeared to use the feedback more for positive affirmation than for diagnostic information. Those arguably most in need engaged least. We need to construct feedback after summative assessment in a way that will more effectively engage those students who need the most help.”
Spare Keys – other take home points for clinician educators
This is a great example of “multiple wins.” The primary author is from the UK. Two senior authors are associated with the Maastricht School of Health Professions Education. I suspect that course work for a graduate degree has been transformed into this valuable education research paper.
Access KeyLIME podcast archives here