This week the KeyLIME-ers look at Health Professions Education (HPE) surveys and, more specifically, what makes an effective survey. While surveys are widely used in HPE, there has been little research into their quality and effectiveness. Poorly designed surveys can produce poor-quality data, so this is certainly an important question for survey designers to consider. Listen here to the co-hosts discuss the results of the research presented.
KeyLIME Session 303
Artino et al., “The Questions Shape the Answers”: Assessing the Quality of Published Survey Instruments in Health Professions Education Research. Acad Med. 2018 Mar;93(3):456-463.
Jon Sherbino (@sherbino)
Listen to this stat: more than 50% of original research studies in the top three HPE journals (by impact factor) use a survey methodology. My very first publication was… wait for it… a survey. There is a bit of a soapbox vibe to this episode, in that as a collaborator and reviewer I feel inundated with HPE surveys. I’m not suggesting that the survey is a “bad” methodology. In fact, I’m very proud of a survey on imposter syndrome that I am working on with my colleague Jessica Cheung. While surveys are not bad per se, there are a lot of bad surveys out there. I think it’s a function of the deceptive appearance that surveys are easy to do. Well, KeyLIMERs, we are here to suggest that good survey methodology is rigorous. And hard.
The authors state:
“we lack insight into the quality of the survey instruments used in HPE research. To fill this gap, we reviewed research articles from several high-impact HPE journals to assess the quality of published survey instruments. … we coded the quality of the survey items using a rubric … based on the survey design literature. We … hope to inform …: investigators who intend to use surveys as research tools and journal reviewers and editors who determine which survey studies are published.”
Key Points on the Methods
A hand search of Academic Medicine, Medical Education and Advances in Health Sciences Education for the year 2013 was performed. Only surveys that respondents completed on their own were included (e.g., a survey used in a focus group was excluded).
Each survey was evaluated with a scoring rubric that captured validity evidence for the instrument and was based on best practices in survey design. Best practices included:
- Response options do not ask respondents to agree/disagree with the item
- No multi-barreled items (only one question or premise per item)
- Each response option is labelled
- Response options are evenly spaced (each anchor on the Likert scale is evenly distributed visually)
- Non-substantive response options (e.g., don’t know, not applicable) are formatted separately from substantive response options to prevent visual skew of the scale
Coding was done independently by two authors. Discrepancies were resolved by consensus.
Of the original research articles published, 52% (185/356) used surveys. More than two-thirds of self-administered surveys did not report their survey items, making them unavailable for analysis.
There were 37 self-administered surveys with items available for coding.
Coding was highly reliable (ICC 0.975; 95% CI 0.966–0.981).
Validity data included:
- Scoring (27%)
- Generalisation (19%)
- Extrapolation (0%)
- Implications (0%)
Of 733 total items, 591 Likert-type items were available for review, with a mean of 16 items per survey (range 1–55).
At least one deviation from best practice occurred in 35 of the 37 (95%) surveys. Deviations included:
- Agreement response options: 57% of surveys with at least one item; 45% of Likert items
- Multi-barreled items: 65% of surveys with at least one item; 17% of Likert items
- Unlabelled response options: 42% of surveys with at least one item; 34% of Likert items
- Unevenly spaced response options: 47% of surveys with at least one item; 47% of Likert items
- Non-substantive response options formatted with substantive options: 43% of surveys with at least one item; 25% of Likert items
** The denominator varies for each of these measures (8–37 surveys; 150–591 Likert items) based on the data available in each manuscript.
The authors conclude…
“Published HPE survey-based research could profit from more informed survey design and better reporting. In the end, failure to follow best practices in survey design and reporting has the potential to negatively impact HPE investigations, the majority of which employ survey methodology. Through…more stringent requirements by journal reviewers and editors, the field of HPE can begin to strengthen the quality of its survey research.”
Access KeyLIME podcast archives here
The views and opinions expressed in this post and podcast episode are those of the host(s) and do not necessarily reflect the official policy or position of The Royal College of Physicians and Surgeons of Canada. For more details on our site disclaimers, please see our ‘About’ page