Technology-enhanced simulation for health professions education: a systematic review and meta-analysis
Cook DA, Hatala R, Brydges R, Zendejas B, Szostek JH, Wang AT, Erwin PJ, Hamstra SJ
CRD summary
This review concluded that, compared with no intervention, technology-enhanced simulation training for health professional learners was associated with large effects for the outcomes of knowledge, skills, and behaviour, and with moderate effects for patient-related outcomes. Despite limited information on the effects of specific approaches, these conclusions and the authors' recommendations for research appear to be reliable.

Authors' objectives
To identify and quantitatively summarise studies of technology-enhanced simulation training for health professional learners, compared with no intervention.

Searching
MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, and Scopus were searched for studies published in any language up to May 2011. Search terms were reported. The reference lists of included articles and published reviews, and two key journals, were also searched.

Study selection
Studies were eligible if they evaluated technology-enhanced simulation for teaching health professionals, compared with no intervention, and reported learning outcomes (knowledge, or time, process, or product skills), behaviours, or effects on patients. Single-group before-and-after studies and two-group non-randomised or randomised studies were included, as were studies of simulation added to other training. Most participants in the selected studies were medical or nursing students, physicians in practice or postgraduate training, or other health professional learners, including veterinarians. Training covered a range of clinical topics, including surgery (minimally invasive or other), resuscitation and trauma, endoscopy and ureteroscopy, physical examination, intubation, communication and team skills, vascular access, obstetrics, anaesthesia, endovascular procedures, and dentistry. Two reviewers independently selected studies for inclusion, with any conflicts resolved by consensus.
Assessment of study quality
Study validity was assessed using the Medical Education Research Study Quality Instrument (MERSQI) and an adaptation of the Newcastle-Ottawa Scale covering sample representativeness, cohort comparability, randomisation, allocation concealment, and blinding, where relevant.

Data extraction
Data were extracted on the training level of participants, clinical topic, training location (simulation centre or clinical environment), study design, method of group assignment, and outcomes. Standardised mean differences (Hedges' g effect sizes) were calculated for each study outcome from reported means and standard deviations, or from statistical test results such as probability values. Where these data were unavailable, the average standard deviation from all studies reporting that outcome was used, or study authors were contacted for the relevant data. For two-group before-and-after studies, post-test means were adjusted for pre-test values, or adjusted statistical test results were used; where these were unavailable, the difference in change scores was standardised using the pre-test variance. For crossover studies, means or exact statistical test results adjusted for repeated measures were used; where these were unavailable, means pooled across each intervention were used. Two reviewers independently extracted data, with any conflicts resolved by consensus.

Methods of synthesis
Effect sizes were pooled using random-effects models. Inconsistency across outcomes was assessed using I², with values greater than 50% indicating high inconsistency. Subgroup analyses were conducted by study design, quality score, and selected instructional design features, using the Z-test to investigate interactions. Sensitivity analyses excluded studies whose effect sizes were calculated from estimated statistical test results or imputed standard deviations.
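The two core calculations described above — a standardised mean difference with small-sample correction (Hedges' g) and a random-effects pooled estimate with the I² inconsistency statistic — can be sketched as follows. This is a minimal illustration using the standard Hedges' g and DerSimonian-Laird formulas, not the review authors' own code; all function names and inputs are hypothetical.

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference with small-sample correction (Hedges' g)."""
    # Pooled standard deviation across the two groups
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled              # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)       # small-sample correction factor
    return j * d

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate (DerSimonian-Laird) with 95% CI and I²."""
    k = len(effects)
    w = [1 / v for v in variances]        # fixed-effect (inverse-variance) weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)    # between-study variance estimate
    w_star = [1 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2
```

With identical study results the between-study variance collapses to zero and I² is 0%; in the review, I² exceeded 50% for all main analyses, indicating that study results varied far more than sampling error alone would predict.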
Publication bias was assessed using the Egger asymmetry test; where asymmetry was found, the trim-and-fill method was used to calculate revised pooled effect estimates.

Results of the review
A total of 609 studies (35,226 participants) were included in the review: 137 randomised studies, 67 non-randomised studies with two or more groups, and 405 before-and-after studies. Inconsistency was high (I² > 50%) for all main analyses. Pooled effect sizes were statistically significant, favouring the intervention, for knowledge outcomes (1.20, 95% CI 1.04 to 1.35; 118 studies), time skills (1.14, 95% CI 1.03 to 1.25; 210 studies), process skills (1.09, 95% CI 1.03 to 1.16; 426 studies), product skills (1.18, 95% CI 0.98 to 1.37; 54 studies), time behaviours (0.79, 95% CI 0.47 to 1.10; 20 studies), other behaviours (0.81, 95% CI 0.66 to 0.96; 50 studies), and direct effects on patients (0.50, 95% CI 0.34 to 0.66; 32 studies). Subgroup analyses found no statistically significant interactions between simulation training and instructional design features or study quality. Sensitivity analyses gave results similar to the main analyses.

Authors' conclusions
Compared with no intervention, technology-enhanced simulation training was consistently associated with large effects for the outcomes of knowledge, skills, and behaviour, and with moderate effects for patient-related outcomes.

CRD commentary
The review question was clearly but broadly defined, with relevant selection criteria. Clear attempts were made to identify all the relevant literature and to minimise the potential for error or bias throughout the review process. Study validity was assessed and incorporated into the analysis; higher-quality studies with more robust designs tended to show smaller effects. The approach to synthesis appears to have been appropriate given the diversity of the interventions and outcomes being pooled.
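The Egger test mentioned above detects funnel-plot asymmetry by regressing each study's standardised effect (effect divided by its standard error) on its precision (1/SE); an intercept significantly different from zero suggests small-study effects such as publication bias. A minimal sketch, using a plain least-squares fit on hypothetical data rather than any code from the review:

```python
import math

def egger_intercept(effects, standard_errors):
    """Egger regression: effect/SE regressed on 1/SE.
    Returns the intercept and its t-statistic; a non-zero
    intercept suggests funnel-plot asymmetry."""
    z = [y / se for y, se in zip(effects, standard_errors)]  # standardised effects
    p = [1 / se for se in standard_errors]                   # precisions
    n = len(z)
    mean_p, mean_z = sum(p) / n, sum(z) / n
    sxx = sum((pi - mean_p) ** 2 for pi in p)
    sxy = sum((pi - mean_p) * (zi - mean_z) for pi, zi in zip(p, z))
    slope = sxy / sxx
    intercept = mean_z - slope * mean_p
    # Residual variance and standard error of the intercept
    resid = [zi - (intercept + slope * pi) for pi, zi in zip(p, z)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    se_int = math.sqrt(s2 * (1 / n + mean_p ** 2 / sxx))
    return intercept, intercept / se_int
```

When asymmetry is detected, trim-and-fill (as used in the review) imputes the "missing" studies on the sparse side of the funnel plot and recomputes the pooled estimate.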
The review's broad inclusion criteria limited precise conclusions on the effectiveness of specific forms of simulation, but the authors' main conclusions and subsequent recommendations for future research appear to be reliable.

Implications of the review for practice and research
Practice: The authors did not state any implications for practice.
Research: The authors stated that theory-based comparisons between different technology-enhanced simulation designs, conducted so as to minimise bias, achieve appropriate power, and avoid confounding, together with rigorous qualitative studies, were needed to clarify how to use these approaches most effectively and cost-efficiently. They stated that further comparisons of simulation against no intervention were no longer necessary.

Funding
Supported by intramural funds, including an award from the Division of General Internal Medicine, Mayo Clinic, USA.

Bibliographic details
Cook DA, Hatala R, Brydges R, Zendejas B, Szostek JH, Wang AT, Erwin PJ, Hamstra SJ. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA 2011; 306(9): 978-988.

Indexing status
Subject indexing assigned by NLM

MeSH
Computer Simulation; Computer-Assisted Instruction; Education, Medical /methods; Education, Professional /methods; Health Personnel /education; Humans

Accession number
12011005200

Date bibliographic record published
14/09/2011

Date abstract record published
28/09/2011

Record status
This is a critical abstract of a systematic review that meets the criteria for inclusion on DARE. Each critical abstract contains a brief summary of the review methods, results and conclusions, followed by a detailed critical assessment of the reliability of the review and the conclusions drawn.