Author(s): Schwid HA, Rooke GA, Carline J, Steadman RH, Murray WB,
BACKGROUND: Anesthesia simulators can generate reproducible, standardized clinical scenarios for instruction and evaluation purposes. Valid and reliable simulated scenarios and grading systems must be developed to use simulation for evaluation of anesthesia residents.

METHODS: After obtaining Human Subjects approval at each of the 10 participating institutions, 99 anesthesia residents consented to be videotaped during their management of four simulated scenarios on MedSim or METI mannequin-based anesthesia simulators. Using two different grading forms, two evaluators at each department independently reviewed the videotapes of the subjects from their institution to score the residents' performance. A third evaluator, at an outside institution, reviewed the videotapes again. Statistical analysis was performed for construct- and criterion-related validity, internal consistency, interrater reliability, and intersimulator reliability. A single evaluator reviewed all videotapes a fourth time to determine the frequency of certain management errors.

RESULTS: Even advanced anesthesia residents nearing completion of their training made numerous management errors; however, construct-related validity of mannequin-based simulator assessment was supported by an overall improvement in simulator scores from CB and CA-1 to CA-2 and CA-3 levels of training. Subjects rated the simulator scenarios as realistic (3.47 of a possible 4), further supporting construct-related validity. Criterion-related validity was supported by moderate correlation of simulator scores with departmental faculty evaluations (0.37-0.41, P < 0.01), ABA written in-training scores (0.44-0.49, P < 0.01), and departmental mock oral board scores (0.44-0.47, P < 0.01). Reliability of the simulator assessment was demonstrated by very good internal consistency (alpha = 0.71-0.76) and excellent interrater reliability (correlation = 0.94-0.96; P < 0.01; kappa = 0.81-0.90). There was no significant difference between METI and MedSim scores for residents in the same year of training.

CONCLUSIONS: Numerous management errors were identified in this study of anesthesia residents from 10 institutions. Further attention to these problems may benefit residency training, since advanced residents continued to make these errors. Evaluation of anesthesia residents using mannequin-based simulators shows promise, adding a new dimension to current assessment methods. Further improvements are necessary in the simulation scenarios and grading criteria before mannequin-based simulation is used for accreditation purposes.
This article was published in Anesthesiology and referenced in Journal of Anesthesia & Clinical Research.