The National Quality Forum (NQF) published a report in 2002 defining 27 “serious reportable events” in healthcare, with one additional event added in 2006, for a total of 28 “never events”: events that should never occur or are largely preventable.1,2 A goal in this era of measuring quality through outcomes is to have none of these so-called “never events”. Simulation has proven to be a very successful way to improve outcomes in healthcare.
Simulation has been used for decades in different fields. It has been defined by Gaba as “A technique (not a tool or technology) to replace, augment or amplify reality with guided experiences, often immersive in nature, that evoke or replicate substantial aspects of the real world in an interactive fashion”.3
A simulator is essentially a surrogate for the real patient and can take many varied forms. A life-size doll called Mrs. Chase was built in 1911 by Martha Jenkins Chase (a doll maker) to train nurses in how to dress, turn, and transfer patients, and an upgraded version called Arabella, introduced in 1914, allowed nurses to practice injections.4,5 Later, in 1940, a male version of the mannequin was requisitioned by the US Army to teach medical corpsmen the techniques of medical care.6 In 1960, a mannequin called Rescue Annie was created to train people in mouth-to-mouth ventilation, and, in the same year, Sim One, a high-fidelity anesthesia simulator, was also developed.7
Today, several companies produce all kinds of simulators to assist with the training of health care personnel.
In the past, the Halstedian method was the basis of surgical teaching. Doctor William Stewart Halsted, an American surgeon, had a famous motto that characterized his method: “See one, do one, and teach one”. The expectation was that students could perform a procedure after merely observing another surgeon perform it, and were then considered ready to teach it to other students. This method was not applied only in surgery; it extended to any procedure performed by health care workers. And although some people learned new skills by practicing on models such as cadavers, tissue blocks, or animals in the laboratory, this practice was typically informal and not necessarily integrated into an overall education curriculum. The psychomotor skills that trainees needed to become proficient were acquired mostly while training on real patients, sometimes leading to fatal mistakes.
After the NQF reviewed the “never events”, simulation emerged as a technique to improve outcomes. Some programs, however, made the mistake of thinking that the simulator was more important than the curriculum: they purchased simulators without creating clear educational objectives and started training without a defined way to give feedback to the trainees and without evaluation tools. We now know that the goal of education, including the teaching of technical skills, is to improve, and ultimately excel in, the desired outcomes. Ideally, a simulation center should be multidisciplinary in nature, with different specialties working together to develop a curriculum and create a system to evaluate each student. Through this collaboration, the center can draw on committed faculty, facilities, and equipment to improve the learning process and to educate many more personnel than would be possible with a single-specialty training facility.8
Expert review and evaluation provide robust and useful formative and summative feedback. The expert can coach the trainee and also grade performance using structured assessment tools. Choosing the appropriate assessment tool depends on the specifics of the simulation and may vary depending on the goals and intended audience of the curriculum. For example, time and error metrics may be appropriate for simulations teaching psychomotor skills that rely on self-guided learning, whereas training in complex procedural tasks may benefit from expert review, coaching, and assessment.9
For example, at our WWAMI Institute for Simulation in Healthcare (WISH) at the University of Washington in Seattle, a Central Venous Catheterization (CVC) module was created in collaboration among different specialties. Through this development process, the participating specialties (anesthesia, internal medicine, family medicine, and surgery) were able to agree on a common technique using a standardized CVC kit, providing clear documentation and reducing practice variation. The main goals of this process were to improve outcomes, to reduce complications (Central Line Associated Bloodstream Infection is one of the “never events”), and to reduce costs. All physicians who place central lines in our hospital system are required to obtain certification in this course before placing their first three lines under direct supervision.10 All of the simulation courses taught at WISH have a didactic component that can be accessed online, facilitating the learning process by allowing students to use the tool at their own pace.11
Another example of interaction among different practitioners is being developed at the Seattle VA Hospital, where mock scenarios are run weekly in the operating room with the participation of nurses, anesthesiology personnel, and several surgical specialties. It has been a wonderful exercise for everyone and has served to identify issues that would have been problematic had the situations occurred in real life, making the pre-, intra-, and postoperative environment safer, not only for our patients but for everyone.
I have to agree with Dr. J.I. Curry's modification of Halsted's motto. In this era of medicine, a better model for teaching any procedure is: “See one, practice on a simulator (with feedback), do one”.12
Financing
No financing was received by the authors to write this article.

Conflicts of interest
The authors have no conflicts of interest to declare.
Please cite this article as: Figueredo EJ. Simulación en salud. Rev Colomb Anestesiol. 2016;44:270–271.