How we work
A simulation is a representation of a situation over time, used to predict and optimize its outcomes. Its development is therefore critical to producing the desired effects. Following a systematic methodology, despite requiring a considerable amount of time, greatly increases the chances of an effective result. One of the most reliable development methodologies, designed by R. Satava, is called “Full life-cycle curriculum development”1. The development process begins with the definition of the expected outcomes.
Outcomes & Metrics
The expected outcomes are decided by a consensus of experts and may refer to a full procedure or to part of it. After this first decision, the operation to be simulated is analyzed in depth through a process called Cognitive Task Analysis (CTA)2. The CTA can be structured as an interview with one or more experts, aiming not only to define the individual steps of the procedure but also to understand the decision pathway the expert follows. A series of questions is formulated to define the available options, the reasons why one approach may be more favorable than another, and the information needed to decide which option is best. Within the CTA, the following data need to be collected:
– Indications
– Contraindications
– Equipment
– Procedural steps (listing all available techniques)
– Do’s
– Don’ts
Each of these items will later be used to build a specific part of the curriculum. Once the CTA is completed, it needs to be compared against published guidelines to avoid experience-based bias and then shared with the experts. In this phase, the Delphi method can be used to reach consensus.
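As a purely illustrative aid, the sketch below shows how a Delphi round over candidate CTA items could be tallied; the 1–9 rating scale and the 80% agreement threshold are common choices in Delphi studies but are assumptions here, not prescriptions of the methodology.

```python
from dataclasses import dataclass, field

# One CTA item under Delphi review. The 1-9 Likert scale and the 80%
# agreement threshold are illustrative assumptions, not part of the
# published methodology.

@dataclass
class CTAItem:
    category: str   # e.g. "indication", "procedural step", "don't"
    statement: str  # candidate item proposed to the expert panel
    ratings: list[int] = field(default_factory=list)  # one 1-9 rating per expert

def reached_consensus(item: CTAItem, cutoff: int = 7, agreement: float = 0.8) -> bool:
    """True if enough experts rate the item at or above the cutoff."""
    if not item.ratings:
        return False
    share = sum(r >= cutoff for r in item.ratings) / len(item.ratings)
    return share >= agreement

# Items that fail the check are reworded and re-rated in the next round.
step = CTAItem("procedural step",
               "Insufflate to 12 mmHg before trocar placement",
               ratings=[8, 9, 7, 6, 8])
print(reached_consensus(step))  # True: 4 of 5 experts rated >= 7
```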
Curriculum development
After the preliminary data collection, the actual educational protocol starts to take shape. The information gathered within the CTA is now used as follows:
– Indications and contraindications will provide information for the cognitive addendum of the training session;
– Equipment will serve as an instrumentation list and clarify what is needed to set up the hands-on training session;
– Procedural steps will provide core information about the operation to simulate and its details;
– Do’s and don’ts will make it possible to define the goals and errors of the training tasks.
Based on these considerations, a preliminary training-task description is produced, covering not only the sequence of maneuvers to perform but also the errors to avoid and the requirements of the simulator to be used, which will drive the next development step.
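For readers who organize curriculum data digitally, the hypothetical sketch below captures the mapping just described as a simple data transformation; all field names are illustrative and not prescribed by the methodology.

```python
from dataclasses import dataclass

# Hypothetical containers: the CTA output on one side, the curriculum
# components it feeds on the other.

@dataclass
class CTAOutput:
    indications: list[str]
    contraindications: list[str]
    equipment: list[str]
    procedural_steps: list[str]  # all accepted techniques, in order
    dos: list[str]
    donts: list[str]

@dataclass
class TrainingTask:
    cognitive_addendum: list[str]  # background knowledge for the session
    instrumentation: list[str]     # what to procure for the hands-on setup
    task_steps: list[str]          # maneuvers in the order performed
    goals: list[str]               # derived from the do's
    errors: list[str]              # derived from the don'ts

def build_task(cta: CTAOutput) -> TrainingTask:
    """Map CTA data onto the preliminary training-task description."""
    return TrainingTask(
        cognitive_addendum=cta.indications + cta.contraindications,
        instrumentation=cta.equipment,
        task_steps=cta.procedural_steps,
        goals=cta.dos,
        errors=cta.donts,
    )
```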
Simulator development
This is the most stimulating part of the process, as it allows the team to check and test all the devices that may serve the designed tasks. In a first phase, the relevant simulators already on the market are tested. This is critical to highlight their pros and cons, to check the feasibility of the task on each of them, and to identify possible upgrades or modifications to apply. Tests are usually run by a cohort of experts who receive a detailed task description in advance. If a simulator can be used with minimal modifications, it is usually preferred to a brand-new product design, which may require dedicated investment. If no system fulfills the requirements, a new simulator is designed. Simulator development requires close collaboration among educators, engineers and physicians. While the educator provides insight into the correct methodology to use, the engineers develop each component (3D printing, electronics, materials, software) and the physician double-checks and supports the whole process. After the early development, the prototype is tested again by the experts, who may ask for changes to allow easier replication of the original training task. Once the simulator has been finalized, it is checked once more for any modifications required by production.
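One informal way to structure the market survey is to score each candidate simulator against the task requirements before deciding whether to adapt an existing device or design a new one. The sketch below is a hypothetical scoring helper; the requirement names and the viability threshold are assumptions, not part of the published methodology.

```python
# Hypothetical helper for the market-survey step: score each candidate
# simulator by how many task requirements it meets, and decide whether a
# minimally modified off-the-shelf device is viable before designing a
# new one. Requirement names and the threshold are assumptions.

REQUIREMENTS = {
    "reproduces_anatomy", "accepts_standard_instruments",
    "allows_error_detection", "portable", "reusable",
}

def survey(candidates: dict[str, set[str]], min_met: int = 4) -> str | None:
    """Return the best candidate meeting at least `min_met` requirements."""
    best = max(candidates, key=lambda name: len(candidates[name] & REQUIREMENTS))
    met = len(candidates[best] & REQUIREMENTS)
    return best if met >= min_met else None  # None -> design a new simulator

choice = survey({
    "simulator_A": {"reproduces_anatomy", "portable", "reusable"},
    "simulator_B": {"reproduces_anatomy", "accepts_standard_instruments",
                    "allows_error_detection", "reusable"},
})
print(choice)  # "simulator_B": usable with minimal modifications
```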
Validation studies
Validation might be considered the most important step, but careful development drastically increases the chances of success. According to the latest concept inspired by Messick’s framework of validity3,4, validation focuses mainly on how the simulator was designed, how relevant the background of the surgeon approaching it is, and how well the assessment captures the actual acquisition of skills5. According to the updated validity taxonomy summarized by Goldenberg6, validation includes the following aspects: test content, response processes, internal structure, relationships to other variables and consequences of testing. Test content pertains to the ability of the simulator to produce the expected outcomes, usually decided by a cohort of experts. Response processes concern the assessment methodology and its ability to reflect and score the observed performance of the trainee. Internal structure focuses again on the assessment methodology, its replicability and statistical reliability. Relationships to other variables correlate the performance with known measures of skill or ability, such as the clinical background of the participant. Consequences of testing consider the relationship between the assessment and what comes after the training itself (e.g. improvement in actual surgical practice).
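Two of these evidence sources lend themselves to simple statistics. Purely as an illustration, the sketch below computes Cronbach’s alpha for internal structure (consistency across assessment items) and a Spearman correlation for the relationship to another variable (here, hypothetically, years of clinical experience); all the data are invented for the example.

```python
import numpy as np
from scipy.stats import spearmanr

# Invented data: rows = trainees, columns = assessment items (0-10 scale).
scores = np.array([
    [7, 8, 6, 7],
    [4, 5, 5, 4],
    [9, 9, 8, 9],
    [6, 7, 6, 5],
    [3, 4, 2, 3],
])

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-structure evidence: consistency across assessment items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Relationship to other variables: does performance track a known measure
# of ability? The comparator here is hypothetical years of experience.
experience_years = [5, 2, 9, 1, 3]
total_scores = scores.sum(axis=1)
rho, p_value = spearmanr(total_scores, experience_years)

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")  # 0.98 for these data
print(f"Spearman rho: {rho:.2f} (p = {p_value:.3f})")     # rho = 0.60 here
```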
Validation is not absolute, however: a validated simulator may be more or less beneficial to a trainee, depending on several variables and, most importantly, on the teaching ability of the tutor7.
Implementation
Before becoming an actual “assessment tool”, the validated protocol has to be tested as a “training tool” on a large scale, together with its simulator. Wide distribution makes it possible to assess the feasibility of the teaching model in a regular setting, which can be a simulation center, a university class or a conference, depending on the previously set goals. The implementation phase tests the portability of the simulator, the replicability of the training session and, overall, the “standardizability” of the entire training system. Feedback is collected in this phase to check whether the participants and the funding companies are satisfied and whether their expectations are met. Once again, since standardization is the core of high-quality training, this phase is fundamental to making sure that everything works correctly.
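A minimal sketch of how implementation feedback might be tallied across sites is shown below; the 1–5 rating scale and the flagging threshold are assumptions for illustration only.

```python
from statistics import mean

# Invented feedback: 1-5 satisfaction ratings collected per site during
# the implementation phase. The 4.0 flagging threshold is an assumption.
feedback = {
    "simulation_center": [5, 4, 5, 4, 5],
    "university_class": [4, 3, 4, 4],
    "conference_workshop": [3, 4, 2, 3, 3],
}

for site, ratings in feedback.items():
    avg = mean(ratings)
    flag = "" if avg >= 4.0 else "  <- review session replicability here"
    print(f"{site}: {avg:.2f}{flag}")
```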
Issue certification
The final part of curriculum development endorses the assessment properties of the entire protocol and gives meaning to the name suggested by Satava: Full life-cycle curriculum development1. Indeed, issuing the certification confirms the acquisition of the skills planned during the first phase, outcomes and metrics. This “closes the circle”: the certified outcomes need to correspond exactly to what was expected from the very beginning, during the early consensus meetings.
References
- Satava R, Gallagher A. Next generation of procedural skills curriculum development: Proficiency-based progression. J Health Spec. 2015;3(4):198. doi:10.4103/1658-600X.166497
- Salmon P, Stanton N, Gibbon A, Jenkins D, Walker G. Cognitive Task Analysis. In: Human Factors Methods and Sports Science; 2009. doi:10.1201/9781420072181-c4
- Messick S. Validity of Psychological Assessment. Am Psychol. 1995. doi:10.1037/0003-066X.50.9.741
- Korndorffer JR, Kasten SJ, Downing SM. A call for the utilization of consensus standards in the surgical education literature. Am J Surg. 2010;199(1):99-104. doi:10.1016/j.amjsurg.2009.08.018
- Sweet RM, Hananel D, Lawrenz F. A unified approach to validation, reliability, and education study design for surgical technical skills training. Arch Surg. 2010. doi:10.1001/archsurg.2009.266
- Goldenberg M, Lee JY. Surgical Education, Simulation, and Simulators—Updating the Concept of Validity. Curr Urol Rep. 2018;19(7). doi:10.1007/s11934-018-0799-7
- Satava RM. The future of surgical simulation and surgical robotics. Bull Am Coll Surg. 2007.