To give patients the best possible care, healthcare providers need to combine the three S’s: the best Systems, the best Science and the best Skills. This is outlined in figure 1. Best Science incorporates concepts such as evidence-based care, the use of randomised controlled trial data, meta-analyses, Cochrane collaborations, the guidelines subsequently developed from these, well-kept clinical databases, protocols and, of course, textbook learning. These are usually assessed by some form of cognitive assessment. Best Skills include technical, communication, ethical and other skills, and their confident assessment is more difficult. These are taught in an apprenticeship model, although simulation offers a great opportunity for better assessment of skills in trainees and the workforce. Finally, there are Best Systems. These include physical facilities, clinical governance, an appropriate workforce culture, human factors, good teamwork, best protocols and so on. Many aspects of this domain are never assessed.
The range of skills amongst practising surgeons is quite varied: Gallagher et al., in a 2003 article in the Journal of the American College of Surgeons, demonstrated wide variation in surgical skill based on the number of errors made in a MIST-VR simulated environment.
Outcomes have long been known to differ between individual surgeons’ practices, and volume has certainly been well demonstrated as a predictor of good surgical outcomes.
The question then is, what contribution can surgical skills simulation realistically make to safety and quality? In the published Abstracts from Medicine Meets Virtual Reality the word ‘safety’ is used 61 times in 21 published Abstracts, and ‘quality’ 75 times in 39 published Abstracts. Thus, quality and safety seem to be drivers of the simulation movement in surgery as well as in anaesthesia, emergency care and nursing.
Unfortunately, quality and safety can be over-promoted and used out of context to justify the need for simulation.
The ‘To Err is Human’ and ‘Crossing the Quality Chasm’ publications by the Institute of Medicine in the United States, two of the triggers for the quality and safety movement, are frequently cited in the simulation literature as drivers for the adoption of simulation, as is the response of the General Medical Council in the United Kingdom to the Bristol Inquiry. Yet these documents contain no more than a paragraph on simulation. If one looks at a variety of publications on quality and safety, one can see that what might be termed “failure of surgical skills” contributes a relatively small amount to poor patient outcomes. Zhan & Miller, in a major paper reviewing 20% of admissions to US hospitals, found that only 2.2% of patient safety-indicator events were due to “technical difficulty / problem”. This compares with approximately 20% for postoperative physiologic and metabolic derangement, 6.5% for venous thromboembolic disease and 7.2% for decubitus ulcers, amongst other patient safety indicators. The overall rate was in fact only 3.2 per thousand discharges at risk. Gawande et al. found that adverse events occurred in approximately 3% of surgical and obstetric patients, but that adverse events overall were no more likely in surgical than in non-surgical care.
If one looks at the safety literature, very little mention is made at present of simulation as a methodology to improve safety. By way of example, Leape, in Evidence Report No. 43 (2001) from the Evidence-Based Practice Centre at Stanford (a report which has been criticised for its focus on error rather than systemic problems), identified 73 practices, 11 of which had the greatest strength of evidence. When these are reviewed, four of the 73 may possibly have been helped by the use of simulation, and simulation itself is mentioned only once in the paper, in the second-lowest group (“lower impact or strength of evidence”).
Does simulation work?
The Australian Safety and Efficacy Register for New Interventional Procedures—Surgery (ASERNIP-S) reviewed surgical simulation by way of a systematic review. This work was subsequently published in scientific format in Annals of Surgery. The conclusion of the assessment was that “while there may be compelling reasons to reduce reliance on patients, cadavers and animals for surgical training, none of the methods of simulator training has yet been shown to be better than other forms of surgical training”.
This is hardly surprising given the relative novelty of surgical simulator training and the fact that even in aviation at least 70% of aircraft accidents and incidents have been found to be caused not by deficiencies in pilots’ technical skills but by a lack of human factors skills. The aviation environment is frequently thought to be analogous to that of the operating room.
Why then should we teach surgical skills on simulators? This has been addressed in particular in an article by Gallagher, which described a model of attentional resources for the master versus the novice surgeon. The model proposes that the novice surgeon, much like the learner driver early in their experience, devotes nearly all of their attentional resources to psychomotor performance, depth and spatial perception, and operative judgement and decision making. This leaves few attentional resources that can be devoted to comprehending instructions and gaining knowledge. By pre-training on the simulator, the attentional resources required for psychomotor performance, depth and spatial judgement, and operative judgement and decision making can be reduced.
This model has subsequently been explored by Fried and his colleagues at McGill University (personal communication). They have confirmed this using a laparoscopic simulator trainer, having candidates solve mathematical problems while performing tasks within the trainer.
Indeed, it has even been suggested within Gallagher’s model that the attentional resources of the below average surgeon may be so challenged by the laparoscopic environment as to reduce their overall effectiveness.
There is increasing evidence emerging, however, that simulation in surgical training is effective. Further reviews that we have conducted with ASERNIP-S have tended to indicate that skills acquired by simulation-based training are transferable to the operative setting. This has been demonstrated in ten randomised controlled trials and one non-randomised study to date. Unfortunately, these studies have been of variable quality and did not use comparable simulation-based training methodologies.
There is, therefore, a need for larger numbers of trainee assessments in different techniques, but using equivalent methodologies. However, the problem remains that even if every patient’s admission were completely free of technical error, approximately 97% of medical errors and bad outcomes would still occur. Simulator training may enable us to reduce the attentional resources needed by novice and master surgeons in any particular setting, improving their situational awareness, teamwork and human factors and thereby helping to prevent these other, numerically larger, causes of harm.
Technical skills contribute only about 2-3% of quality and safety problems. Surgical simulation therefore cannot be restricted to procedural simulation alone if it is to have a real impact. We in the simulation community should not make exaggerated promises that we cannot keep. Simulation should be assessed with the same rigour as any other intervention in healthcare, and we should not lose sight of the fact that outcomes are the ultimate measure of any intervention in healthcare.