Section Editor: Donald E Fry
Parts
Part 1A Core Surgical Concepts
- The Acquisition of Surgical Knowledge
- Perioperative Risk Assessment and Care of the Surgical Patient
- The Operating Room Environment
- Wound Healing
- Fluids and Electrolytes
- Hemorrhage, Hemostasis, and Transfusion
- Infection, Inflammation, and Antibiotics
- Minimally Invasive Surgery
- Informatics, Outcomes, and Economics of Surgical Care
- Case Study: Laparoscopic Access to the Multiply Operated Abdomen
- Case Study: Nutritional Support of the GI Fistula Patient
Part 1B Special Considerations
- Anesthesia and Pain Control
- Principles of Endoscopy
- Vascular Access
- Diagnostic and Interventional Radiology
- Principles of Oncology
- Geographic Factors Influencing Surgical Practice
- Surgical Management of the Elderly
- Surgical Management of Children and Neonates
- Case Study: Preventable Complications During Central Venous Access
- Case Study: Preparation of the Elderly Patient for Elective Surgery
- Case Study: Management of the Severe Pulmonary Contusion
Part 1C Critical Care
- Assessment and Monitoring in Critical Illness
- Shock
- Nutritional Support
- Pulmonary Support
- Cardiac Assessment and Support of the Surgical Patient
- Acute Kidney Injury
- Case Study: Source Control in the Management of Intra-abdominal Infection
INTRODUCTION
Surgeons use knowledge, technical skill, compassion, and judgment to achieve the best outcomes for each patient. Ideally, compassion is an inborn characteristic that may evolve over time as surgeons learn to integrate knowledge, compassion, empathy, and communication into what is currently termed “patient-centered practice.” The most critical challenge facing surgeons in applying the principles of “patient-centered practice” is to use knowledge and judgment to inform patients and guide their choices toward the treatment that will achieve the best result. Achieving the best result may, at times, require advising against a surgical procedure. Effective communication with the patient and family members is a key element in teaching them about therapeutic choices and obtaining “buy-in” from both patient and family to the plan of treatment. Communication is a dynamic, not a static, process because patient perceptions and preferences change as the disease state changes and the treatment and its complications progress.
This chapter will review the sources of surgical knowledge, the techniques of evaluating information, methods for quantifying the reliability of available information, and ways of communicating the certainties and uncertainties inherent in applying knowledge to the individual patient. Like the technique of the concert violinist, technical skill in surgery improves with deliberate practice. New skills may be acquired and existing skills honed using simulation. The strengths and limitations of technical skill acquisition through simulation will also be reviewed.
Four main sources of surgical knowledge are easily available and are identified in Table 1.1. As we progress through this chapter, we will consider each of these sources in sequence.
LEARNING FROM PATIENTS
Patients and their families are rich sources of medical knowledge. They are the ones who experience the illness and the surgical treatment used to produce an improved state of health. Facilitating the transfer of knowledge from patient to surgeon can be challenging. Surgeons are well aware that the pain and suffering inflicted on the patient are often necessary to achieve the improved state of health. An editorial by Alpert1 in 2011 provides advice for the surgeon that will make the transfer of the patient's experience easier so that knowledge acquisition can occur. Most important for the surgeon is to practice effective listening, embracing attentiveness, concentration, and directed questions. During the transfer of knowledge from patient to surgeon, the surgeon will be a teacher as well as a learner. A time-honored axiom states that the best way to learn something is to teach it. When teaching patients about their illness and its treatment, surgeons have a unique opportunity to learn the patient's perspectives, values, cultural context, and needs. From these experiences, knowledge can be gained that will be useful in the care of future patients.
Many professional organizations have begun to emphasize the importance of effective communication between patients and healthcare professionals. The Committee on Health Care for Underserved Women and the Committee on Patient Safety and Quality Improvement of the American College of Obstetricians and Gynecologists produced an opinion statement on the elements of effective communication between patients and healthcare professionals.2 The committees recommend using the “partnership” model of communication, wherein the healthcare professional seeks to equalize the time spent speaking by both parties participating in the knowledge exchange. The committee statement goes on to emphasize the importance of several personal characteristics that help ensure high-quality information exchange and knowledge acquisition. These characteristics are comfort, acceptance, responsiveness, and empathy. Comfort and acceptance relate to the healthcare professional's ability to deal with difficult topics without becoming uneasy or disorganized. Responsiveness and empathy relate to the ability to detect and interpret subtle and indirect messages from the patient that may be transmitted by body language or indirect verbal messaging. An example contained in the committee opinion is helpful: a patient may be tearful but minimize the problem, saying, “I'm just having a bad day.” An effective approach would be for the healthcare professional to offer an opportunity to discuss issues leading to the “bad day.” The committee recommendations close with suggestions that healthcare professionals adopt patient-centered interviewing techniques, that patients be encouraged to write questions in advance of the information exchange, and that training in cultural sensitivity may be beneficial. A final recommendation is that electronic communication with patients be adopted in compliance with national guidelines from organizations such as the American Medical Association.
The adoption and practice of effective communication skills must be coupled with persistence. It is obvious to most practitioners that patient attitudes are dynamic, varying with changes in illness status, the development of complications of treatment, and numerous other factors. Being alert to these changes will help optimize information exchange and knowledge acquisition.
LEARNING FROM COLLEAGUES
Interactions between healthcare professionals can provide a meaningful opportunity for knowledge transfer and learning if specific measures are taken to ensure timely, accurate, and understandable information. This information can be in the form of conversation or narrative contained in the medical record. Other opportunities for professional interaction occur in the forms of educational presentations and seminars.
The characteristics of high-quality verbal exchanges between healthcare professionals are reviewed in the editorial by Alpert.1 Central to achieving accuracy of information transfer and maximum understanding of the meaning and importance of the conversation is a mutual attitude of respect between the participants. Alpert stresses that each participant must be willing to be attentive to the process of information transfer. Actions such as checking email on a smartphone should be avoided. If possible, interruptions for receiving telephone calls and answering pages should be minimized. The healthcare professional who wishes to provide information to a physician or surgeon should be the central focus of attention. Important parts of the information being transferred will be omitted if the person delivering the information perceives that the listener is not paying attention. It is also important for the listener to appreciate the level of importance that the person bringing the information places on it. The listener needs to transmit to that person, through direct or indirect means, that the listener believes the information to be important. One effective way to send this message is to look attentively at the person delivering the information.
Alpert stresses that effective listening is critical to understanding the questions being posed and the exact content needed by the requestor. The response to a question posed by a colleague or other healthcare professional should be framed to answer the question specifically, without unnecessary supplementary information. Inquiring of the questioner as to whether the response provided effectively answered the question is also an important component of the conversation.
Important medical information is presented and received by use of the medical record. For this mode of knowledge transfer to be used effectively, certain writing skills need to be used. Reviewing these skills, Simon3 emphasizes the importance of clarity when entries are made into the medical record. Good narrative writing will be useful to the surgeon when she/he returns to the medical record after a time interval. Simon notes that effective entries in the medical record can “bring the patient to life” when the record is opened for a future patient encounter. Effective and clear narrative is increasingly important because patients now have full access to their medical records. Patient access to medical records has benefits and hazards. The benefits are that patients can add perspective to their medical histories and correct errors. Hazards include the risk of causing fear and anxiety if the medical record contains frightening material that is presented without clarification or written without a balanced perspective. Several pieces of advice offered in the article include an admonition to “write with pride,” with narrative designed to present information clearly and with compassion. Writers are encouraged to “be personal,” using the patient's name frequently. It is useful to include direct quotes from the patient. As use of electronic medical records increases, it is important to resist the urge to “cut and paste” when constructing a narrative. One critical point to remember is to proofread notes. An approach that I find valuable is to read the narrative aloud. If it sounds clear and accurate when spoken, the narrative will probably be meaningful when read by the patient or another healthcare professional.
Surgical knowledge can be acquired during formal education presentations and seminars. During presentations by expert surgeons, it is important that surgeons receiving the information maximize their opportunity to receive it accurately by using effective listening. Direct questions to the presenter can be helpful to integrate the information into “real-world” practice situations. Alpert1 includes useful advice for presenters in his editorial. He emphasizes that slide presentations should not include too much detailed information on each slide. Four or five clearly stated points per slide, easy to read and free of excessive color or extraneous material, will help ensure that the message is transmitted with maximum clarity.
LEARNING FROM THE MEDICAL LITERATURE
There is current consensus that surgeons should use an “evidence-based” approach to clinical practice. Central to the successful implementation of evidence-based practice by surgeons and surgical teams is the ability to use information from the medical literature to improve patient outcomes. The principles of evidence-based practice include a hierarchy of research quality. The highest-quality research design is the prospective, randomized, double-blinded clinical trial. Randomized trials that evaluate surgical interventions are difficult and expensive to conduct. As a consequence, the evidence supporting most aspects of surgical practice cannot be considered strong. Conducting randomized trials in any area of medicine is challenging because of the difficulty in obtaining sufficient patient enrollment. Because results of medical treatments have improved over time, the selected endpoints of a research trial occur infrequently and treatment effects are often small in magnitude. This means that enrollment needs to be large, and many studies are concluded and published with suboptimal enrollment. This problem has been countered, to some extent, by the use of meta-analytic research techniques. Meta-analysis combines available research studies and uses specific statistical and categorization techniques to analyze combinations of studies. The result is a group of research subjects large enough to have the statistical power necessary to evaluate the impact of an intervention and to facilitate the introduction of the intervention into practice. In an exchange of views by Guyatt et al.4 on the utility of small trials that are later combined into meta-analytic studies, the authors emphasize the hazard of interpreting small trials that are underpowered. They also stress the importance of high-quality meta-analyses.
For a meta-analysis to be valuable it has to meet certain requirements. Ideally, the plan to conduct the meta-analysis should be published as a research protocol ahead of the analysis itself or registered on a Web site such as PROSPERO (www.crd.york.ac.uk/PROSPERO/). Also, meta-analyses should examine publications in languages other than English and evaluate unpublished data. Recommendations for the items to be included in a meta-analysis are found in the Preferred Reporting Items for Systematic Reviews and Meta-analyses (www.prisma-statement.org). Unfortunately, most publications that purport to be systematic reviews or meta-analyses fail to meet these requirements. In the era of “comparative effectiveness” research, the challenge is to prevent “metabias,” which is the biased interpretation that can result from the evaluation of a group of flawed meta-analyses.5 Despite the flaws in many published reports, meta-analyses and systematic reviews pertaining to surgical topics can, if published in a peer-reviewed surgical journal and reviewed carefully and critically by the reader, contribute meaningfully to the improvement of surgical practice. One particularly strong source of meta-analyses and systematic reviews that meet the proposed evidence quality requirements is the Cochrane Collaboration (www.cochrane.org). In the future, it is likely that carefully performed meta-analyses will greatly ameliorate the problem of small randomized trials. Thus, there is no reason to discourage the conduct of carefully structured small randomized trials.
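The statistical core of meta-analysis, inverse-variance pooling, can be sketched briefly. The trial values below are hypothetical, chosen only to illustrate the mechanics; a real meta-analysis would follow the PRISMA items above and assess between-study heterogeneity before trusting a pooled estimate.

```python
import math

# Hypothetical log risk ratios and standard errors from three small trials;
# none is statistically significant on its own (each 95% CI crosses zero).
studies = [
    (-0.40, 0.30),  # trial 1: log risk ratio, standard error
    (-0.35, 0.25),  # trial 2
    (-0.45, 0.40),  # trial 3
]

# Fixed-effect (inverse-variance) pooling: weight each study by 1/SE^2.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * lrr for (lrr, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Back-transform to a risk ratio with a 95% confidence interval.
rr = math.exp(pooled)
lo = math.exp(pooled - 1.96 * pooled_se)
hi = math.exp(pooled + 1.96 * pooled_se)
print(f"pooled RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Here the pooled interval excludes a risk ratio of 1.0 even though no individual trial reached significance, which is the statistical rationale for combining small trials.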
Another source of data to assist in the acquisition and application of surgical knowledge is observational research. High-quality observational, nonrandomized research studies can provide outcomes that are similar to high-quality randomized trials.6,7 The most valuable observational studies are cross-sectional analyses, cohort analyses, and case–control studies. Cross-sectional studies evaluate characteristics of a sample of research subjects from a population of interest at a single point in time. Cohort studies select a sample of subjects from a population of interest and follow these subjects over time to record changes in health status. Case–control studies identify subjects who have experienced the outcome of interest (cases), select comparable subjects who have not (controls), and look backward in time to compare the frequency of prior exposures or interventions in the two groups.
Unfortunately, high-quality observational research studies pertaining to surgical topics are small in number. In fact, most reports of surgical topics consist of retrospective case series, multicenter prospective nonrandomized trials, and literature reviews. Surgical textbooks and periodic literature reviews (such as the publication I edit) fall into the lowest category of medical literature evidence, namely expert opinion. Practicing surgeons need to evaluate retrospective case series and prospective single or multicenter trials carefully. If several retrospective case series from differing locations reach the same conclusion, and the conclusion reached harmonizes with the experience of a surgeon or group of surgeons, the evidence has a good chance of being dependable and valuable. Focused literature reviews can be valuable if all viewpoints are acknowledged and estimates of the quality of the evidence reviewed are included. To have accurate data from the practice of an individual surgeon so that a “reality check” of published data can be conducted, it is important that individual surgeons record clinical data from their patient care experiences. The Surgeon-Specific Registry sponsored by the American College of Surgeons is a convenient and confidential way to record these data and easily retrieve them for analysis. At the hospital level, accurate risk-adjusted data that can be used to document outcomes can be acquired by participation in the National Surgical Quality Improvement Project sponsored by the American College of Surgeons.
Successful use of the medical literature requires an understanding of the strengths and weaknesses of published research. One successful method for evaluating and applying information from the medical literature is found in the reference work by Guyatt.8 Several important pitfalls in interpretation of medical literature publications are presented in this book, and readers are encouraged to review them.
Caution is indicated when interpreting information published in the medical literature because of shortcomings in the design of published research studies, including misleading use of the term “statistical significance,” inadequate sample size, and bias. It is common for authors to suggest that a hypothesis is supported because of a “p value” of < .05 obtained from statistical analysis software; the implication of this interpretation of “statistical significance” is that there is at least a 95% chance that the null hypothesis (there was no benefit from the tested intervention) is false. In fact, a p value of < 0.05 means that, if the null hypothesis were true, results at least as extreme as those observed would be expected to occur by chance less than 5% of the time; it is not a statement of the probability that the null hypothesis itself is true or false. Statistical significance is easily demonstrated in large samples, but the observed results may not be clinically significant. For example, the use of a particular intervention may lead to a shortened recovery time as indicated by return to work in 5 days compared to 6 days in patients who did not have the intervention. In a sample of > 1,000 patients this outcome will be statistically significant, but the clinical relevance of a 1-day difference, given the varying practices of employers in permitting return to work, is questionable. Guyatt et al. emphasize that authors should express data in more detailed ways that make interpretation of the reported observations easier. Some of these practices include reports of risk reduction, number needed to treat, and 95% confidence intervals. Risk reduction is determined by analyzing the difference in the risk of an event (death, for example) in patients who did not have an intervention compared with patients who had the intervention. This approach is best applied to data from a prospective randomized controlled trial. The frequency of the event in question in the control group is considered the baseline risk.
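The recovery-time example can be made concrete. The sketch below assumes a standard deviation of 4 days in each group (an assumption, not a figure from any trial) and uses a normal approximation rather than a t-test to stay within the standard library; the point is that the same 1-day difference is nonsignificant in a small trial yet highly "significant" at a large sample size.

```python
import math

def two_sample_p(mean1, mean2, sd, n):
    """Two-sided p-value for a difference in means, using a normal
    approximation with equal SDs and equal group sizes for simplicity."""
    se = math.sqrt(2 * sd**2 / n)       # standard error of the difference
    z = abs(mean1 - mean2) / se
    return math.erfc(z / math.sqrt(2))  # two-sided tail probability

# Return to work at 5 vs 6 days, with an assumed SD of 4 days per group.
for n in (25, 1000):
    p = two_sample_p(5.0, 6.0, 4.0, n)
    print(f"n = {n:4d} per group: p = {p:.4f}")
```

The 1-day difference is identical in both runs; only the sample size changes, so the p value alone cannot tell the reader whether the effect matters clinically.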
Assuming that the frequency of the event in the treatment group is reduced compared with the control group, the potential benefit in the treated patients can be expressed in various ways. Risk reduction is frequently reported as absolute risk reduction, relative risk reduction, or odds ratios. Each reporting method is acceptable, but the differences between them can be important. The absolute risk reduction is the difference in the frequency of the event between the control and treated groups. The relative risk reduction is the proportion of the baseline frequency of the unwanted event that is removed when the treatment is applied. Odds express the probability of an event occurring divided by the probability of it not occurring; an odds ratio compares the odds of the event between the two groups. Relative measures, including odds ratios, should be interpreted with caution because the same relative effect implies very different absolute benefits at different baseline risks. Guyatt et al. present an example of a treatment that reduces the relative risk of an event by one-third in each of two groups of patients. If the baseline risk of the event is 30%, the treatment reduces the risk to 20%. If the baseline risk is 1%, the risk is reduced to 0.67%. In the group with the lowest baseline risk (1%), the absolute benefit is so small (0.33%) that a treatment with a 10% or greater risk of a disabling complication would probably not be offered.
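The relationships among these measures can be verified with simple arithmetic. The counts below are hypothetical, chosen to mirror the 30%-versus-20% example above.

```python
# Hypothetical trial arms of 100 patients each:
# 30 events among controls (30% baseline risk), 20 among treated (20%).
control_events, control_n = 30, 100
treated_events, treated_n = 20, 100

control_risk = control_events / control_n
treated_risk = treated_events / treated_n

arr = control_risk - treated_risk         # absolute risk reduction: 0.10
rrr = arr / control_risk                  # relative risk reduction: 0.33
odds_control = control_risk / (1 - control_risk)
odds_treated = treated_risk / (1 - treated_risk)
odds_ratio = odds_treated / odds_control  # 0.58 -- overstates the effect

print(f"ARR = {arr:.2f}, RRR = {rrr:.2f}, OR = {odds_ratio:.2f}")
```

Note that the odds ratio (0.58) is smaller than the risk ratio (0.67) because the event is common; at a 1% baseline risk the two measures would nearly coincide, which is why odds ratios demand extra care when events are frequent.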
Another way of interpreting data about a potentially beneficial treatment is to estimate the number of patients who would need to receive the treatment to prevent one occurrence of the event in question (death, for example). If the difference in the risk of death with treatment compared with the risk without treatment is, for example, 20%, then 20 events would be avoided in every 100 patients treated. The number needed to treat to avoid one event is 100 divided by 20, or 5 patients. A final way of expressing data about treatment effectiveness is the use of 95% confidence intervals. These intervals are calculated to provide an estimate of the range of values within which the true benefit of treatment is likely to lie. Trials with larger samples will, in general, have narrower 95% confidence intervals. If confidence intervals are wide, and especially if the interval crosses zero (the lower boundary negative and the upper boundary positive), the existence of any treatment effect is uncertain. Confidence intervals can facilitate application of research data to clinical situations. If the lower bound of the confidence interval indicates a treatment benefit that exceeds the minimum acceptable benefit from the patient's perspective, then a treatment would be chosen. An example of a surgical prospective randomized trial where confidence intervals were helpful in confirming benefit was the North American Symptomatic Carotid Endarterectomy Trial reported in 1998.9 This trial showed a stroke risk reduction benefit of 22% (p < .05) with a confidence interval of 7%–52%. This finding indicates that the lowest stroke risk reduction that could be expected from a successful carotid endarterectomy in a symptomatic patient is 7%, with a possible maximum risk reduction of 52%. The risk of perioperative stroke in this study was 2.1% at 90 days after operation and 6.7% at 8 years.
Calculation of the number needed to treat indicated that 15 patients would have to undergo carotid endarterectomy to prevent one stroke. This study had > 99% long-term follow-up, which strengthened the evidence of benefit considerably. Another randomized prospective trial, comparing three methods for control of methicillin-resistant Staphylococcus aureus colonization in critically ill patients, reported confidence intervals for risk reduction for bloodstream infection using universal decolonization measures; the intervention reduced the risk of bloodstream infection by 44%.10 The confidence interval calculation indicated that the lowest estimated risk reduction was 35%, with the highest being 51%. Even if the actual risk reduction was 35%, this finding would represent a significant improvement in control of this prevalent infection in critically ill patients. Confidence intervals are also useful in supporting a decision not to use a treatment. Guyatt et al.4 emphasize that evaluation of the lower bound of the confidence interval may assist decisions about whether or not to use a treatment. If the lower bound represents only a minimal reduction in risk, then a toxic or expensive treatment would likely be rejected.
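The number-needed-to-treat arithmetic described above, together with confidence bounds, can be sketched as follows. The event rates and sample sizes are illustrative, not data from either published trial, and the Wald interval used is a textbook approximation.

```python
import math

def nnt_with_ci(risk_control, risk_treated, n_control, n_treated):
    """Number needed to treat with 95% bounds, derived by inverting a Wald
    confidence interval for the absolute risk reduction (an approximation,
    valid here because the interval excludes zero)."""
    arr = risk_control - risk_treated
    se = math.sqrt(
        risk_control * (1 - risk_control) / n_control
        + risk_treated * (1 - risk_treated) / n_treated
    )
    ci_low, ci_high = arr - 1.96 * se, arr + 1.96 * se
    return 1 / arr, 1 / ci_high, 1 / ci_low  # point estimate, best, worst

# Illustrative numbers: 30% vs 10% event rates with 200 patients per arm,
# i.e., an absolute risk reduction of 20% and therefore an NNT of 5.
nnt, nnt_best, nnt_worst = nnt_with_ci(0.30, 0.10, 200, 200)
print(f"NNT = {nnt:.0f} (95% CI {nnt_best:.1f} to {nnt_worst:.1f})")
```

Reporting the bounds alongside the point estimate mirrors the confidence-interval reasoning above: even in the worst case consistent with these data, about 8 patients would need treatment to prevent one event.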
It is tempting to believe that large samples provide “better” data than small samples. Interpretation of data from large sample studies such as multicenter trials, and use of data from administrative databases, should take into consideration several important factors. In large sample patient studies, especially retrospective studies, data concerning time of enrollment in the study and follow-up intervals should be reported. Patients enrolled in a trial over a prolonged interval may differ depending on time of enrollment because of changes in the baseline risk of the event that have occurred over time with the changing health status of the population being analyzed or the use of more recent adjunctive treatments. Data reported from large administrative databases can be unreliable because of missing values or because the data were gathered for a reason that does not relate to the research question being examined. For example, data from hospital billing records are influenced by the behavior of billing personnel. If the billing data are expressed with a view toward maximizing reimbursement, then disease descriptor data points may not reflect actual patient characteristics. Administrative data that are rigorously controlled for accuracy of patient characteristics and complete data entry (for example, the National Surgical Quality Improvement database) can provide meaningful clinical information. A review of some of the hazards of using administrative datasets is provided in an article by Ioannidis.11 This editorial reviews in detail the sources of error that can be encountered when using data from administrative datasets. Ioannidis notes that analyses from large administrative datasets are so highly powered that tiny, clinically insignificant differences can emerge as “statistically significant” given the large numbers of subjects involved.
Ioannidis suggests inclusion of “falsification endpoints,” that is, analyses of associations that are known to be null. The use of this technique is reviewed in detail by Prasad and Jena,12 who give an example of the use of falsification endpoints. In an analysis of the use of proton pump inhibitors in a large patient database, a falsification endpoint might be to test for an association of these drugs with an outcome known not to be related, such as an increased risk of soft tissue infection. If such an association proves statistically significant, then the other significant associations in the analysis may be undependable. Other sources of error include misclassification of variables, erroneous modeling of variables, and unmeasured variables. Finally, identifying modifiable characteristics from administrative data is extremely challenging.
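The logic of a falsification endpoint rests on a simple expectation: in a sound analysis, truly null associations should reach "significance" only about 5% of the time. The simulation below is an illustration under assumed parameters (a 5% outcome rate, 10,000 subjects per group), not a reanalysis of any real dataset.

```python
import math
import random

random.seed(42)

def two_prop_p(e1, n1, e2, n2):
    """Two-sided p-value for a difference in proportions (normal approx.)."""
    p1, p2 = e1 / n1, e2 / n2
    pooled = (e1 + e2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    if se == 0:
        return 1.0
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))

# Simulate 200 truly null associations in a large "administrative" sample:
# exposure and outcome are independent, 10,000 subjects in each group.
n, base_rate, trials, hits = 10_000, 0.05, 200, 0
for _ in range(trials):
    e1 = sum(random.random() < base_rate for _ in range(n))
    e2 = sum(random.random() < base_rate for _ in range(n))
    if two_prop_p(e1, n, e2, n) < 0.05:
        hits += 1
print(f"{hits}/{trials} null associations reached p < 0.05")
```

Roughly 5% of the null associations clear the p < 0.05 bar by chance alone; a pipeline applied to real administrative data that returns substantially more "significant" null associations than this points to a flawed analytic method rather than a biological signal.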
A major challenge to the use of administrative datasets in surgical research is the problem of confounding variables. The impact of variables such as socioeconomic status, nonprescription drug use, or the direction of a certain type of patient to one or another healthcare institution on the basis of diagnosis cannot be adjusted for directly in the administrative dataset. Two statistical techniques for addressing confounding are instrumental variables, which can help account for unmeasured confounders, and propensity scores, which balance measured characteristics between comparison groups.13,14 An example of this practice is research studies that evaluate the effectiveness of trauma systems.15 In trauma systems, severely injured patients are preferentially directed to designated centers. This factor produces a mortality bias in favor of nontrauma center hospitals, making comparisons of outcomes between trauma centers and nontrauma center hospitals difficult. Adjusting for such confounders is possible using one of these approaches. A detailed discussion of these techniques is beyond the scope of this chapter; interested readers are referred to the articles cited.
Guyatt8 notes that bias can influence the results of medical research at several points during the conduct of a study. Biased results will be obtained if there are differences in the control and experimental groups at the beginning of a study. Actions to reduce the impact of this form of bias include randomizing patients or conducting preintervention matching of pairs of patients based on clearly defined characteristics. Surgical research studies are frequently reported using “before and after” research models. In this type of study, the intervention group is compared with an historical control group. Bias is a major hazard with this type of study design. Although the two groups may be similar in terms of demographic data, important differences will often go undetected.
During the conduct of a study, bias can emerge because of placebo effects (a particular problem in studies where the results are reported by the patient), because some patients may have differing compliance characteristics, or because some receive cointerventions that others do not. These factors produce bias in assessment of outcomes. Ideally, such bias can be countered by “blinding” of patients, clinicians, data collectors, and other participants to the treatment applied. Blinding is a challenge in surgical research studies because of the difficulty in concealing the fact that the patient had an operation.
At the end of the research study bias can be introduced if significant numbers of patients are lost to follow-up, if the study is ended prematurely because of perceived increased frequency of adverse effects or because of an unexpectedly large treatment effect emerging in a small number of patients, and if patients “cross over” from the control group to the treatment group. This is a particular problem in trials evaluating surgical procedures because patients’ and caregivers’ desires to have the surgical procedure may increase during the conduct of the study. This problem is remedied by the use of “intent to treat” analysis, which requires that each patient be counted in the group to which that patient was originally assigned. Bias can affect the results of surgical research in several ways. One of the most common is bias introduced because of reporting of retrospective data from the experience of a single surgeon or single institution. Another common form of bias is selection bias. Excellent results of using surgical procedures or medical treatments can be obtained if patients with minimal risk for adverse outcomes are reported in a clinical research article and moderate to high-risk patients are excluded. This type of bias would obviously tend to overestimate the benefit of a procedure or treatment.
Bias can also be introduced by the use of composite endpoints. Because most research on medical treatments produces small benefits of one treatment versus another, there is interest in attempting to obtain results that show statistical strength. Composite endpoints that are each interpreted as “beneficial” can, when combined, increase the number of patients in whom benefit is observed. Precautions that need to be taken when interpreting composite endpoints are reviewed by Guyatt et al.8 They note that the composite endpoint should consist of outcomes that are important to patients and that the frequency with which each endpoint occurred should be reported in the results section of the research article. Uncertainty as to the quality of a treatment benefit can occur when there is imbalance in the frequency of each of the composite endpoints. Consider a research study of a surgical intervention that uses a composite endpoint of death within 90 days of operation, readmission to the hospital, and quality of life at 6 months after the operative procedure. If mortality risk and quality of life scores are equivalent in the two comparison groups, but the surgical intervention is associated with fewer readmissions to the hospital, it would be difficult to conclude that a benefit in terms of outcomes important to the patient was achieved.
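The composite-endpoint scenario above can be illustrated with hypothetical frequencies:

```python
# Hypothetical component frequencies per 100 patients in each arm.
control = {"death at 90 days": 5, "readmission": 20, "poor QoL at 6 months": 15}
treated = {"death at 90 days": 5, "readmission": 12, "poor QoL at 6 months": 15}

# Upper-bound composite rate, assuming no patient has more than one event;
# real trials count each patient once, so overlap only lowers these totals.
composite_control = sum(control.values())  # 40 per 100
composite_treated = sum(treated.values())  # 32 per 100

print(f"composite: {composite_control} vs {composite_treated} per 100 patients")
for endpoint in control:
    print(f"  {endpoint}: {control[endpoint]} vs {treated[endpoint]}")
# The entire 8-point composite difference comes from readmissions; death and
# quality of life, the outcomes patients value most, are unchanged.
```

Reporting each component alongside the composite, as recommended above, makes this imbalance immediately visible to the reader.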
Obviously, the most insidious and dishonest form of bias is the deliberate withholding or alteration of data. This has been a serious problem in the production of pharmaceutical research data. A review of the extent and depth of this form of biased research is found in an article by Washington.16 The wrongdoing cited in that article stimulated the increased use of research study registries, where the names of investigators, study protocols, and the data from the research are stored and available for review. The obvious limitation of registries is that many, if not most, ongoing studies are not registered.
A final way of acquiring surgical knowledge from the medical literature is through the use of practice guidelines. Practice guidelines are produced by national professional organizations and are intended to provide practical guidance to physicians and surgeons striving to practice on the basis of the highest quality of available evidence. Practice guidelines usually include detailed explanations of their evidentiary basis and the degree of consensus achieved by the group producing them. The strength of the supporting evidence is assessed and presented in the guidelines document. Entities such as the United States Preventive Services Task Force, the American College of Chest Physicians, and the Oxford Centre for Evidence-Based Medicine have produced manuals that provide rules and practices for grading the strength of evidence and producing practice guidelines. The production of practice guidelines is labor intensive and expensive, and because additional scientific evidence is generated continuously as medical science evolves, the sponsoring organization must commit to updating and revising the guidelines periodically. Some of the problems encountered when trying to ascertain the quality and consistency of practice guidelines are reviewed in an article by Kavanagh.17 The author notes that one set of practice guidelines, the International Guidelines for the Management of Severe Sepsis and Septic Shock,18 has been broadly adopted and graded as high quality. Close examination of the guidelines documents produced in 2004 and 2008 shows that at least one recommendation presented with a low evidence grade in the earlier document was repeated, with the same supporting data, in the 2008 version with a high evidence grade. The author concludes that current approaches to assigning quality grades to practice guidelines need to be improved.
An additional concern with the application of practice guidelines is a possible conflict between the guidelines and the standards of patient-centered practice. These issues are reviewed in an article by Goldberger and Buxton.19 The authors note that a pitfall in the interpretation of data supporting a practice guideline can occur if subgroup analyses in a clinical trial identify a group of patients in whom an intervention is not effective, yet this group is included as eligible for treatment according to the practice guideline. The authors cite data from one randomized clinical trial that identified nonbenefit in 19% of enrolled patients, and these patients could be identified using clinical characteristics. Despite the fact that no benefit occurred, using the intervention in such patients was supported by the guideline. The authors emphasize that, ideally, this patient subgroup should be included in the guidelines only after additional prospective research focusing on patients with the characteristics of the subgroup; in the instance cited, further trial research was not done. Practitioners seeking to practice personalized, patient-centered medicine would, on the basis of the evidence, withhold the intervention from the patients in whom no benefit was identified, but these practitioners would then be guilty of noncompliance with guideline-based practice. A potential conflict therefore exists. The authors recommend that practitioners remember that guidelines (and trial data) refer to groups of patients, not individual patients. Guidelines are intended to provide suggestions for clinical practice and should not be viewed as rules.
The question that remains for practitioners is: how do I apply data and guidelines in my practice if I don't have time to review all of the data that were used to support the guidelines? There is no single answer. Surgeons will need to assimilate guidelines and data from the literature into their personal body of knowledge and apply this information carefully to their patients. Maintaining a personal practice database containing information on patient outcomes will help provide a real-world view of the effects of interventions in an individual surgeon's practice. Amalgamating surgeon-specific and institutional data on risk-adjusted patient outcomes will further guide the use of knowledge gleaned from the literature.
LEARNING FROM EXPERIENCE AND FROM USING SIMULATION
A time-honored axiom of surgical practice is that “good judgment comes from experience, and experience comes from bad judgment.” Research into learning from experience has shown that expertise does not increase linearly with the time spent in medical practice. Current thought regarding the acquisition and maintenance of expertise over time is reviewed by Ericsson.20 He notes that the fundamental tenets of the roles of genetics and deliberate practice in the acquisition of expertise were established by the work of Galton in the 19th century. Subsequent research cited by Ericsson has shown that expertise does not simply increase over time but actually decays in the absence of deliberate, repetitive practice. Ericsson notes that early learners tend to concentrate on achieving a level of expertise that permits performance of the desired task without errors. For activities such as typing, playing golf, and driving an automobile, approximately 50 hours of practice are necessary to reach this level. Task performance then becomes automated. During the automation period, the subject is generally unable to make conscious changes in the way the task is performed, and expertise reaches a plateau that does not rise with time. Ericsson notes that the main difference in the acquisition of medical expertise is its time course. Once a level of medical expertise is attained (at the end of residency training, for example), further improvement is not seen without deliberate practice. The author cites research documenting the increase in expertise of a specialist who sees many patients with the same or similar diagnosis, or a surgeon who performs a small number of operations repeatedly, compared with a practitioner in a general community practice who encounters a variety of patients or performs a variety of procedures.
Ericsson notes that the introduction of a new set of surgical techniques, such as laparoscopic surgery, produced a “field experiment” that documented the value of deliberate practice in the acquisition of expertise.
The use of surgical simulators has produced an environment in which deliberate practice can facilitate the acquisition of expertise. This topic is the focus of a report by Crochet et al.,21 who analyzed skill acquisition after a training course on a virtual reality laparoscopic simulator followed by an interval of deliberate practice. Acquisition of expertise was measured by the time required to complete a set of tasks and the number of movements needed to complete each subtask, recorded sequentially over the course of several practice sessions. Twenty-two candidates were evaluated. One group was assigned deliberate practice activities between virtual reality simulator sessions, whereas the control group did not practice. The analysis showed that the deliberate practice group began to improve in the time and movements required after the 10th virtual reality session, and improvement continued from that point onward. These results confirmed the value of deliberate practice. Deliberate practice has also been associated with improved performance in clinical situations, as shown in a study by Stefanidis et al.22 The authors randomly assigned subjects to a deliberate practice group or a traditional training group. The task was to perform the suturing maneuvers necessary to complete a laparoscopic Nissen fundoplication. The deliberate practice group used the suturing simulation exercise contained within the Fundamentals of Laparoscopic Surgery course sponsored by the American College of Surgeons, supplemented by training on a porcine model of laparoscopic Nissen fundoplication. The intervention group practiced until “automaticity” had been achieved, as judged by suturing accuracy and the time required to complete a secondary task.
Compared with the conventional training group, the deliberate practice group achieved significantly better suture task completion scores during performance of a laparoscopic Nissen fundoplication in a second porcine model. The data in these two articles support the conclusion that deliberate practice using simulation can improve performance of defined skills tasks.
One distinct advantage of simulation in aviation has been the ability to train pilots in the management of emergency situations. Ericsson20 cites data from military experience confirming improved outcomes in actual emergencies when pilots had undergone simulator training for that emergency. Whether skills for the management of surgical emergencies can be acquired through simulator training has been a source of ongoing discussion. One effort to use simulated emergency surgical situations to train surgeons in the care of life-threatening operative emergencies is the Advanced Trauma Operative Management course sponsored by the Committee on Trauma of the American College of Surgeons. In this course, problems of bleeding and operative exposure actually encountered during the care of injured patients are simulated in a porcine model. Participants are coached by experienced instructors through encounters with operative problems such as retrohepatic bleeding, inferior vena cava laceration, and penetrating cardiac injury. Assessments of candidate perceptions have consistently shown improved understanding of the management of operative emergency situations.23 Ali et al.24 described the implementation of the Advanced Trauma Operative Management course in a surgical residency program. Analysis of questionnaire responses and instructor evaluations showed that participants improved significantly in their knowledge of important technical maneuvers for the care of life-threatening emergencies in injured patients. Instructors documented improved performance by participants with repeated simulations, and participants expressed satisfaction with the levels of knowledge acquired during the course.
A final study of the use of simulation to improve responses to emergency events in the operating room is described in an article by Acero et al.25 The authors conducted simulator training for an exsanguination emergency in the operating room. A mannequin simulator was used to reproduce the signs and symptoms that would be encountered in a pregnant patient with a carotid artery laceration. Specific instruction in the conduct of care was given to 171 operating room staff members, who were divided into teams. The team members then practiced resuscitation and advanced cardiac life support interventions on the mannequin. At baseline, 50% of participants believed they knew the appropriate roles and interventions for successful management of the simulated patient; at the close of the instructional period and practice sessions, this proportion had risen to 98%.
The available data indicate that simulation training is an effective means of providing the deliberate practice that leads to improved performance of structured tasks, and it offers surgeons a viable way to build and maintain expertise. What remains is to conduct research confirming that the acquired expertise translates into improved clinical outcomes.
CONCLUSION
Teachers of surgery often say: “the more you know, the easier the practice of surgery is.” While this is true, digesting the knowledge gained from the sources discussed in this chapter will continue to challenge surgeons; the effort, however, will supply the intellectual stimulation that makes the practice of surgery rewarding for the entirety of the surgeon's professional life.
REFERENCES
- Alpert JS. Some simple rules for effective communication in clinical teaching and practice environments. Am J Med. 2011;124(5):381–2.
- American College of Obstetricians and Gynecologists Committee on Health Care for Underserved Women: Committee on Patient Safety and Quality Improvement. ACOG Committee Opinion No. 492: Effective patient-physician communication. Obstet Gynecol. 2011;117(5):1254–7.
- Guyatt GH, Mills EJ, Elbourne D. In the era of systematic reviews, does the size of an individual trial still matter? PLoS Med. 2008;5(1):e4.
- Goodman S, Dickersin K. Metabias: a challenge for comparative effectiveness research. Ann Intern Med. 2011;155(1):61–2.
- Concato J, Horwitz RI. Beyond randomised versus observational studies. Lancet. 2004;363(9422):1660–1.
- Concato J, Lawler EV, Lew RA, et al. Observational methods in comparative effectiveness research. Am J Med. 2010;123(12 Suppl 1):e16–23.
- Guyatt GH. Users' Guides to the Medical Literature. McGraw-Hill Medical; Chicago, Illinois: 2008.
- Barnett HJ, Taylor DW, Eliasziw M, et al. Benefit of carotid endarterectomy in patients with symptomatic moderate or severe stenosis. North American Symptomatic Carotid Endarterectomy Trial Collaborators. N Engl J Med. 1998;339(20):1415–25.
- Huang SS, Septimus E, Kleinman K, et al. Targeted versus Universal Decolonization to Prevent ICU Infection. N Engl J Med. 2013;368(24):2255–65.
- Ioannidis JP. Are mortality differences detected by administrative data reliable and actionable? JAMA. 2013;309(13): 1410–1.
- Prasad V, Jena AB. Prespecified falsification end points: can they validate true observational associations? JAMA. 2013;309(3):241–2.
- Durham R, Pracht E, Orban B, et al. Evaluation of a mature trauma system. Ann Surg. 2006;243(6):775–83; discussion 83–5.
- Martens EP, Pestman WR, de Boer A, et al. Instrumental variables: application and limitations. Epidemiology. 2006;17(3):260–7.
- Joffe MM, Rosenbaum PR. Invited commentary: propensity scores. Am J Epidemiol. 1999;150(4):327–33.
- Washington H. Flacking for big Pharma. Am Sch. 2011. Available at: http://theamericanscholar.org. [Accessed May 19, 2013].
- Kavanagh BP. The GRADE system for rating clinical guidelines. PLoS Med. 2009;6(9):e1000094.
- Dellinger RP, Levy MM, Rhodes A, et al. Surviving Sepsis Campaign: international guidelines for management of severe sepsis and septic shock, 2012. Intensive Care Med. 2013;39(2):165–228.
- Goldb