Validity in a surgical article (as in most scientific articles) can be categorised as internal validity, the accuracy of the conclusions about the effects of an intervention on a given group of subjects under the specific circumstances of the study design, and external validity, the applicability of the study findings, that is, the extent to which a clinician can rely on them when applying the research to patients in real-world practice beyond the circumstances of the study design.1 Validity is compromised by bias (systematic error), defined in the dictionary of the Spanish Royal Academy2 as the "systematic error that may occur when sampling or testing selects or favours some responses over others". Biases in a study can be "methodological" (arising in undertaking the study), occurring during data collection, analysis, interpretation, or review, before, during, or after the study is conducted,1 or "cognitive" (arising in applying the results), i.e., barriers intrinsic to human nature, and therefore to surgeons, that affect the interpretation and incorporation of evidence into decision making.3
“Methodological” biases

The following list is not exhaustive and, in our opinion, covers the most important of these biases:
Before the study is conducted, these can include: selection bias, when patients with different characteristics that potentially influence the outcome are chosen, the remedy for which is randomisation; allocation bias, when individual prognostic factors influence the allocation of an intervention, remedied by stratification; ambiguous or poorly defined protocols, remedied by a well-defined protocol and its registration in databases such as ClinicalTrials.gov4; and surrogate endpoints that may not correlate with clinically useful endpoints, remedied by ensuring that the surrogate endpoints correlate appropriately with clinically useful endpoints.
While the study is being conducted, these can include: outcome detection bias, through non-uniform measurement of outcomes, remedied by standardisation (uniformity) of measurements or by blinding; outcome ascertainment bias, with distorted ascertainment of important elements of the study, which may be remedied by blinding or pre-randomisation; follow-up bias, with differences in follow-up between groups, remedied by homogenising follow-up; and non-uniformity in the interventions performed, which may be remedied by stratification, for example by surgical experience, institution, or surgeon.
After the study has been conducted, perhaps the best known is publication bias, the tendency to publish only studies with positive results, although others exist, such as duplicate publication or incomplete reporting of the study; remedies include registering the study and following established reporting standards, such as the CONSORT statement for randomised studies.4,5
The extent to which different study designs control the above-mentioned methodological biases gives rise to the classic pyramid, or hierarchy of evidence, which essentially represents the progression (from the bottom to the top) in the strength of confidence conferred by different study designs.
“Cognitive” biases

A two-route model of decision-making has been described:3 the analytical (reason) and the intuitive (heuristic). When surgeons judge situations, make decisions, and solve problems they, like any other human beings, primarily use mental shortcuts, i.e., the intuitive route. They draw on tacit knowledge based on experience, on "exchange" with other surgeons, and on reading or watching videos made by opinion leaders or experts in a given technique. These cerebral shortcuts are essential for daily practice, as they allow a large amount of information to be processed in a short time and under pressure. The surgeon must therefore make frequent use of these shortcuts, although this brings with it a significant number of cognitive biases that hinder the practice of evidence-based surgery. We list the most characteristic of these below:
Action bias or the tendency to favour “action” over “inaction”. Action is motivated by the wish to avoid regretting a missed opportunity, for example, not performing a surgical procedure or performing it late, not ordering a diagnostic test, or not prescribing an antibiotic. This type of bias can lead to overuse of certain surgical procedures (overtreatment).
Omission bias or the tendency to favour “inaction” in order to avoid mistakes or failure. This bias arises whenever either an omission or an action is likely to cause harm; in that situation, omission may be chosen because the harm it causes may subjectively appear to be less.
Status quo bias or the preference for the current state of affairs, which can be explained as an aversion to loss. This bias may contribute to the surgeon's inertia, leading them to continue using the same technique when alternatives with better efficacy exist, or not to discontinue procedures that have not been shown to be effective.
Halo effect bias or the tendency to define a person’s overall image based on one of their features. If a surgeon is a virtuoso in a particular technique, it is assumed that they will also excel at judging when that technique is indicated, or that they will be ethical in their professional conduct.
Confirmation bias or the tendency to use only information (analytical data, imaging, evidence from studies) that confirms the surgeon’s pre-existing beliefs and conforms to a prior preconception or hypothesis, while ignoring data that run counter to that conception.
Availability bias or the tendency to overestimate the likelihood of events when they readily come to the surgeon’s mind, for example, a recent case that presented in a similar way and had a complicated or fatal outcome.
Framing bias. This may cause the surgeon to make decisions based on the context and presentation of an option. For example, when the results of a study are shown to them as an absolute effect (absolute risk reduction) or as a relative effect (relative risk reduction), they will interpret the information differently and then also make the decision differently.
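A hypothetical numerical illustration (figures chosen only for clarity, not drawn from any study) makes this contrast concrete: if a complication occurs in 2% of patients with the standard technique and in 1% with a new technique, the absolute risk reduction is 2% − 1% = 1 percentage point (one complication avoided for every 100 patients operated on), whereas the relative risk reduction is (2% − 1%)/2% = 50%. Framed as a “50% reduction”, the same result sounds far more impressive than “one fewer complication per 100 patients”, and may therefore lead to a different decision.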
Optimism bias or the belief that whatever is new is better.
Rhetoric bias, which refers to persuading the surgeon through an argument they read or hear, without making use of quality evidence. It may not be uncommon for surgeons to cite evidence without an actual in-depth analysis of its quality. The rhetoric used is persuasive and may suggest a relevant effect without solid evidence to support it. This bias may operate for or against an intervention or surgical technique, and the arguments offered are opinions, beliefs, or experiences rather than evidence.
Hot topic bias. When a topic is in vogue (“hot”), researchers (and editors) may be less critical of the research protocol and its execution.
Conflict of interest bias, when the surgeon’s opinion on a matter (a surgical technique, the validity of a piece of research) may be influenced by an extraneous secondary interest (financial, personal advancement, etc.).
Corollary

It is 25 years since the Lancet published a controversial editorial entitled “Surgical research or comic opera: questions, but few answers”6 on surgical publications and research, in which the title spoke for itself. This editorial can obviously be contested from different perspectives; however, 25 years later we must recognise as a surgical community that we need to improve continuously, on the one hand, the quality of our publications by limiting methodological biases, and on the other, the implementation of the best available evidence by identifying, reviewing, and controlling the cognitive biases that may hamper it. Perhaps the only way to achieve this is with adequate training in these issues during the undergraduate degree and continuous updating throughout specialist practice.