The provision of health care that is based on clinical practice guidelines is the result of a process that should include these five stages: design, development, dissemination, implementation and analysis of impact.1-3 Saura et al4 have studied the quality of clinical practice guidelines with a set of criteria that assess elements of the design phase. The results of their study indicate that the best clinical practice guidelines were those designed by multidisciplinary groups, at teaching centers, within the context of a specific health program, dealing with a chronic health problem, and linked with services regularly provided by the center.
Now that the findings are in hand, we should ask ourselves whether these criteria are appropriate for evaluating the quality of guidelines. It seems to me that they are necessary but not sufficient.
Firstly, an evaluation of clinical guidelines needs to distinguish between this type of instrument, intended as an aid to clinical decision-making, and similar documents such as protocols, algorithms and referral pathways. This distinction avoids the so-called Hepburn bias, in which some people confuse Katharine Hepburn with Audrey Hepburn because the two have a certain feature (the surname, in this case) in common. Thus the process used to select guidelines for study should include those documents that fulfil the criteria of multidisciplinarity, scientific accreditation and, possibly, inclusion as part of a structured program of guideline design.1 These criteria would make it possible to distinguish between clinical practice guidelines and other types of recommendations of lower quality.
Secondly, the evaluation of guidelines needs a wider set of criteria that take into account the different stages noted above. Generic evaluation instruments for this purpose have been validated, including the criteria proposed by the AGREE collaboration (www.agreecollaboration.org). This wider-ranging evaluation makes it possible to avoid confusing form with content, and process with outcomes. These two sources of confusion are the result of considering a guideline appropriate for good clinical practice (content) or for producing improvements in the quality of care (outcome) simply because it is well designed (form-process).
To avoid confusion between form and content, criteria based on scientific evidence should be used to judge the appropriateness of the recommendations in the guidelines for the clinical condition to be treated. In this connection, a clinical practice guideline can include multiple recommendations, some based on high-quality scientific evidence, and others based on low-quality evidence.2 Preventing confusion between process and outcomes should involve an evaluation of the impact of clinical practice guidelines on improvements in health care. Thus, the value of the guidelines for improving clinical practice in terms of effectiveness and efficiency would ideally be assessed with instruments specifically designed for this purpose. In other words, guidelines should be examined with the aim of determining whether they are cost-effective. These issues cannot be resolved with evaluation criteria for structural quality, as such factors restrict the evaluation to aspects related with form and the design process.
Whether guidelines have the hoped-for impact on improvements in health care depends to a large extent on how the last three stages of the process--dissemination, implementation and impact analysis--are carried out. It is not enough to have good clinical practice guidelines; they must reach the professionals who will use them, and these practitioners must understand them. This is where the risk of the third type of confusion listed above (the so-called odds confusion)--attributing clinical relevance to results that may have statistical significance but lack clinical significance--comes into play.
Dissemination of the guidelines is favored by the progressive implementation of information and communication technologies in health care. Of special note are the existing databases specifically for clinical guidelines (www.fbjoseplaporte.org), which allow the texts to be consulted from anywhere. In addition to dissemination of the guidelines, the need to design strategies to promote their implementation is evident. Implementation constitutes the true Achilles' heel of the process, as simply having guidelines is not enough: they must also be used so that they can fulfil their mission of helping to improve the quality of care. In this connection, the fact that good-quality guidelines tended to deal with problems that are covered by services normally offered by a given center4 may be related with the presence of incentives that favor the use of guidelines. A good implementation process involves appropriate training for professionals in the interpretation, evaluation, adaptation and utilization of clinical practice guidelines, as well as the development of incentives that favor their use. The absence of such incentives may encourage defensive medicine strategies and increase variability in clinical practice. Thus suitable strategies must be designed to create incentives for the appropriate use of guidelines as instruments to improve clinical practice.5,6
The process of developing clinical practice guidelines, which includes all stages from design to implementation, is expensive. It is therefore inefficient to create multiple guidelines for the management of the same clinical condition, a phenomenon known as guidelinemania. The future development of guidelines should aim not to create additional guidelines, but to evaluate existing ones and adapt them, with suitable protocols, to specific health care contexts.3 Moreover, research on clinical practice guidelines should take into account issues related with patients' preferences, and the costs to health care budgets involved in their design. In this connection it should be recalled that clinical practice guidelines are simply instruments designed to aid decision-making, and that scientific evidence is only one of many factors that influence clinical decision-making. Thus, in addition to scientific evidence, the evidence from the clinical case at hand needs to be considered, which means that the clinical practice guideline must be adapted to the specific clinical history. The physician's role is essential to ensure that the guidelines are applied correctly to each individual patient's case. This is why good physicians, as well as good guidelines, are needed.
The results of the study that appears in this issue of Atención Primaria4 illustrate the differences in quality of clinical practice guidelines as evaluated with a small set of structural criteria. Of note is the lack of data on how (or whether) the guidelines were updated, updating being one of the prime features of such documents. As scientific evidence becomes outdated, updating the guidelines will depend on well-planned reviews of the available knowledge. As noted at the beginning of this article, it should be recalled that it is just as difficult to find a well-designed protocol as it is to find a well-designed clinical guideline.