A recent editorial from the Colombia Medica journal calls for a national agreement with the San Francisco Declaration on Research Assessment.1 However, some good points presented in this declaration can end up in a bad recommendation, such as the advice not to trust and not to use the impact factor. Although the number of journals and researchers that have adhered to this initiative since 2012 seems impressive (547 organizations and 12,055 researchers), these numbers in fact reflect that the vast majority of journals (≈27,000) and researchers do not agree. The impact factor is still widely used, perhaps because in our daily work as researchers we can compare articles from high and low impact factor journals and easily see that quality correlates well with this scientometric measure. Although there are undeniable occasional drawbacks, the impact factor is still a reliable and reasonable measure of journal quality. Moreover, Thomson's ISI no longer holds a monopoly on calculating the impact factor; Scimago is now doing good work that has earned it the trust of the scientific community.
Our last editorial also showed the limitations of using bibliometric indicators to evaluate the work of researchers;2 therefore, the combined use of multiple bibliometric sources should be the rule when constructing a national indicator of research impact.
Research assessment requires multiple indicators, but we have to be clear: a good indicator of research quality is where a paper is published. The Scienticol index from Colciencias is a very good system for measuring and following the impact of research on Colombian society and for gathering indicators on how research groups work, but evaluating the impact of Colombian research groups, and how they are being recognized, must rest primarily on publimetric indicators. It is really difficult to understand why research groups with a strong publication record in high impact journals receive a low classification because they have not shown the dissemination of their work in a newspaper or do not organize conferences. The core of research assessment should be the quality of the research itself.
The effort to guarantee that paper evaluation is ethical and scientifically rigorous should not rest on journal editors, but on the research community itself. Peer review is the key to improving quality, and there is a need to engage researchers in ethically complying with this duty. The pressure to publish is increasing everywhere and has resulted in surprising news, such as the discovery by BioMed Central of inappropriate attempts to manipulate the peer review process of several journals, which led to the retraction of 43 papers (http://blogs.biomedcentral.com/bmcblog/2015/03/26/manipulation-peer-review).
Science is essentially written, and good science mainly produces well-written papers.