Atención Primaria
Journal Information
Vol. 34. Issue 2.
Pages 73-74 (June 2004)
Commentary: The Limits of Objective Structured Clinical Examination
Comentario: Los límites de la evaluación clínica objetiva y estructurada (ECOE)
C. Blaya
a Family Physician, ICS ABS Santa Eugènia de Berga, Berga, Spain; Unitat Avaluació Competència Clínica, Institut d'Estudis de la Salut, Barcelona, Spain.

As predicted in the 1990s,1 the competence of health care practitioners and the ways in which competence can be evaluated have become current topics of debate within medical circles. The number of institutional, legislative, and research initiatives that have appeared shows clearly that the debate on professionalism taking place throughout the world has not been ignored in our milieu. Consequently, projects in Spain that set out to evaluate physicians' clinical competence are becoming increasingly numerous and visible.

The excellent article this editorial is based on is a perfect example of this phenomenon, in two ways. On the one hand, it reflects the methodological rigor that should be required of any exercise intended to evaluate professional competence. On the other, it reveals the limits, paradoxes, and areas of progress that typify evaluation, issues that I will discuss briefly in the following sections.

Establishing Strategic Foundations

Any evaluation exercise should establish its strategic foundations clearly. This means that the project should specify who is to be evaluated, what level of performance will be required, and especially, what the purpose of the evaluation is. These characteristics will determine the structure and content of the exercise.

Evaluation of the complex construct known as professional competence requires different formats depending on the ideology that underlies each project.2 Different formats are appropriate for different situations, and evaluation with an objective structured clinical examination (OSCE) is not suitable for all purposes.

Formative Versus Summative Evaluation

These characteristics of the OSCE are particularly evident when the choice is between evaluation aimed at certifying competence (summative) and evaluation aimed at guiding training (formative).

When summative evaluation is required, the reliability of the test, its predictive validity and construct validity are absolutely crucial. When a formative approach is needed, verification efforts should center on the format's educational impact and content validity.

An OSCE comprising only 10 stations may be a good instrument for summative certification, but regardless of its reliability, it is of limited use in providing high-quality formative feedback, if quality is understood to mean that the feedback is true, complete and useful to the subject.
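The relationship between the number of stations and reliability can be made concrete with the Spearman-Brown prophecy formula, which estimates how a test's reliability changes as it is lengthened. The sketch below is purely illustrative; the starting alpha of 0.55 for a 10-station exercise is a hypothetical value, not a figure from the article.

```python
# Illustrative sketch: Spearman-Brown prophecy formula for test length.
# The starting reliability (0.55 for 10 stations) is hypothetical.

def spearman_brown(reliability, length_factor):
    """Projected reliability when a test is lengthened by `length_factor`."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

alpha_10 = 0.55                            # hypothetical alpha, 10 stations
alpha_20 = spearman_brown(alpha_10, 2.0)   # doubling to 20 stations
print(round(alpha_20, 2))                  # → 0.71
```

The formula shows why adding stations raises reliability only gradually: doubling the hypothetical exercise above lifts alpha from 0.55 to about 0.71, still short of the 0.8 often expected for high-stakes certification.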

Interpreting Psychometric Parameters Appropriately

One of the risks of professional competence evaluation is the trivialization of statistical considerations. Reliability (understood as internal consistency and measured with Cronbach's alpha) is often at the heart of disputes over the usefulness of a given evaluation as a measuring instrument. Although reliability is a crucial consideration in any exercise designed for certification, a few brief comments are in order.


1. Sometimes, especially in exercises with a small number of participants, the alpha value is artificially high. When this happens, an apparently reliable test can lead to unfair decisions.

2. Validity is as important as reliability. Drawing conclusions, especially for summative evaluations, before the prototype test has been validated is surely ill-advised and unfair to the professionals who are to be evaluated.

3. When the aim is to provide feedback to persons who are being evaluated regarding their competence profile, it must first be determined to what extent each of the components of competence behaves reliably. This is generally not accomplished with a small-scale OSCE.
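To see how alpha is computed from OSCE results, consider the sketch below: a minimal, dependency-free Cronbach's alpha over an examinee-by-station score matrix. The score data are invented solely to illustrate point 1 above: with only four examinees, alpha can come out high even though so few observations cannot support a confident reliability claim.

```python
# Minimal sketch: Cronbach's alpha from an examinee-by-station score matrix.
# Rows = examinees, columns = OSCE stations. Data are invented for illustration.

def cronbach_alpha(scores):
    """Internal-consistency reliability for a list of score rows."""
    k = len(scores[0])   # number of stations (items)

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

    item_vars = [variance([row[j] for row in scores]) for j in range(k)]
    total_var = variance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Only four examinees, yet alpha looks excellent:
scores = [
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [1, 2, 1, 2],
]
print(round(cronbach_alpha(scores), 2))  # → 0.95
```

The deceptively high value of 0.95 from four examinees is exactly the trap described in point 1: the statistic is computable, but with so small a sample it says little about how the test would behave at scale.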

What OSCEs Are Good For

The objective structured clinical examination is one test format among the many that can be combined into an evaluation program. Although the OSCE is the paradigm for evaluating practical clinical skills in a simulated environment, it is not a structure that is suitable for all purposes.

The OSCE is an excellent way to certify professional competence, and especially to detect individuals whose skills are clearly inadequate. It is also capable of providing some measure of feedback on the skills of the group that participates in the exercise.

But in general, its capacity to provide formative feedback to individuals is limited to the global results ("What's my global score in comparison to others in the group?"). Thus an OSCE is inadequate for characterizing an individual's profile as an actual practitioner. As a mechanism of formative feedback, reflection on "how I felt while I was taking the OSCE" is a much more powerful instrument than "what I scored on each component of competence."

Competence of Expert Professionals

Competence is a dynamic phenomenon3 that is changeable and can be expressed in different ways as the professional acquires experience. Two experienced professionals may both be competent and make equally correct decisions when faced with particular situations, but may use completely different approaches.

Moreover, professional competence goes beyond clinical dexterity. Elements of the affective and moral world, and aspects related with professional attitude, are even more important components of competence in expert professionals. This is what makes evaluating experienced professionals a much more complex process than evaluating junior colleagues. Examining more experienced practitioners thus requires a larger and more refined combination of evaluation tools.

Evaluating professionals makes sense only if the results of the exercise are true, fair, and useful in terms of decision making, self-reflection, and motivation to improve. Nothing could be more contrary to this end than the inappropriate use of evaluation tools.

In view of the growing presence of such tools, we should make efforts to put them to good use--for the good of the profession.

Key Points

* A current topic in debates on professionalism is the spread of models for evaluating professional competence.

* Objective structured clinical examination has been shown to be an excellent method for certifying competence, although its capacity to generate formative feedback is limited.

* Poor evaluations are worse than no evaluations at all.

Bibliography
[1]
Evaluación de la competencia profesional: ¿están cambiando los tiempos? Aten Primaria 1995;16:2-4.
[2]
The certification and recertification of doctors: issues in the assessment of clinical competence. Cambridge: Cambridge University Press, 1994.
[3]
Defining and assessing professional competence. JAMA 2002;287:226-35.