Radiología (English Edition)
Vol. 63. Issue 6.
Pages 469-475 (November - December 2021)
Original articles
Concordance of the ACR TI-RADS
Concordancia del TIRADS-ACR
C.M. González Vásquez (a, corresponding author), J.A. Muñoz Durán (b), S. Isaza Zapata (b), J.F. González Londoño (b), V. García Gómez (a)
a Radiologist, Hospital Pablo Tobón Uribe, Antioquia, Colombia
b Radiology Resident, Universidad CES, Antioquia, Colombia
Abstract
Background

Ultrasonography (US) is the method of choice for evaluating thyroid nodules. In 2017, the American College of Radiology (ACR) created a classification system based on US characteristics. For the system to be adopted, it must be reproducible.

Objectives

To determine the intraobserver and interobserver variability of the ACR TI-RADS.

Methods

Cross-sectional study; three radiologists with different levels of experience used the ACR TI-RADS to classify 100 nodules on two occasions one month apart, and we calculated the intraobserver and interobserver variability.

Results

Regarding intraobserver variability, the first and second radiologists each had nearly perfect concordance for composition, echogenicity, shape and margins, and substantial concordance for echogenic foci. The third radiologist had nearly perfect concordance for composition, echogenicity and shape, and substantial concordance for margins and echogenic foci. Interobserver concordance was calculated for both readings; it was substantial throughout, except for shape in the first reading and for echogenicity and margins in the second reading, which showed moderate concordance.

Conclusions

The ACR TI-RADS classification system is reproducible.

Keywords:
Thyroid
Thyroid nodule
ACR TI-RADS
Ultrasonography
Risk of malignancy
Introduction

The prevalence of thyroid nodules depends on the diagnostic method, ranging from 20% to 60% with ultrasound and post-mortem studies.1,2 Although less than 10% of these nodules are malignant, the incidence of thyroid cancer has increased as more nodules are being diagnosed and more biopsies are being performed. We currently have a problem of over-diagnosis3 and over-treatment.4–6

Ultrasound is the method of choice for assessing thyroid nodules,7,8 given its ease of use and wide availability, the fact that it does not use ionising radiation, its high soft-tissue resolution, its real-time imaging and its ability to assess the vasculature. It also serves as a guide for biopsy.

Several ultrasound classifications have been designed to establish the risk of malignancy of thyroid nodules (American Thyroid Association, Chilean Thyroid Imaging Reporting and Data System [TIRADS], Korean TIRADS, Fernández Sánchez TIRADS, French TIRADS and European Thyroid Association TIRADS [EU-TIRADS]).9–17 These systems have been measured for their predictive value with the pathology results (Bethesda system).17–19

The American College of Radiology (ACR) created its own classification system (ACR TI-RADS)20,21 (Table 1) based on ultrasound characteristics; it classifies thyroid nodules into five categories. Use of this system has led to a reduction in the number of biopsies,22 concordance with fine-needle aspiration (FNA) in excess of 83%,18,19,23 and proven safety in paediatrics24; for these reasons, it is the system we use.

Table 1.

ACR TI-RADS.

Composition (Choose 1)  Echogenicity (Choose 1)  Shape (Choose 1)  Margin (Choose 1)  Echogenic foci (Choose all that apply) 
Cystic or almost completely cystic: 0 points  Anechoic: 0 points  Wider than tall: 0 points  Smooth: 0 points  None or large comet-tail artifacts: 0 points 
Spongiform: 0 points  Hyperechoic or isoechoic: 1 point  Taller than wide: 3 points  Ill-defined: 0 points  Macrocalcifications: 1 point 
Mixed cystic and solid: 1 point  Hypoechoic: 2 points    Lobulated or irregular: 2 points  Peripheral (rim) calcifications: 2 points 
Solid or almost completely solid: 2 points  Very hypoechoic: 3 points    Extra-thyroidal extension: 3 points  Punctate echogenic foci: 3 points 
Add points from all categories:
0 points  2 points  3 points  4 to 6 points  7 points or more 
TR1  TR2  TR3  TR4  TR5 
Benign  Not suspicious  Mildly suspicious  Moderately suspicious  Highly suspicious 
No FNA  No FNA  FNA if ≥2.5 cm  FNA if ≥1.5 cm  FNA if ≥1 cm 
    Follow if ≥1.5 cm  Follow if ≥1 cm  Follow if ≥0.5 cm 

FNA: fine-needle aspiration.
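The point system in Table 1 can be sketched in code. The following is a minimal illustration; the dictionaries, feature labels and function names are ours, not from the paper or any ACR software:

```python
# Sketch of the ACR TI-RADS point system in Table 1.
# Feature labels and function names are illustrative, not from the paper.

COMPOSITION = {"cystic": 0, "spongiform": 0, "mixed": 1, "solid": 2}
ECHOGENICITY = {"anechoic": 0, "hyper_or_isoechoic": 1,
                "hypoechoic": 2, "very_hypoechoic": 3}
SHAPE = {"wider_than_tall": 0, "taller_than_wide": 3}
MARGIN = {"smooth": 0, "ill_defined": 0,
          "lobulated_or_irregular": 2, "extrathyroidal_extension": 3}
# "Choose all that apply" -- points from echogenic foci are summed.
ECHOGENIC_FOCI = {"none_or_comet_tail": 0, "macrocalcifications": 1,
                  "rim_calcifications": 2, "punctate": 3}

def tr_category(total):
    """Map a point total to a TR level per Table 1."""
    if total >= 7:
        return "TR5"
    if total >= 4:
        return "TR4"
    if total == 3:
        return "TR3"
    if total == 2:
        return "TR2"
    return "TR1"  # 0 points; Table 1 lists no 1-point row

def score_nodule(composition, echogenicity, shape, margin, foci):
    """Return (total points, TR category) for one nodule."""
    total = (COMPOSITION[composition] + ECHOGENICITY[echogenicity]
             + SHAPE[shape] + MARGIN[margin]
             + sum(ECHOGENIC_FOCI[f] for f in foci))
    return total, tr_category(total)

# A solid, hypoechoic, taller-than-wide nodule with irregular margins
# and punctate echogenic foci scores 2 + 2 + 3 + 2 + 3 = 12 -> TR5.
print(score_nodule("solid", "hypoechoic", "taller_than_wide",
                   "lobulated_or_irregular", ["punctate"]))
```

The FNA and follow-up size thresholds in the lower half of Table 1 (FNA if ≥2.5 cm for TR3, ≥1.5 cm for TR4, ≥1 cm for TR5) would then be applied to the resulting category together with the maximum nodule diameter.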

However, to be adopted, the system must be reproducible. The aim of our study was to determine the intraobserver and interobserver variability of the ACR TI-RADS.

Materials and methods

We designed a cross-sectional study. The study was approved by the hospital's independent ethics committee. Informed consent was not required.

Patients

Two of the investigators, third- and fourth-year radiology residents who had previously studied the ACR TI-RADS system and who were not going to act as assessors, retrospectively selected 100 thyroid nodules from the hospital's picture archiving system. Other similar studies had comparable sample sizes.25–29 With a sample size of 100, there was a 90% confidence level to reject the two-tailed null hypothesis that the kappa value was equal to 0.5 (that is, that agreement between the assessors was exclusively by chance), with a kappa value to be detected of 0.8 and assuming a rate of the finding under study in the total sample of at least 30%. The indications for these ultrasound scans ranged from screening to pain, suspected nodule on physical examination and incidental nodule. Nodules were selected consecutively and by consensus until 20 were obtained from each of the five categories proposed by ACR TI-RADS (Fig. 1), using the "augmentation" strategy described by Zhou and cited by Obuchowski to guarantee a sufficient number of cases in each category.30 The ultrasound scans were performed from 1 July 2016 to 30 June 2018. Nodules were chosen from patients 18–70 years of age, with no discrimination by gender, ethnicity, race or country of origin. The plan was not to include studies with incomplete or low-quality images; however, none were excluded.
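The "augmentation" selection strategy described above — accepting consecutive cases until each category reaches its quota — can be sketched as follows. The simulated archive, its field names and the category frequencies are hypothetical:

```python
import random

# Sketch of the "augmentation" selection strategy: accept consecutive
# cases until every ACR TI-RADS category holds its quota of nodules.
# The simulated archive and its field names are hypothetical.

CATEGORIES = ["TR1", "TR2", "TR3", "TR4", "TR5"]
TARGET = 20  # 20 nodules per category -> 100 in total

def augment_sample(case_stream, target=TARGET):
    """Fill per-category buckets from a consecutive stream of cases."""
    buckets = {c: [] for c in CATEGORIES}
    for case in case_stream:
        bucket = buckets[case["tirads"]]
        if len(bucket) < target:
            bucket.append(case)
        if all(len(b) == target for b in buckets.values()):
            break
    return buckets

random.seed(0)
# Skew category frequencies toward benign, as in a real archive.
archive = ({"id": i,
            "tirads": random.choices(CATEGORIES,
                                     weights=[30, 25, 20, 15, 10])[0]}
           for i in range(10_000))
sample = augment_sample(archive)
assert sum(len(b) for b in sample.values()) == 100
```

The point of the strategy is visible here: rare categories (TR5 in this simulation) are topped up by continuing through the stream after the common categories are already full.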

Figure 1.

Examples of 5 nodules in which the three assessors agreed on their ACR TI-RADS category.

Study technique

The ultrasound scans were performed by any of the 14 radiologists at our hospital, using one of two available scanners (90 nodules were examined with the Siemens Acuson S3000 system and 10 with the Siemens SC2000 system, with 12 and 18 MHz transducers). The images of the nodules were anonymised and presented in JPG format with axial and longitudinal sections (two images per nodule).

Image analysis

The images were assessed by three radiologists with different amounts of experience. They had no knowledge of the clinical data or the radiology report. Assessor 1 was a third-year resident, assessor 2 was a radiologist with 15 years of experience and assessor 3 was a radiologist with 30 years of experience. The assessors classified the nodules twice, one month apart, using the ACR TI-RADS system.

Statistical analysis

The data obtained with the collection instrument were tabulated, differentiating each of the categories that make up the ACR TI-RADS. Intraobserver and interobserver variability was evaluated using the kappa statistic. Prevalence bias was found in the distribution of the intraobserver results for the category "nodule shape" (a marked imbalance in the frequencies of the assigned categories, with one response predominating). After evaluating the prevalence and asymmetry indices of the agreement, we therefore calculated the prevalence-adjusted bias-adjusted kappa (PABAK), which corresponds to the value the kappa would take in the absence of bias and with equal prevalence of the categories. For the interobserver variability of the ACR TI-RADS category, which has five response possibilities, quadratic weighting was selected for two reasons: first, the more response possibilities there are, the greater the potential for discrepancies, which underestimates the true concordance; and second, we wanted less penalisation of discrepancies between neighbouring categories (e.g. where one assessor assigned ACR TI-RADS 1 and the other ACR TI-RADS 2) and more penalisation of discrepancies between non-neighbouring categories (e.g. ACR TI-RADS 1 and ACR TI-RADS 4). For the other concordances, which showed no prevalence bias and had fewer possible responses, the raw kappa was estimated.
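The three agreement measures used in this study (raw Cohen's kappa, quadratic-weighted kappa and PABAK) can be illustrated with a small, self-contained sketch; the rating vectors are invented toy data, not the study's:

```python
# Toy illustration of the agreement statistics used in this study:
# raw Cohen's kappa, quadratic-weighted kappa and PABAK.
# The rating vectors below are invented data.

def cohen_kappa(a, b, weights=None):
    """Cohen's kappa for two equal-length rating lists.
    weights=None gives the raw kappa; weights="quadratic" applies
    quadratic weights, penalising distant categories more heavily."""
    cats = sorted(set(a) | set(b))
    idx = {c: i for i, c in enumerate(cats)}
    k, n = len(cats), len(a)
    obs = [[0.0] * k for _ in range(k)]  # joint proportions
    for x, y in zip(a, b):
        obs[idx[x]][idx[y]] += 1.0 / n
    row = [sum(r) for r in obs]                                  # rater a
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]   # rater b

    def w(i, j):
        if weights == "quadratic":
            return 1.0 - (i - j) ** 2 / (k - 1) ** 2
        return 1.0 if i == j else 0.0

    po = sum(w(i, j) * obs[i][j] for i in range(k) for j in range(k))
    pe = sum(w(i, j) * row[i] * col[j] for i in range(k) for j in range(k))
    return (po - pe) / (1.0 - pe)

def pabak(a, b, k):
    """Prevalence-adjusted bias-adjusted kappa for k categories:
    the value kappa would take under equal prevalence and no bias."""
    po = sum(x == y for x, y in zip(a, b)) / len(a)
    return (k * po - 1.0) / (k - 1.0)

r1 = [1, 1, 2, 2, 3]  # invented ratings, assessor 1
r2 = [1, 1, 2, 3, 3]  # invented ratings, assessor 2
print(round(cohen_kappa(r1, r2), 3))                        # raw kappa
print(round(cohen_kappa(r1, r2, weights="quadratic"), 3))   # weighted
print(round(pabak([0, 0, 0, 1, 1], [0, 0, 0, 1, 0], k=2), 3))
```

Note how the quadratic-weighted kappa exceeds the raw kappa on the same data: the single discrepancy (2 vs 3) involves neighbouring categories, which the weighting penalises only lightly, mirroring the rationale given above for the five-level ACR TI-RADS category.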

The kappa index values were classified according to the Landis and Koch table (1977),31 as follows:

  • 0.00: no agreement.

  • 0.01–0.20: slight.

  • 0.21–0.40: fair.

  • 0.41–0.60: moderate.

  • 0.61–0.80: substantial.

  • 0.81–1.00: almost perfect agreement.
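The scale above maps directly to a small lookup; the function name and the handling of a kappa of exactly 0.00 are our choices:

```python
def landis_koch(kappa):
    """Map a kappa value to its Landis & Koch (1977) label.
    Function name and the treatment of kappa <= 0 are ours."""
    if kappa <= 0.0:
        return "no agreement"
    for upper, label in [(0.20, "slight"), (0.40, "fair"),
                         (0.60, "moderate"), (0.80, "substantial")]:
        if kappa <= upper:
            return label
    return "almost perfect"

print(landis_koch(0.77))  # e.g. the interobserver ACR TI-RADS kappa
```

For instance, the study's overall ACR TI-RADS interobserver kappa of 0.77 falls in the 0.61–0.80 band, i.e. "substantial" agreement.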

Intraobserver variability compared the assessments of the same observer, weighing the results of their first assessment against their second. Interobserver variability compared assessments between the different observers, both in the first and in the second assessment.

The kappa values were calculated with their confidence intervals and P values with the Epidat 4.2 and WINPEPI 11.65 software programs. A P<.05 was considered statistically significant.

Results

Intraobserver variability is shown in Table 2 and Fig. 2.

Table 2.

Intraobserver variability.

  Assessor 1  Assessor 2  Assessor 3 
Composition  0.90 (0.82−0.97) P<.001  0.85 (0.75−0.95) P<.001  0.97 (0.94−0.99) P<.001 
Echogenicity  0.89 (0.83−0.95) P<.001  0.83 (0.72−0.93) P<.001  0.90 (0.85−0.94) P<.001 
Shape  0.94 (0.87−1.00) P<.001  0.84 (0.73−0.94) P<.001  0.90 (0.81−0.99) P<.001 
Margins  0.90 (0.83−0.97) P<.001  0.84 (0.67−1.00) P<.001  0.66 (0.45−0.86) P<.001 
Echogenic foci  0.79 (0.65−0.94) P<.001  0.74 (0.56−0.91) P<.001  0.72 (0.54−0.89) P<.001 

The kappa values are adjusted for prevalence bias.

The numbers in parentheses are the confidence intervals.

Assessor 1: third-year radiology resident. Assessor 2: radiologist with 15 years of experience. Assessor 3: radiologist with 30 years of experience.

Figure 2.

Intraobserver variability.

The numerical values on the left from 0 to 1 correspond to the kappa index.


Intraobserver variability was as follows:

  • The first assessor, a radiology resident, had almost perfect agreement for composition, echogenicity, shape and margins, and substantial agreement for the category of echogenic foci.

  • The second assessor, a radiologist with 15 years of experience, had almost perfect agreement for composition, echogenicity, shape and margins, and substantial agreement for the category of echogenic foci.

  • The third assessor, a radiologist with 30 years of experience, had almost perfect agreement for composition, echogenicity and shape, and substantial agreement for echogenic foci and margins; the greatest variability was for margins.

The interobserver variability is shown in Table 3 and Fig. 3. Agreement was obtained in both the first and second assessments:

  • Composition: substantial agreement in the first and second assessments.

  • Echogenicity: substantial agreement in the first assessment and moderate in the second.

  • Shape: moderate agreement in the first assessment and substantial in the second.

  • Margins: substantial agreement in the first assessment and moderate in the second.

  • Echogenic foci: substantial agreement in the first and second assessments.

  • ACR TI-RADS category: substantial agreement in both assessments.

Table 3.

Interobserver variability.

  Assessment 1  Assessment 2 
Composition  0.77 (0.72−0.82) P<.001  0.66 (0.55−0.76) P<.001 
Echogenicity  0.75 (0.70−0.80) P<.001  0.54 (0.38−0.69) P<.001 
Shape  0.52 (0.35−0.70) P<.001  0.75 (0.60−0.90) P<.001 
Margins  0.75 (0.70−0.80) P<.001  0.57 (0.45−0.70) P<.001 
Echogenic foci  0.73 (0.60−0.86) P<.001  0.71 (0.57−0.85) P<.001 
ACR TI-RADS categorya  0.77 (0.72−0.82) P<.001  0.76 (0.70−0.83) P<.001 

The numbers in parentheses are the confidence intervals.

Interval between the two assessments: 30 days.

a

The value corresponds to kappa adjusted by quadratic weight.

Figure 3.

Interobserver variability.

The numerical values on the left from 0 to 1 correspond to the kappa index.

Discussion
Intraobserver variability

In our study, we found that intraobserver agreement ranged from substantial to almost perfect and was higher than interobserver agreement. Our results were consistent with the findings of other similar studies in which intraobserver and interobserver agreement was measured.29,32 This indicates that each assessor applied well-defined, consistent criteria.

Within our intraobserver results, the category of echogenic foci had the lowest agreement index for all three assessors. In the study by Persichetti et al.,29 the lowest level of intraobserver agreement was for margins. This may have been influenced by the way in which the nodules were presented (still images, whose technical factors could not be adjusted, in our study versus picture archiving and communication system [PACS] images in the cited study). Routine daily practice conditions could produce different results.

The intraobserver agreement figures were higher for the resident than for the assessor with 15 years of experience, without affecting the agreement categories. In similar studies in which agreement was evaluated according to experience, performance was poorer among assessors with less experience.27,33 It has also been shown that performance improves with time and feedback.27,32,34,35 One possible explanation is therefore the rigour with which the resident applied the definitions and the resident's recent practice with this classification.

Interobserver variability

The interobserver variability ranged from substantial to moderate, being moderate for shape in the first assessment and for echogenicity and margins in the second. None of the categories showed fair, slight or no agreement, in contrast with a study by Hoang et al., in which shape and echogenic foci had fair interobserver agreement.25 Another study36 reported fair interobserver agreement for composition, echogenicity and echogenic foci, and poor agreement for shape and margins. A third study37 showed moderate interobserver agreement for composition, echogenic foci and shape, and fair agreement for echogenicity and margins. In studies of interobserver variability for other thyroid nodule classifications, the most important factor was whether the study was multicentre or conducted at a single institution.29 Interobserver variability is better when, as in our case, the study is conducted at a single institution, where definitions and criteria are unified; this is reflected in our results.

Limitations

The retrospective selection of the nodules and their presentation to the assessors in JPG format prevented variations in technical factors when assessing the nodules. This might have affected how they were perceived.

Despite the one-month interval between the two assessments, there could still have been recall bias. In similar studies, intervals ranging from four weeks to four months have been used.25,29,32

In summary, our study showed that most categories had almost perfect intraobserver agreement and substantial interobserver agreement. Assessment of thyroid nodules with the ACR TI-RADS classification system is reproducible.

Authorship

  1. Responsible for study integrity: CMGV and JAMD.
  2. Study conception: CMGV.
  3. Study design: VGG.
  4. Data acquisition: CMGV, JAMD, SIZ, JFGL and VGG.
  5. Data analysis and interpretation: CMGV, JAMD, SIZ, JFGL and VGG.
  6. Statistical processing: N/A.
  7. Literature search: N/A.
  8. Drafting of the manuscript: CMGV, JAMD, SIZ and JFGL.
  9. Critical review of the manuscript with intellectually significant contributions: CMGV, JAMD, SIZ and JFGL.
  10. Approval of the final version: CMGV, JAMD, JFGL and VGG.

Conflicts of interest

The authors declare that they have no conflicts of interest.

References
[1]
J.J. Cronan.
Thyroid nodules: is it time to turn off the US machines?.
Radiology., 247 (2008), pp. 602-604
[2]
A. Sanabria, L.P. Kowalski, J.P. Shah, I.J. Nixon, M.D. Williams, A. Rinaldo, et al.
Growing incidence of thyroid carcinoma in recent years: factors underlying overdiagnosis.
Head Neck., 40 (2018), pp. 855-866
[3]
M.J. Díaz Candamio, S. Jha, J. Martel Villagrán.
Sobrediagnóstico por imagen.
Radiología., 60 (2018), pp. 362-367
[4]
S. Vaccarella, S. Franceschi, F. Bray, C.P. Wild, M. Plummer, L. Dal Maso.
Worldwide thyroid-cancer epidemic? the increasing impact of overdiagnosis.
N Engl J Med., 375 (2016), pp. 614-617
[5]
J.K. Hoang, X.V. Nguyen, L. Davies.
Overdiagnosis of thyroid cancer.
Acad Radiol., 22 (2015), pp. 1024-1029
[6]
L. Davies, H.G. Welch.
Current thyroid cancer trends in the United States.
JAMA Otolaryngol Neck Surg., 140 (2014), pp. 317
[7]
C. Franco Uliaque, F.J. Pardo Berdún, R. Laborda Herrero, C.P. Lórenz.
Utilidad de la ecografía en la evaluación de los nódulos tiroideos.
Radiología., 58 (2016), pp. 380-388
[8]
W.-J. Moon, S.L. Jung, J.H. Lee, D.G. Na, J.-H. Baek, Y.H. Lee, et al.
Benign and malignant thyroid nodules: US differentiation—multicenter retrospective study.
Radiology., 247 (2008), pp. 762-770
[9]
B.R. Haugen, E.K. Alexander, K.C. Bible, G.M. Doherty, S.J. Mandel, Y.E. Nikiforov, et al.
2015 American Thyroid Association Management Guidelines for Adult Patients with Thyroid Nodules and Differentiated Thyroid Cancer: The American Thyroid Association Guidelines Task Force on Thyroid Nodules and Differentiated Thyroid Cancer.
Thyroid., 26 (2016), pp. 1-133
[10]
M.W. Yeh, A.J. Bauer, V.A. Bernet, R.L. Ferris, L.A. Loevner, S.J. Mandel, et al.
American Thyroid association statement on preoperative imaging for thyroid cancer surgery.
Thyroid., 25 (2015), pp. 3-14
[11]
J. Fernández Sánchez.
Clasificación TI-RADS de los nódulos tiroideos en base a una escala de puntuación modificada con respecto a los criterios ecográficos de malignidad.
Rev Argentina Radiol., 78 (2014), pp. 138-148
[12]
E. Horvath, S. Majlis, R. Rossi, C. Franco, J.P. Niedmann, A. Castro, et al.
An ultrasonogram reporting system for thyroid nodules stratifying cancer risk for clinical management.
J Clin Endocrinol Metab., 94 (2009), pp. 1748-1751
[13]
G. Russ.
Risk stratification of thyroid nodules on ultrasonography with the French TI-RADS: description and reflections.
Ultrasonography., 35 (2016), pp. 25-38
[14]
G. Russ, C. Bigorgne, B. Royer, A. Rouxel, M. Bienvenu-Perrard.
Le système TIRADS en échographie thyroïdienne.
J Radiol., 92 (2011), pp. 701-713
[15]
L. Leenhardt, M.F. Erdogan, L. Hegedus, S.J. Mandel, R. Paschke, T. Rago, et al.
2013 European Thyroid Association Guidelines for Cervical Ultrasound Scan and Ultrasound-Guided Techniques in the Postoperative Management of Patients with Thyroid Cancer.
Eur Thyroid J., 2 (2013), pp. 147-159
[16]
H. Gharib, E. Papini, J.R. Garber, D.S. Duick, R. Mack Harrell, L. Hegedüs, et al.
American Association of Clinical Endocrinologists, American College of Endocrinology, and Associazione Medici Endocrinologi medical guidelines for clinical practice for the diagnosis and management of thyroid nodules—2016 update.
Endocr Pract., 22 (2016), pp. 1-60
[17]
L. Gao, X. Xi, Y. Jiang, Y. Yang, S. Zhu, X. Lai, et al.
Comparison among TIRADS (ACR TI-RADS and KWAK- TI-RADS) and 2015 ATA guidelines in the diagnostic efficiency of thyroid nodules.
Endocrine., 64 (2019), pp. 90-96
[18]
E.J. Ha, D.G. Na, J.H. Baek, J.Y. Sung, J. Kim, S.Y. Kang.
US fine-needle aspiration biopsy for thyroid malignancy: diagnostic performance of Seven Society Guidelines applied to 2000 thyroid nodules.
Radiology., 287 (2018), pp. 893-900
[19]
A. Lauria Pantano, E. Maddaloni, S.I. Briganti, G. Beretta Anguissola, E. Perrella, C. Taffon, et al.
Differences between ATA, AACE/ACE/AME and ACR TI-RADS ultrasound classifications performance in identifying cytological high-risk thyroid nodules.
Eur J Endocrinol., 178 (2018), pp. 595-603
[20]
E.G. Grant, F.N. Tessler, J.K. Hoang, J.E. Langer, M.D. Beland, L.L. Berland, et al.
Thyroid ultrasound reporting lexicon: white paper of the ACR Thyroid Imaging, Reporting and Data System (TIRADS) Committee.
J Am Coll Radiol., 12 (2015), pp. 1272-1279
[21]
F.N. Tessler, W.D. Middleton, E.G. Grant, J.K. Hoang, L.L. Berland, S.A. Teefey, et al.
ACR Thyroid Imaging, Reporting and Data System (TI-RADS): white paper of the ACR TI-RADS Committee.
J Am Coll Radiol., 14 (2017), pp. 587-595
[22]
J.K. Hoang, W.D. Middleton, A.E. Farjat, J.E. Langer, C.C. Reading, S.A. Teefey, et al.
Reduction in thyroid nodule biopsies and improved accuracy with American College of Radiology Thyroid imaging reporting and data system.
Radiology., 287 (2018), pp. 185-193
[23]
R.M. Singaporewalla, J. Hwee, T.U. Lang, V. Desai.
Clinico-pathological correlation of thyroid nodule ultrasound and cytology using the TIRADS and Bethesda Classifications.
World J Surg., 41 (2017), pp. 1807-1811
[24]
J.E. Lim-Dunham, I.E. Toslak, M.P. Reiter, B. Martin.
Assessment of the American College of Radiology Thyroid Imaging Reporting and Data System for Thyroid Nodule Malignancy Risk Stratification in a Pediatric Population.
Am J Roentgenol., 212 (2019), pp. 188-194
[25]
J.K. Hoang, W.D. Middleton, A.E. Farjat, S.A. Teefey, N. Abinanti, F.J. Boschini, et al.
Interobserver variability of sonographic features used in the American College of Radiology Thyroid Imaging Reporting and Data System.
Am J Roentgenol., 211 (2018), pp. 162-167
[26]
J.R. Wienke, W.K. Chong, J.R. Fielding, K.H. Zou, C.A. Mittelstaedt.
Sonographic features of benign thyroid nodules.
J Ultrasound Med., 22 (2003), pp. 1027-1031
[27]
S.H. Kim, C.S. Park, S.L. Jung, B.J. Kang, J.Y. Kim, J.J. Choi, et al.
Observer variability and the performance between faculties and residents: US criteria for benign and malignant thyroid nodules.
Korean J Radiol., 11 (2010), pp. 149
[28]
W. Phuttharak, A. Boonrod, V. Klungboonkrong, T. Witsawapaisan.
Interrater Reliability of Various Thyroid Imaging Reporting and Data System (TIRADS) classifications for differentiating benign from malignant thyroid nodules.
Asian Pacific J Cancer Prev., 20 (2019), pp. 1283-1288
[29]
A. Persichetti, E. Di Stasio, C. Coccaro, F. Graziano, A. Bianchini, V. Donna, et al.
Inter- and intraobserver agreement in the assessment of thyroid nodule ultrasound features and classification systems: a blinded multicenter study.
Thyroid., 30 (2020), pp. 237-242
[30]
N.A. Obuchowski, J.A. Bullen.
Receiver operating characteristic (ROC) curves: review of methods with applications in diagnostic medicine.
Phys Med Biol., 63 (2018), pp. 07TR01
[31]
J.R. Landis, G.G. Koch.
The measurement of observer agreement for categorical data.
Biometrics., 33 (1977), pp. 159-174
[32]
S.H. Choi, E.-K. Kim, J.Y. Kwak, M.J. Kim, E.J. Son.
Interobserver and intraobserver variations in ultrasound assessment of thyroid nodules.
Thyroid., 20 (2010), pp. 167-172
[33]
J. Koh, S.-Y. Kim, H.S. Lee, S.-Y. Kim, J.Y. Kwak, H.J. Moon, et al.
Diagnostic performances and interobserver agreement according to observer experience: a comparison study using three guidelines for management of thyroid nodules.
Acta radiol., 59 (2018), pp. 917-923
[34]
G. Grani, L. Lamartina, V. Cantisani, M. Maranghi, P. Lucia, C. Durante.
Interobserver agreement of various thyroid imaging reporting and data systems.
Endocr Connect., 7 (2018), pp. 1-7
[35]
P. Seifert, R. Görges, M. Zimny, M.C. Kreissl, S. Schenke.
Interobserver agreement and efficacy of consensus reading in Kwak-, EU-, and ACR-thyroid imaging recording and data systems and ATA guidelines for the ultrasound risk stratification of thyroid nodules.
Endocrine., 67 (2020), pp. 143-154
[36]
Z.T. Sahli, A.K. Sharma, J.K. Canner, F. Karipineni, O. Ali, S. Kawamoto, et al.
TIRADS interobserver variability among indeterminate thyroid nodules: a single‐institution study.
J Ultrasound Med., 38 (2019), pp. 1807-1813
[37]
M. Itani, R. Assaker, M. Moshiri, T.J. Dubinsky, M.K. Dighe.
Inter-observer Variability in the American College of Radiology Thyroid Imaging Reporting and Data System: in-depth analysis and areas for improvement.
Ultrasound Med Biol., 45 (2019), pp. 461-470

Please cite this article as: González Vásquez CM, Muñoz Durán JA, Isaza Zapata S, González Londoño JF, García Gómez V. Concordancia del TIRADS-ACR. Radiología. 2021;63:469–475.

Copyright © 2020. SERAM