Revista Española de Cirugía Ortopédica y Traumatología
Vol. 69, No. 1
Pages T38–T46 (January–February 2025)
Original Paper
Exploring the potential of artificial intelligence in traumatology: Conversational answers to specific questions
Explorando el potencial de la inteligencia artificial en traumatología: respuestas conversacionales a preguntas específicas
F. Canillas del Rey a,b (corresponding author: fercanillas@yahoo.es), M. Canillas Arias b
a Servicio de Cirugía Ortopédica y Traumatología, Hospital Universitario Cruz Roja, Madrid, Spain
b Facultad de Medicina, Universidad Alfonso X el Sabio, Madrid, Spain
Abstract
Background and objective

Generative artificial intelligence is a technology whose greatest connectivity with people is achieved through conversational bots ("chatbots"). These bots can hold a dialogue in natural language indistinguishable from that of a human and are a potential source of information for patients. The aim of this study was to examine the performance of these bots in answering specific questions on orthopaedic surgery and traumatology, using questions from the Spanish MIR exam between 2008 and 2023.

Material and methods

Three "chatbot" models (ChatGPT, Bard and Perplexity) were analysed by having them answer 114 MIR questions. Their accuracy was compared, the readability of their responses was evaluated, and their reliance on logical reasoning and on internal and external information was examined. The type of error was also assessed for the incorrect answers.

Results

ChatGPT obtained 72.81% correct answers, followed by Perplexity (67.54%) and Bard (60.53%). Bard provided the most readable and comprehensive responses. The responses demonstrated logical reasoning and use of the internal information contained in the question prompts. In 16 questions (14%), all three applications failed simultaneously. The errors identified included logical and information failures.

Conclusions

While conversational bots can be useful in resolving medical questions, caution is advised due to the possibility of errors. Currently, they should be considered as a developing tool, and human opinion should prevail over generative artificial intelligence.

Keywords:
Generative artificial intelligence
Chatbot
Multi-choice question answering
ChatGPT
Bard
Perplexity
Introduction

Artificial intelligence (AI) encompasses technological developments that emulate the cognitive abilities of humans. In orthopaedic surgery and traumatology, applications include image recognition and diagnosis, medical text records, rehabilitation and postoperative care, surgical training, and predictive algorithms.1 In recent years, AI has gone a step further by becoming generative AI. That is to say, it no longer simply analyses problems and solves them, but with the data it is provided with, it improves its learning and generates original content (text, images, videos, presentations, molecules, etc.).2

Recently, chatbots or conversational bots have become popular. These are generative AIs that have the ability to maintain a conversation, giving coherent and human-like responses. Their behaviour can even be modulated when responding, lending a certain amount of “personality” to the responses.

In November 2022, OpenAI (OpenAI, LLC, San Francisco, California, USA) launched its free conversational bot ("chatbot") ChatGPT (Generative Pre-trained Transformer), developed in Python. Its main limitation was that its knowledge of the Internet extended only to 2021, but the new (paid) version now has updated access to the network. In addition, it is multimodal, allowing text and image input to generate responses.

In response to this application, several technology companies launched other "chatbots" onto the market, such as Bing by Microsoft (Microsoft Corporation, Redmond, Washington, USA), Bard by Google (Google LLC, Mountain View, California, USA) and Perplexity, created by Denis Yarats, Aravind Srinivas, Johnny Ho and Andy Konwinski. The latter has the particularity of providing citations to support its information.

These ground-breaking technological developments are creating a range of options yet to be explored and have enormous implications in the fields of medicine and teaching.3 This is not only because they offer answers to questions that are raised, but because they can generate information with multiple uses. Their capabilities continuously grow and they are even beginning to recognise voices, images and videos, resulting in exponential potential.

Patients use this technology to answer questions about medicine and health,4 and professionals use it to solve cases5 or create scientific texts.6,7 Chatbots can even generate answers that contain more and better-quality information, and with greater empathy, than humans are capable of,8 although patients rarely accept their use as a substitute for professionals.9 Obviously, this has important legal and ethical connotations involving responsibility in decision-making and in the authorship of scientific output.10 This is all the more relevant given the presence of errors, known as "hallucinations",11 which are more common than would be expected of an "intelligence" generating sensitive data. We should remember that, in 2022, 40% of Internet users turned to the Internet in search of information on health issues.12

The development of conversational AI rests on three pillars: machine learning, big data and natural language processing.2 Machine learning enables computers to learn and improve automatically from experience without having been specifically programmed to do so. Data are analysed using algorithms to identify patterns and make decisions with varying degrees of human supervision. The architecture uses neural networks that mimic the functioning of the human brain. This enables deep learning by processing vast amounts of data, or "big data", including unstructured or unlabelled data such as images, audio and text, to perform tasks such as voice or image recognition. Version 3 of ChatGPT was trained with 175 billion parameters. This means that the AI does not search the network for information, but rather generates responses by following predictive models built from the information it has collected and processed into smaller units (known as "tokens"). In this sense, some authors advocate speaking of "computational statistical learning" instead of "artificial intelligence".13 This explains why AI fails, producing what is known in computer jargon as "hallucinations", largely because of how well or poorly it handles the data it is fed and because it generates results following stochastic patterns. If we add to this the effect known as GIGO (Garbage In, Garbage Out),7 we are looking at a system that requires a process of learning and supervision.
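As a purely illustrative sketch of the "tokens" mentioned above, the snippet below splits a short clinical sentence into the units a language model actually processes. It uses OpenAI's open-source tiktoken library; the specific encoding name and the example sentence are assumptions for illustration and are not part of this study.

```python
# Illustration only: how text is broken into "tokens" before a language model
# predicts the next one. The encoding name is an assumption for this sketch.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

text = "Displaced femoral neck fracture in an 82-year-old patient"
token_ids = encoding.encode(text)                    # text -> integer token ids
tokens = [encoding.decode([t]) for t in token_ids]   # ids -> readable fragments

print(token_ids)   # list of integers the model operates on
print(tokens)      # the word fragments those integers represent
```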

Natural language processing is another area of AI that deals with the interaction between machines and human language. Thanks to this application, text can be understood, interpreted and generated efficiently. All of this involves processing the instruction, or "prompt", "understanding" it and responding in a natural and supposedly correct way. It also adds the ability to remember previous conversations.

In the field of medicine,14 huge amounts of electronic medical records can be processed, enabling them to be analysed and organised to obtain efficient and accurate information. In the surgical field,1 AI can provide the simulation of complex surgical procedures in virtual environments. Moreover, patterns of success and areas of improvement in surgical training can be identified, improving the quality and safety of medical care.

The objective of this study was to assess and compare the capacity of conversational AIs to answer the orthopaedic surgery and traumatology questions used in the Spanish national examination for resident physician positions (MIR).

Material and methods

All questions from the Spanish medical training entrance exam (MIR exam) from 2008 to 2023 were reviewed. The inclusion criteria were: questions relating to traumatology and orthopaedic surgery. The exclusion criteria were: questions cancelled by the governing body or those containing images that were necessary for answering the question.

The questions were multiple choice, with four or five answer options and only one correct answer. Some questions included images; since not all the applications have image recognition, those that required viewing the image to answer were excluded. The questions were grouped by subject (traumatology, adult orthopaedics, paediatric orthopaedics and spine) and by year. In addition, they were classified into two types according to the knowledge needed to answer them15: type 1, requiring a single piece of knowledge, and type 2, requiring several steps to reach the answer.

The answers were analysed by assessing narrative coherence.16 The data were binary and covered three aspects: logical reasoning (whether the answer was selected based on the information presented), internal information (whether the answer included information provided in the question) and external information (whether the answer provided information beyond that given in the question). We also analysed readability using the Flesch–Kincaid index,17,18 adapted to Spanish by Fernández Huerta19 and corrected by Law,20 which reflects how easy a text is to understand and correlates it with educational level according to the formula:

L = 206.84 − 60 × P − 1.02 × F

where L is "readability", P is the average number of syllables per word and F is the average number of words per sentence.

The INFLESZ21 scale, which measures the ease of reading a text, was also used, as was the number of words in each response.
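As a minimal sketch of how such a readability score can be computed, the function below applies the Law-corrected Fernández Huerta formula given above to a sample answer. The vowel-group syllable counter and the example text are rough approximations introduced here for illustration; they are not the validated tools used in the study.

```python
# Sketch of the readability calculation described above, assuming the
# Law-corrected Fernández Huerta formula with P = syllables per word and
# F = words per sentence. The syllable counter is only an approximation.
import re

def count_syllables(word: str) -> int:
    # Rough approximation: each run of vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouáéíóúü]+", word.lower())))

def fernandez_huerta(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-zÁÉÍÓÚÜÑáéíóúüñ]+", text)
    p = sum(count_syllables(w) for w in words) / len(words)   # syllables per word
    f = len(words) / len(sentences)                           # words per sentence
    return 206.84 - 60 * p - 1.02 * f

answer = ("La fractura de cadera en el anciano se trata habitualmente "
          "mediante artroplastia. La cirugía precoz reduce complicaciones.")
print(f"Readability: {fernandez_huerta(answer):.1f}")
```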

When errors occurred, they were labelled as logical errors (the correct information was found but not transferred to the response), information errors (a key piece of information, provided in the question or available externally, was not identified) or statistical errors (arithmetic errors, e.g. incorrect estimation of the frequency of a disease).22

The AI engines evaluated were ChatGPT (version 3.5), Bard and Perplexity (Fig. 1). All responses were obtained within a period of 48 h. While this work was being written, Bard was renamed Gemini; since the results were obtained under the original name, it has been kept in the results and conclusions.

Figure 1.

Logos of the conversational "chatbots" ChatGPT, Bard and Perplexity.


The data collected were quantitative, qualitative and descriptive. Qualitative variables are presented with their absolute frequency and relative percentage. Dichotomous qualitative variables were compared with Cochran's Q test; quantitative variables were compared with ANOVA when normality was met and with the Kruskal–Wallis test when it was not. The accepted alpha risk for all hypothesis tests was .05. When a test was significant, a post-hoc test was performed for pairwise comparison of variables. The data were analysed using MedCalc version 22.016 (MedCalc Software Ltd, Ostend, Belgium; https://www.medcalc.org; 2023). The null hypothesis (H0) was that the success rate of the different programmes evaluated was the same.
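Purely as a hedged sketch, the code below reproduces the kind of comparisons described above using the open-source scipy and statsmodels packages rather than MedCalc, which was the tool actually used. The placeholder data, the choice of pairwise McNemar tests as a post-hoc for Cochran's Q, and all variable names are assumptions for illustration only.

```python
# Sketch of the statistical workflow described above, not the authors' actual
# MedCalc analysis. `correct` holds one row per question with 1/0 results per
# chatbot; `words` holds word counts per answer (placeholder random data).
import numpy as np
from scipy.stats import shapiro, f_oneway, kruskal
from statsmodels.stats.contingency_tables import cochrans_q, mcnemar
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
correct = rng.integers(0, 2, size=(114, 3))          # ChatGPT, Bard, Perplexity
words = {name: rng.normal(mu, 30, 114)               # words per answer
         for name, mu in [("ChatGPT", 98), ("Bard", 246), ("Perplexity", 85)]}

# Accuracy: Cochran's Q for three related dichotomous samples, followed by a
# pairwise McNemar test (one common post-hoc choice; an assumption here).
print("Cochran's Q p =", cochrans_q(correct).pvalue)
table = np.array([[np.sum((correct[:, 0] == 1) & (correct[:, 1] == 1)),
                   np.sum((correct[:, 0] == 1) & (correct[:, 1] == 0))],
                  [np.sum((correct[:, 0] == 0) & (correct[:, 1] == 1)),
                   np.sum((correct[:, 0] == 0) & (correct[:, 1] == 0))]])
print("McNemar ChatGPT vs Bard p =", mcnemar(table, exact=True).pvalue)

# Word counts: ANOVA if all groups look normal (Shapiro-Wilk), otherwise
# Kruskal-Wallis; Tukey HSD for the pairwise comparison reported in Table 4.
groups = list(words.values())
normal = all(shapiro(g).pvalue > 0.05 for g in groups)
stat, p = f_oneway(*groups) if normal else kruskal(*groups)
print("Omnibus p =", p)
print(pairwise_tukeyhsd(np.concatenate(groups),
                        np.repeat(list(words.keys()), 114)))
```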

Results

Initially, 129 questions related to orthopaedic surgery and traumatology were collected from the MIR exams. Fifteen were excluded: 14 because they included images and one because it had been cancelled. Table 1 shows the distribution of the questions by year and topic.

Table 1.

Frequency of topic/year.

Year  Topic       Frequency  % of total
2009  Trauma      6          5.3%
      OrthoAdult  2          1.8%
      OrthoChild  2          1.8%
      Spine       1          .9%
2010  Trauma      2          1.8%
      OrthoAdult  2          1.8%
      OrthoChild  2          1.8%
      Spine       1          .9%
2011  Trauma      4          3.5%
      OrthoAdult  4          3.5%
      OrthoChild  2          1.8%
      Spine       0          .0%
2012  Trauma      0          .0%
      OrthoAdult  2          1.8%
      OrthoChild  1          .9%
      Spine       0          .0%
2013  Trauma      2          1.8%
      OrthoAdult  3          2.6%
      OrthoChild  0          .0%
      Spine       1          .9%
2014  Trauma      1          .9%
      OrthoAdult  1          .9%
      OrthoChild  1          .9%
      Spine       0          .0%
2015  Trauma      4          3.5%
      OrthoAdult  2          1.8%
      OrthoChild  0          .0%
      Spine       0          .0%
2016  Trauma      3          2.6%
      OrthoAdult  2          1.8%
      OrthoChild  1          .9%
      Spine       1          .9%
2017  Trauma      4          3.5%
      OrthoAdult  3          2.6%
      OrthoChild  0          .0%
      Spine       1          .9%
2018  Trauma      6          5.3%
      OrthoAdult  3          2.6%
      OrthoChild  1          .9%
      Spine       0          .0%
2019  Trauma      1          .9%
      OrthoAdult  3          2.6%
      OrthoChild  1          .9%
      Spine       0          .0%
2020  Trauma      5          4.4%
      OrthoAdult  3          2.6%
      OrthoChild  1          .9%
      Spine       1          .9%
2021  Trauma      1          .9%
      OrthoAdult  7          6.1%
      OrthoChild  1          .9%
      Spine       2          1.8%
2022  Trauma      2          1.8%
      OrthoAdult  2          1.8%
      OrthoChild  1          .9%
      Spine       2          1.8%
2023  Trauma      3          2.6%
      OrthoAdult  3          2.6%
      OrthoChild  2          1.8%
      Spine       2          1.8%

According to the type of question, 49 (43%) were type 1 (requiring a single piece of knowledge) and 65 (57%) were type 2 (requiring several steps to obtain the answer).

ChatGPT obtained 83 correct answers (72.81%), compared with 69 (60.53%) for Bard and 77 (67.54%) for Perplexity. Cochran's Q test showed a significant difference in the distribution of correct answers between the groups (p=.049); in the post-hoc analysis, a significant difference was only found between ChatGPT and Bard, not between the other pairs. Grouping the answers by type of question, in type 1 ChatGPT answered 36 (73.47%) correctly, Bard 27 (55.10%) and Perplexity 36 (73.47%), a significant difference (p=.030), although the multiple comparisons found no differences between pairs, owing to the subgroup sample size. In type 2 questions there were no significant differences between groups: ChatGPT obtained 47 (72.31%) correct answers, compared with 42 (64.62%) for Bard and 41 (63.08%) for Perplexity.

Table 2 shows, for each question, how many of the three chatbots answered it correctly.

Table 2.

Number of chatbot correct answers.

Correct by all three  Correct by two  Correct by one  Correct by none  Questions
49 (43%)              33 (28.9%)      16 (14%)        16 (14%)         114

Two correct answers                    One correct answer
ChatGPT and Bard                       ChatGPT
ChatGPT and Perplexity  19             Bard
Bard and Perplexity                    Perplexity

In the analysis of each programme's responses, ChatGPT used logical reasoning in 92 (80.7%), internal information in 108 (94.7%) and external information in 50 (43.8%). Bard used logical reasoning in 114 (100%) responses, internal information in 114 (100%) and external information in 111 (97.3%). Perplexity used logical reasoning in 109 (95.6%) responses, internal information in 113 (99.1%) and external information in 60 (52.6%). Regarding this last aspect, we noted that Perplexity included citations linked to web pages, but these were not counted as external information, since evaluating them would have required an analysis beyond the purpose of this study. For logical reasoning, we found a significant difference (p<.001) in favour of Bard compared with the rest. This difference (p<.001) was also present in the external information provided, again favouring Bard over its competitors. For internal information, there was a significant difference (p=.012) between Bard and ChatGPT in favour of the former. Table 3 analyses the responses according to whether the answers were correct or incorrect.

Table 3.

Analysis of answers.

                       ChatGPT                              Bard                                 Perplexity
                       Correct (n=83)   Incorrect (n=31)    Correct (n=69)   Incorrect (n=45)    Correct (n=77)   Incorrect (n=37)
Logical reasoning
  Yes                  69 (83.1%)       23 (74.2%)          69 (100%)        45 (100%)           72 (93.5%)       37 (100%)
  No                   14 (16.9%)       8 (25.8%)           0 (0%)           0 (0%)              5 (6.5%)         0 (0%)
Internal information
  Yes                  79 (95.2%)       29 (93.5%)          69 (100%)        45 (100%)           76 (98.7%)       37 (100%)
  No                   4 (4.8%)         2 (6.5%)            0 (0%)           0 (0%)              1 (1.3%)         0 (0%)
External information
  Yes                  41 (49.4%)       9 (29.1%)           66 (95.7%)       45 (100%)           44 (57.1%)       16 (43.3%)
  No                   42 (50.6%)       22 (70.9%)          3 (4.3%)         0 (0%)              33 (42.9%)       21 (56.7%)

Statistical analysis of the "correct" and "incorrect" subgroups revealed significant differences (p<.001) in external information and in logical reasoning in favour of Bard. We found no significant differences in internal information between the subgroups that answered correctly and incorrectly.

Comparison of the number of words in each answer revealed a significant difference. The pairwise analysis is shown in Table 4.

Table 4.

Word/answer statistics.

AI          Words (mean ± SD)   Tukey post-hoc test – words (mean difference; p value)
                                vs ChatGPT            vs Bard               vs Perplexity
ChatGPT     98.4 ± 56.4         –                     −148; p<.001*         13.1; p=.143
Bard        246 ± 63.7                                –                     160.8; p<.001*
Perplexity  85.3 ± 30.8                                                     –
* Statistical significance.

This significant difference was also obtained in the analysis of the Fernández Huerta modified "readability" index and of the INFLESZ scale. The comparisons between groups are shown in Table 5, which also indicates the level of each scale into which the scores fall.

Table 5.

Legibility of answers.

            Fernández Huerta (FH)               INFLESZ
ChatGPT     58.2 ± 11.1 (somewhat difficult)    53.2 ± 11.3 (somewhat difficult)
Bard        62.7 ± 8.83 (normal)                57.9 ± 9.1 (normal)
Perplexity  56.3 ± 10 (somewhat difficult)      51.4 ± 10.3 (somewhat difficult)

FH levels, from easiest to hardest: very easy, easy, somewhat easy, normal, somewhat difficult, difficult, very difficult. INFLESZ levels: very easy, quite easy, normal, somewhat difficult, very difficult.

Pairwise comparison – FH                        Pairwise comparison – INFLESZ
ChatGPT vs Bard          p=.004*                ChatGPT vs Bard          p=.002*
ChatGPT vs Perplexity    p=.422                 ChatGPT vs Perplexity    p=.507
Bard vs Perplexity       p<.001*                Bard vs Perplexity       p<.001*

FH: Fernández Huerta index.

* Statistical significance.

ChatGPT gave 31 wrong answers. The cause was a logical error in 10 (32.2%), an information error in 13 (41.9%) (Fig. 2), a statistical error in one (3.2%) and a combination of logical and information errors in 7 (22.5%).

Figure 2.

Example of information error.


Bard had 45 incorrect answers: a logical error in 3 (6.6%), an information error in 3 (6.6%), a statistical error in 1 (2.2%) (Fig. 3) and a combination of logical and information errors in 38 (84.4%).

Figure 3.

Example of statistical error.


Perplexity had 37 incorrect answers: a logical error in 9 (24.3%) (Fig. 4), an information error in 14 (37.8%), a statistical error in 2 (5.4%) and a combination of logical and information errors in 12 (32.4%).

Figure 4.

Example of logical error.

Discussion

Our results indicate that the conversational AI programmes analysed (ChatGPT, Bard and Perplexity) pass an exam made up of the orthopaedic surgery and traumatology questions from the MIR exams for the period 2008–2023. ChatGPT performed better than Bard and similarly to Perplexity. Our data improve on those of Jin et al.15 on similar North American exams and of Carrasco et al.,23 who analysed the 2022 Spanish MIR exam, in which 54.8% of all questions without images were answered correctly, rising to 62.5% in the subgroup of traumatology questions. This gain reflects the continuous improvement to which these systems are subject. The analysis of correct answers by type of question was inconclusive, suggesting that the applications respond similarly to questions requiring one or several reasoning steps, at least with this sample size.

Our results reveal that all three applications use logical reasoning and external information in a large number of their correct answers, with Bard ranking first in this respect. However, this advantage does not translate into better overall results.

AI hallucinations include biases (they can give xenophobic answers, for example), mistakes or omissions.24 This problem, inherent to the design and operational structure of the model, undermines the reputation of the AI, negatively affects decision-making and can lead to ethical and legal conflicts.25 To avoid this problem, it has been recommended to use several AIs to reinforce the quality of the answer. Our results indicate that almost half of the questions were answered correctly by the three AIs simultaneously. However, 14% of the questions were incorrectly answered by all of them. The use of several AIs does not therefore completely eliminate the presence of hallucinations.

It is interesting to note that incorrect answers also display logical reasoning and use of internal information. For ChatGPT, it was already known that errors rely less on these two factors.22 In Bard and Perplexity they appear in 100% of the errors, which suggests that these models justify their output by providing more complete answers backed by the information actually contained in the question. Regarding the use of external information, Bard stands out for using it in practically all of its answers, correct and erroneous alike. In all three, external information appears more often in correct answers than in incorrect ones, particularly in ChatGPT.22,23 As previously stated, Perplexity includes citations to other sites, but the objective of this study was to determine the response the application provided, not to analyse its sources. Providing citations could be considered of great value in supporting a response, but it is outside the scope of this study.

Text legibility is essential in generative "chatbots" that offer medical information. We know that readability improves with shorter sentences containing fewer words,21 although it is also related to the complexity of the words and the presence of visuals.26 It is interesting to note that Bard offers better comprehensibility than the other applications analysed; furthermore, it often adds images to enhance the information. ChatGPT and Perplexity produce "somewhat difficult" responses, which may affect the dialogue with untrained users.

This study has its limitations. Firstly, the MIR exam questions are not comparable to the questions an AI user might ask. The purpose of this study was not so much to know whether the applications would pass an exam as to see how they responded to the questions and how much information they offered. Secondly, we know that a chatbot's response can be modulated by the instructions, or prompts, we give it, improving its quantity or quality, but we avoided this modulation so that the responses were as "spontaneous" as possible. Thirdly, of the various existing assessment systems, two were used to evaluate the responses; they were chosen because they have been validated for Spanish and for medical texts, although there are no comparative studies between them. Another possible limitation is that ChatGPT 3.5, unlike the other two systems, had no access to Internet information from 2021 onwards; however, most of the population uses the free applications, so the comparison remains valid, and the questions analysed did not require information from 2021 onwards. It is interesting to note that the systems producing longer, more legible answers with more external information did not achieve better results.

This study provides a basis for further research into this novel and interesting topic. We suggest investigating patients' dialogue with chatbots about their medical questions, how a chatbot can explain the medical information we offer our patients, or how modulating questions through prompts can improve the quality of the responses. The reliability of these programmes could also be compared with that of medical students or medical staff, both in training and in practice.

To conclude, conversational chatbots can be a very interesting tool for resolving medical issues, but they are not exempt from making mistakes, or “hallucinations”, which can have important implications for patients and doctors. We should warn against their use in the general untrained population and remember that medical information produced by medical professionals must prevail hierarchically over the development of generative AI.

Level of evidence

Level of evidence IV.

Ethical considerations

The study was not conducted on humans or animals and did not require informed consent. Nor did it require approval from the Ethics Committee.

Funding

No funding was received for this article.

Conflict of interests

The authors have no conflict of interests to declare.

References
[1]
A.B. Lisacek-Kiosoglous, A.S. Powling, A. Fontalis, A. Gabr, E. Mazomenos, F.S. Haddad.
Artificial intelligence in orthopaedic surgery: exploring its applications, limitations, and future direction.
J Bone Joint Res, 12 (2023), pp. 447-454
[2]
D. Benavent Núñez, J. Colomer Mascaró, L. Quecedo Gutiérrez, J. Gol-Montserrat, J.E. del Llano Señarís.
Inteligencia artificial y decisiones clínicas: Cómo está cambiando el comportamiento del médico.
Fundación Gaspar Casal, (2020),
[3]
J. Mayol.
Inteligencia artificial generativa y educación médica.
Educación Médica, 4 (2023), pp. 1-3
[4]
K. Chung, R.C. Park.
Chatbot-based heathcare service with a knowledge base for cloud computing.
Cluster Comput, 22 (2019), pp. 1925-1937
[5]
C. Garcia-Vidal, G. Sanjuan, P. Puerta-Alcalde, E. Moreno-García, A. Soriano.
Artificial intelligence to support clinical decision-making processes.
EBioMedicine, 46 (2019), pp. 27-29
[6]
R. Marlow, D. Wood.
Ghost in the machine or monkey with a typewriter-generating titles for Christmas research articles in The BMJ using artificial intelligence: observational study.
BMJ, 375 (2021), pp. e067732
[7]
J. Cárdenas.
Inteligencia artificial, investigación y revisión por pares: escenarios futuros y estrategias de acción.
[8]
J.W. Ayers, A. Poliak, M. Dredze, E.C. Leas, Z. Zhu, J.B. Kelley, et al.
Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum.
JAMA Intern Med, 183 (2023), pp. 589-596
[9]
O. Miles, R. West, T. Nadarzynski.
Health chatbots acceptability moderated by perceived stigma and severity: a cross-sectional survey.
[10]
D. Aedo-Martín.
Inteligencia artificial: futuro y desafíos en la medicina moderna.
Rev Esp Cir Ortop Traumatol, 68 (2024), pp. 428-429
[11]
T. Dave, S.A. Athaluri, S. Singh.
ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations.
Front Artif Intell, 6 (2023), pp. 1169595
[12]
Encuesta sobre equipamiento y uso de tecnologías de la información y comunicación en los hogares. Instituto Nacional de Estadística (INE). Available from: www.ine.es [updated 16.12.22; consulted 15.03.23].
[13]
M. Pasquinelli.
Cómo una máquina aprende y falla – Una gramática del error para la Inteligencia Artificial.
Hipertextos, 10 (2022), pp. 13-29
[14]
N.D. Clement, A.H.R.W. Simpson.
Artificial intelligence in orthopaedics: what level of evidence does it represent and how is it validated?.
Bone Joint Res, 12 (2023), pp. 494-496
[15]
D. Jin, E. Pan, N. Oufattole, W.H. Weng, H. Fang, P. Szolovits.
What disease does this patient have? A large-scale open domain question answering dataset from medical exams.
[16]
T. Trabasso.
The development of coherence in narratives by understanding intentional action.
Adv Psychol, 79 (1991), pp. 297-314
[17]
R. Flesch.
A new readability yardstick.
J Appl Psychol, 32 (1948), pp. 221-233
[18]
J.P. Kincaid, R.P. Fishburne, R.L. Rogers, B.S. Chissom.
Derivation of new readability formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy enlisted personnel.
Institute for Simulation and Training, (1975), pp. 8-75
[19]
J. Fernández Huerta.
Medidas sencillas de lecturabilidad.
Consigna, 214 (1959), pp. 29-32
[20]
Law G. Error in the Fernandez Huerta Readability Formula. Available from: https://linguistlist.org/issues/22/22-2332/ [consulted 15.03.23].
[21]
I.M. Barrio-Cantalejo, P. Simón-Lorda, M. Melguizo, I. Escalona, M.I. Marijuán, P. Hernando.
Validación de la Escala INFLESZ para evaluar la legibilidad de los textos dirigidos a pacientes.
An Sist Sanit Navar, 31 (2008), pp. 135-152
[22]
A. Gilson, C.W. Safranek, T. Huang, V. Socrates, L. Chi, R.A. Taylor, et al.
How does ChatGPT perform on the United States medical licensing examination? The implications of large language models for medical education and knowledge assessment.
JMIR Med Educ, 9 (2023), pp. e45312
[23]
J.P. Carrasco, E. García, D.A. Sánchez, E. Porter, L. de la Puente, J. Navarro, et al.
¿Es capaz “ChatGPT” de aprobar el examen MIR de 2022? Implicaciones de la inteligencia artificial en la educación médica en España.
Rev Esp Educ Méd, 4 (2023), pp. 55-59
[24]
K. Van Deemter, E. Reiter.
Lying and computational linguistics.
Oxford Handbooks, (2018), pp. 420-435 http://dx.doi.org/10.1093/oxfordhb/9780198736578.013.32
[25]
S.A. Athaluri, S.V. Manthena, V.K.M. Kesapragada, V. Yarlagadda, T. Dave, R.T.S. Duddumpudi.
Exploring the boundaries of reality: investigating the phenomenon of artificial intelligence hallucination in scientific writing through ChatGPT references.
Cureus, 15 (2023), pp. e37432
[26]
A. Blanco Pérez, U. Gutierrez Couto.
Legibilidad de las páginas web sobre salud dirigidas a pacientes y lectores de la población general.
Rev Esp Salud Pública [online], 76 (2002), pp. 321-331
Copyright © 2024. SECOT