Artificial intelligence (AI) is currently developing at an unprecedented rate. While this technology has brought incredible achievements, it has also led to a certain “dehumanisation” in some sectors. AI is no longer science fiction and has come to the forefront of the general press with tools such as ChatGPT from the company OpenAI and, more recently, Microsoft's incorporation of this technology into its Bing search engine. AI deals with the study and development of systems and algorithms that allow machines to perform tasks that would otherwise require human intelligence. Within this field, machine learning and deep learning are subfields that use algorithms and systems to learn automatically from data.
In the coming years we will probably see continued scaling of this technology, which will change the way we perceive and relate to the world, and which will replace human labour in many cases, not only in mechanical or low-skilled jobs, but even in some that require intellectual and creative skills. We have already seen that these tools are capable of performing countless tasks: driving autonomous vehicles with precision, transforming text into images or writing scientific articles1 are just a few examples. Inevitably, AI will sooner or later impact our profession, whether for better or worse.
New technologies based on AI and big data are already a reality in medicine and will be essential in improving treatments and patients' quality of life. Medical interest in this technology is growing: more articles using the term artificial intelligence have been published in PubMed in the last 4 years than in the previous 2 decades combined.
In the field of orthopaedic and trauma surgery, AI can be a valuable tool for improving the quality of medical care and the efficiency of patient treatment.2 From medical image analysis3 to the prediction of complication risk, AI is proving to be a technology capable of significantly improving our daily practice.4
However, it is important to consider carefully the potential risks and challenges posed by its use. It is crucial to ensure transparency and accountability in the development and deployment of medical AI algorithms, and to ensure that they are applied ethically and fairly. Furthermore, we must ensure that AI does not supplant us: doctors remain ultimately responsible for decisions in the treatment of their patients.5
To conclude, AI is a tool with the potential to revolutionise the world. However, the challenges and concerns raised by its use must be addressed carefully and responsibly, always bearing in mind our patients' wellbeing.
I hope these comments are of interest to your readers, and I appreciate this opportunity to share my views. I would add that this article has been reviewed and corrected by ChatGPT in its current version (3.5), which raises an additional ethical and authorship problem in scientific literature and patents. The debate is open.
Level of evidence: V.