Artificial intelligence in thoracic imaging—a new paradigm for diagnosing pulmonary diseases: a narrative review
Abstract
Purpose
This review explores the current applications and future prospects of artificial intelligence (AI) in thoracic imaging, with a particular focus on chest radiography (chest X-ray, CXR) and computed tomography (CT).
Current Concepts
Recently developed CXR AI algorithms have improved the efficiency, accuracy, and consistency of radiologists' routine clinical workflows by assisting in the detection of a wide range of thoracic diseases on CXR. These AI systems demonstrate diagnostic performance comparable to that of radiology residents who have limited interpretive experience. Furthermore, generative CXR AI technologies are capable of not only automatically detecting abnormalities such as pulmonary nodules, pneumonia, pneumothorax, and tuberculosis, but also generating radiology reports. These advancements represent a paradigm-shifting innovation that may significantly alter the current landscape of CXR interpretation in thoracic radiology. Although performance varies depending on the specific algorithm and dataset, AI applied to low-dose chest CT has demonstrated diagnostic accuracy ranging from 0.81 to 0.98 for nodule detection and malignancy assessment, with sensitivity ranging from 0.88 to 0.99 and specificity from 0.82 to 0.93. Incorporating AI as a second reader in CT interpretation can reduce reading time by approximately 20%, while also improving sensitivity for pulmonary nodule detection by 5% to 20% and malignant nodule diagnosis by 3% to 15%.
Discussion and Conclusion
Both CXR AI and chest CT AI streamline image interpretation by assisting with simple and repetitive tasks. Simultaneously, they provide novel diagnostic insights that are expected to influence and potentially reshape the interpretative patterns of radiologists in the near future.
Introduction
Artificial intelligence (AI) refers to computer systems endowed with intelligence similar to human cognition. Since the advent of deep learning in 2006, the subsequent decade has witnessed a boom in AI, propelled by advancements in computational power, the development of diverse algorithms, and the utilization of big data [1–3]. In recent years, the application and development of AI technologies in medicine have gained significant international momentum [4]. In South Korea, AI is already integrated into clinical practice, particularly in aiding diagnostic imaging [5–7]. However, as AI currently remains limited to an assistive role for physicians, it is imperative to fully comprehend its limitations, with final clinical judgments remaining the responsibility of medical professionals [2,3,6–9].
The evolution of AI is fundamentally transforming the diagnostic paradigm for various pulmonary diseases within thoracic imaging [3,6,8]. Historically, the interpretation of chest radiography (chest X-ray, CXR) and computed tomography (CT) relied heavily on radiologists' extensive experience and intuition. With the integration of AI technologies, a new paradigm has emerged wherein radiologists employ AI-driven image analysis for CXR and chest CT, thereby enhancing the accuracy and efficiency of pulmonary disease diagnosis [2,6,7]. This transformation is primarily attributable to AI's ability to augment radiologists' seasoned expertise and intuition by recognizing patterns imperceptible to the human eye. In thoracic imaging, AI is already operational in clinical settings, with applications including the detection of pulmonary nodules on CXR and chest CT [1,5,10,11], assisting in distinguishing benign from malignant pulmonary nodules [3,9,12], and automated quantification of emphysema or pulmonary fibrosis severity within the lungs through image processing [13–15].
This review aims to elucidate the current applications and future prospects of AI in thoracic imaging, highlighting the emerging paradigm for diagnosing pulmonary diseases using CXR and chest CT.
As a literature-based study not involving human subjects, neither institutional review board approval nor informed consent was required.
Current and future applications of AI in chest radiography image interpretation
CXR has long served as a cornerstone imaging modality for lung cancer screening and for evaluating respiratory symptoms to ascertain the presence of pulmonary diseases. The interpretation of CXR findings and their utilization for diagnosing respiratory conditions represent a fundamental skill for physicians [12,16]. However, the inherent limitation of CXR lies in its representation of complex thoracic structures as a single planar image, which poses ongoing challenges for accurate interpretation. Notably, diagnostic errors in early lung cancer detection using CXR are reported to range from 20% to 50%, and missed diagnoses can result in delayed treatment, profoundly affecting patient prognosis [12,16,17]. Factors contributing to missed lung cancer diagnoses on CXR include reader error, tumor characteristics, and technical aspects related to CXR acquisition. For instance, when lesion size is less than 1 cm, the likelihood of missing lung cancer on CXR is approximately 30%, a rate that can only be mitigated through improvements in reader proficiency and advancements in CXR imaging technology. Although developments in digital imaging have significantly enhanced CXR acquisition techniques, reader error persists, with reported rates between 25% and 40% (Figure 1). This error rate is particularly pronounced among residents or physicians with limited experience in CXR interpretation compared to thoracic imaging specialists [12,17]. Strategies have been proposed to reduce reader error, including restricting CXR interpretation to thoracic imaging specialists or requiring double-reading involving specialists and less experienced physicians. However, these approaches are currently impractical in South Korea.
Chest posteroanterior (PA) radiograph illustrating missed lung cancer. (A) The initial chest X-ray (CXR) was interpreted as normal by a radiologist. (B) Two years later, another radiologist interpreted the patient's CXR as revealing a mass in the right upper lung field (white arrow), noting an increase in size compared to the previous examination. (C) The patient underwent a chest computed tomography scan, confirming lung cancer in the right upper lobe (black arrow). The author provided the chest PA image after obtaining informed consent from the patient.
The growing necessity for AI in CXR interpretation (Figures 2, 3) stems from its potential to alleviate the workload of radiologists while simultaneously reducing diagnostic errors [5,6,11,18]. Among AI methodologies, convolutional neural networks (CNNs) are widely utilized in CXR analysis, with their efficacy substantiated through extensive research and clinical applications. What, then, is the current diagnostic capability of CXR AI? Although the performance of AI systems varies, Wu et al. [5] compared an AI model, trained on anteroposterior (AP) CXRs from emergency department patients, with 3 radiologists experienced in CXR interpretation. The AI achieved a sensitivity of 0.716 (95% confidence interval [CI], 0.704–0.729), comparable to the radiologists’ sensitivity of 0.720 (95% CI, 0.709–0.732), without a statistically significant difference (P=0.66). However, the positive predictive value (PPV) was significantly higher for the AI at 0.730 (95% CI, 0.718–0.742), compared to 0.682 (95% CI, 0.670–0.694) for the radiologists (P<0.001). Specificity was also superior for the AI at 0.980 (95% CI, 0.980–0.981) versus 0.973 (95% CI, 0.971–0.974) for the radiologists (P<0.001). These findings suggest that, for AP CXRs from emergency patients, AI matches radiologists in sensitivity while providing higher PPV and specificity.
Chest posteroanterior (PA) radiograph of lung cancer in the left lower lung field. (A) A large mass (black arrow) is present in the retrocardiac area of the left lower lung field on the chest X-ray (CXR). (B) The mass in the retrocardiac area of the left lower lung field is identified by the artificial intelligence-based computer-aided detection software, indicating an abnormality probability of 72%. (C) Chest computed tomography scan demonstrates an 8.5 cm solid nodule (black arrow) in the left lower lobe. The patient underwent a percutaneous needle biopsy, confirming adenocarcinoma. The author provided the chest PA image after obtaining informed consent from the patient.
Chest posteroanterior (PA) radiograph of lung cancer accompanied by interstitial lung disease in the right lower lung field. (A) The chest X-ray shows a solitary pulmonary nodule (black arrow) in the right lower lung field, along with increased opacity in the right upper lung field and both lower lung fields (black arrow). (B) The solitary pulmonary nodule (black arrow) in the right lower lung field is detected by artificial intelligence (AI)-based computer-aided detection (CAD) software, with an abnormality probability of 94%. Increased opacity (black arrow) in the right upper lung field is also detected by the AI-based CAD software (abnormality probability 60%). However, the AI-based CAD software fails to detect the increased opacity in the basal lower lung field. (C) A chest computed tomography scan reveals a 2.9 cm solid nodule (black arrow) in the right lower lobe. Percutaneous needle biopsy confirmed adenocarcinoma. Additionally, fibrosis due to old pulmonary tuberculosis was present in the right upper lobe, and lung fibrosis associated with usual interstitial pneumonia was observed in both lower lobes. The author provided the chest PA image after obtaining informed consent from the patient.
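The reader-study metrics cited throughout this section (sensitivity, specificity, PPV, and negative predictive value) are all derived from the same 2×2 table of true and false positives and negatives. As a minimal illustration, the following Python sketch computes them from hypothetical counts; the numbers are not taken from any of the cited studies.
```python
# Minimal sketch: deriving reader-study metrics from a 2x2 confusion table.
# The example counts below are hypothetical, not from any cited study.
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard diagnostic performance metrics from true/false positive and negative counts."""
    return {
        "sensitivity": tp / (tp + fn),                # abnormal CXRs correctly called abnormal
        "specificity": tn / (tn + fp),                # normal CXRs correctly called normal
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

if __name__ == "__main__":
    # Hypothetical reader on 1,000 CXRs, of which 200 are truly abnormal.
    print(diagnostic_metrics(tp=144, fp=20, tn=780, fn=56))
```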
Ahn et al. [18] evaluated 6 radiologists (2 thoracic imaging specialists, 2 thoracic imaging fellows, and 2 residents) against AI in interpreting 497 CXRs (247 from the MIMIC-CXR dataset and 250 from Massachusetts General Hospital [MGH]) for 4 major findings: pneumonia, nodules, pneumothorax, and pleural effusion, present in 351 CXRs. The AI demonstrated higher sensitivity across all findings: nodules (AI: 0.816 [95% CI, 0.732–0.882] vs. radiologists: 0.567 [95% CI, 0.524–0.611]), pneumonia (AI: 0.887 [95% CI, 0.834–0.928] vs. radiologists: 0.673 [95% CI, 0.632–0.714]), pleural effusion (AI: 0.872 [95% CI, 0.808–0.921] vs. radiologists: 0.889 [95% CI, 0.862–0.917]), and pneumothorax (AI: 0.988 [95% CI, 0.932–1.000] vs. radiologists: 0.792 [95% CI, 0.756–0.827]). Moreover, AI-assisted interpretation significantly improved radiologists’ sensitivity, especially for detecting pneumothorax and pulmonary nodules. Additionally, AI assistance reduced reading time by 10% (40.8 seconds vs. 36.9 seconds; difference, 3.9 seconds; 95% CI, 2.9–5.2 seconds; P<0.001). In another study, Nam et al. [11] compared AI-assisted and non-AI-assisted groups in CXR interpretation for health-screening participants, finding that AI significantly improved detection rates of clinically significant pulmonary nodules (0.59% [31/5,238] in the AI group vs. 0.25% [13/5,238] in the non-AI group, P=0.008). However, the positive call rate for nodule detection showed no significant difference (2.3% [122/5,238] in the AI group vs. 1.9% [100/5,238] in the non-AI group, P=0.14), and false-referral rates were comparable (45.9% [56/122] in the AI group vs. 56.0% [56/100] in the non-AI group, P=0.14). The AI group exhibited higher sensitivity (56.4% vs. 23.2%, P<0.001), PPV (35.6% vs. 18.8%, P<0.02), and negative predictive value (99.0% vs. 98.2%, P<0.03), with similar specificity (97.6% vs. 97.7%, P=0.94). Notably, the non-AI group showed variability in positive call rates among radiologists, whereas the AI group demonstrated consistent positive call rates (P=0.87). These findings indicate that AI can reduce inter-radiologist variability in nodule detection and interpretation. These studies suggest that current AI diagnostic capabilities are comparable to those of less experienced radiologists, help mitigate variability among radiologists, and reduce errors in lung cancer diagnosis using CXR. However, it is critical to recognize that CXR AI is not a universal solution; its applicability depends on the patient population utilized for training (Table 1). For example, AI trained on emergency department patients aids in emergency CXR interpretation, whereas AI trained on health-screening participants enhances diagnostic accuracy in that context [2–5,7,10,11,19].
Commercially approved artificial intelligence solutions for chest X-rays, with their reported training datasets
Looking ahead to future developments of AI in CXR interpretation, the advent of generative AI technologies marks a transition from CNN-based systems to generative models [20,21]. Generative AI learns the distribution patterns of existing data to produce novel outputs, exemplified by systems such as OpenAI’s ChatGPT. In this context, “novel outputs” refer to results that resemble training data but remain distinct from it. For example, generative CXR AI could generate diagnostic reports and diagnoses directly from CXR data annotated with textual descriptions (Figure 4) [22,23]. How does this differ from current CNN-based CXR AI? CNN-based systems provide lesion localization or presence determination based solely on trained CXR image data, serving exclusively as diagnostic aids [3,7,10,11,19]. In contrast, generative AI can be trained using CXRs accompanied by textual reports, enabling it to generate entirely new diagnostic reports for unseen CXRs through iterative learning processes applied to large datasets. Although research on generative AI for CXR interpretation remains in the early stages, Huang et al. [20] compared a transformer-based encoder-decoder AI model with radiologists in evaluating the clinical significance of CXRs from 500 emergency department patients. No significant differences were observed across report types (radiologists: mean [standard error], 0.98 [0.01]; AI: 0.96 [0.01]; teleradiology: 0.94 [0.02]; P=0.12) or between normal and abnormal findings (abnormal, 0.97 [0.01]; normal, 0.97 [0.01]; P=0.64).
Chest anteroposterior (AP) radiograph and chest computed tomography (CT) scan showing lung cancer in the left lower lobe. (A) Chest AP radiograph demonstrates a mass (black arrow) in the left hilar region. (B) Using color annotation, generative artificial intelligence (AI) highlights the mass in the left hilar region on the chest AP radiograph. (C) Simultaneously, the generative AI produces a textual report describing the findings on the chest AP radiograph. (D) Chest CT confirms lung cancer (black arrow) in the left lower lobe. The author provided the chest AP and CT images after obtaining informed consent from the patient.
When compared to radiologists’ reports as the standard reference, the generative AI achieved a sensitivity of 84.8% and specificity of 98.5% in identifying abnormal findings and diagnoses. However, this particular model was trained on CXRs from 900,000 emergency department patients presenting primarily with chest pain or dyspnea, limiting its generalizability to other clinical contexts. Nonetheless, generative CXR AI capable of automatically detecting key abnormalities such as pulmonary nodules, pneumonia, pneumothorax, and tuberculosis, as well as generating corresponding diagnostic reports, is poised to significantly transform the paradigm of CXR interpretation in thoracic imaging.
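As an architectural illustration of how a transformer-based encoder-decoder model couples an image encoder with an autoregressive text decoder to produce report text, the following PyTorch sketch is offered under stated assumptions: the class name, layer sizes, and the omission of positional encodings are illustrative choices, and the code does not reproduce the model evaluated by Huang et al. [20]. In practice, such a model is trained with teacher forcing on paired CXR-report data and decoded token by token at inference.
```python
# Simplified sketch of a CXR report generator: a small CNN image encoder feeds
# visual tokens into a transformer decoder that predicts report tokens.
# Hypothetical architecture for illustration only.
import torch
import torch.nn as nn

class CxrReportGenerator(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        # Image encoder: convolutions that turn a CXR into a grid of visual tokens.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=7, stride=4, padding=3), nn.ReLU(),
            nn.Conv2d(64, d_model, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Text decoder: standard transformer decoder over report tokens
        # (positional encodings omitted for brevity).
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image: torch.Tensor, report_tokens: torch.Tensor) -> torch.Tensor:
        # image: (B, 1, H, W); report_tokens: (B, T) integer token ids
        feats = self.encoder(image)                        # (B, d_model, h, w)
        memory = feats.flatten(2).transpose(1, 2)          # (B, h*w, d_model) visual tokens
        tgt = self.token_emb(report_tokens)                # (B, T, d_model)
        causal = nn.Transformer.generate_square_subsequent_mask(report_tokens.size(1))
        out = self.decoder(tgt, memory, tgt_mask=causal)   # cross-attends to visual tokens
        return self.lm_head(out)                           # (B, T, vocab_size) next-token logits

if __name__ == "__main__":
    model = CxrReportGenerator()
    logits = model(torch.randn(1, 1, 256, 256), torch.randint(0, 10000, (1, 32)))
    print(logits.shape)  # torch.Size([1, 32, 10000])
```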
Current and future applications of AI in chest CT image interpretation
At present, AI systems for CXR and chest CT are primarily used as assistive diagnostic tools (AI-assisted diagnosis). However, chest CT AI demands significantly higher precision and reliability than CXR AI due to the requirement for meticulous diagnosis across a broad spectrum of pulmonary diseases. Consequently, developing AI systems for chest CT is considerably more challenging. While CXR is among the most frequently performed imaging modalities in hospitals, providing abundant data for AI training, securing similarly large datasets required for chest CT AI development is substantially more difficult. Moreover, the training process necessitates more sophisticated lesion labeling. Since chest CT imaging involves three-dimensional volumetric data, its analysis requires substantially greater computational resources, more complex model architectures, and increased demands on graphics processing unit memory and storage capacity. A critical obstacle in chest CT AI development is the variability of imaging protocols across institutions, including differences in imaging equipment, reconstruction algorithms, and contrast agents. This variability often leads to “domain bias,” wherein a chest CT AI model performs optimally only at the specific institution where it was initially developed [3,6].
Despite these challenges, AI systems designed to detect and quantify pulmonary nodules using low-dose chest CT for lung cancer screening are widely utilized in clinical practice and play a pivotal role in the early diagnosis of lung cancer (Figure 5) [1–3,6–9]. In lung cancer screening, AI employs various algorithms to maintain optimal image quality while reducing radiation exposure, thus enabling risk stratification of detected lung cancers and facilitating personalized screening protocols. Computer-aided detection systems integrated with AI enhance the sensitivity of pulmonary nodule detection in low-dose chest CT and reduce image interpretation time. Furthermore, AI assists in differentiating benign from malignant pulmonary nodules [1,9,24,25]. However, the diagnostic accuracy (ranging from 0.81 to 0.98), sensitivity (0.88–0.99), and specificity (0.82–0.93) for nodule detection and classification vary depending on the specific AI system and dataset employed [1–3,7,24,26]. Geppert et al. [25] reviewed studies conducted from 2012 to 2023 regarding the utility of AI in lung cancer screening with chest CT, reporting that AI systems developed by 6 companies were applied to 19,770 patients worldwide (Table 2). A consistent finding across these studies is that AI-assisted chest CT interpretation reduces reading time by approximately 20% compared to non-AI-assisted interpretation [26] and improves sensitivity for detecting and diagnosing malignant pulmonary nodules (nodule detection/classification ≥6 mm improved by 5% to 20%; malignant nodule detection/classification improved by 3% to 15%). However, one drawback of AI-assisted chest CT interpretation is the tendency of radiologists to classify pulmonary nodules into higher-risk categories (Figure 6) [24,26–30].
Low-dose chest computed tomography (CT) scan illustrating a mixed ground-glass nodule (GGN) in the right middle lobe. (A) Low-dose chest CT performed as part of the National Lung Cancer Screening program reveals a mixed GGN (black arrow) with a spiculated margin in the right middle lobe. (B) The artificial intelligence software detects the mixed GGN in the right middle lobe, automatically measuring its size (total diameter, 11.4 mm; central solid portion, 2.9 mm), categorizing it as Lung CT Screening Reporting and Data System category 3. The author provided the CT image after obtaining informed consent from the patient.
Low-dose chest computed tomography (CT) scan illustrating a solid pulmonary nodule in the left lower lobe. (A) Low-dose chest CT performed as part of the National Lung Cancer Screening program reveals a solid nodule (white arrow) with a smooth margin in the left lower lobe. (B) Artificial intelligence (AI) software automatically detects and measures the solid nodule in the left lower lobe (6.2 mm), categorizing it as Lung CT Screening Reporting and Data System (Lung-RADS) category 3. However, the AI overestimates the nodule's size. (C) Upon manual correction, the nodule size is accurately measured at 4.1 mm, downgrading the Lung-RADS category from 3 to 2. The author provided the CT image after obtaining informed consent from the patient.
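To illustrate the kind of rule-based categorization such software applies once a nodule's type and diameter have been measured (Figures 5, 6), the sketch below encodes a simplified subset of the baseline Lung-RADS size thresholds for solid nodules. It is an illustrative approximation only; the complete Lung-RADS specification, which also covers nonsolid and part-solid nodules, growth criteria, and modifiers, should be consulted for clinical use.
```python
# Simplified, illustrative sketch of baseline Lung-RADS categorization for a
# solid nodule by mean diameter (mm). Not a complete or clinically validated
# implementation of the Lung-RADS specification.
def baseline_lung_rads_solid(diameter_mm: float) -> str:
    if diameter_mm < 6.0:
        return "2"    # benign appearance or very low suspicion of malignancy
    if diameter_mm < 8.0:
        return "3"    # probably benign; short-interval follow-up CT
    if diameter_mm < 15.0:
        return "4A"   # suspicious
    return "4B"       # very suspicious

if __name__ == "__main__":
    # The correction shown in Figure 6: 6.2 mm (category 3) remeasured as 4.1 mm (category 2).
    print(baseline_lung_rads_solid(6.2), baseline_lung_rads_solid(4.1))
```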
Although still in the research phase, AI models capable of predicting lung cancer risk have been developed. Ardila et al. [31] developed an AI model for lung cancer risk prediction, achieving an area under the curve (AUC) of 0.944 in 6,716 participants from the National Lung Screening Trial (NLST) and 1,139 participants in an independent clinical validation cohort. Notably, in cases lacking prior chest CT images for comparison, this model outperformed radiologists, reducing false positives by 11% and false negatives by 5%. Similarly, Adams et al. [32] developed a model that combined an AI-based malignant nodule risk score with Lung CT Screening Reporting and Data System (Lung-RADS) classifications from 6 radiologists, based on 3,197 early lung cancer screening CT examinations. This combined model reclassified 41 cases (0.2%) from Lung-RADS categories 1 or 2 to category 3 and downgraded 5,750 cases (30%) from category 3 or higher to category 2, suggesting that such models could reduce unnecessary follow-up examinations in lung cancer screening. Mikhael et al. [33] developed the Sybil model using low-dose chest CT data from the NLST (6,282 cases), MGH (8,821 cases), and Chang Gung Memorial Hospital (12,280 cases, comprising individuals with varied smoking histories, including never-smokers). Sybil achieved AUCs ranging from 0.86 to 0.94 for 1-year lung cancer risk prediction and 0.75 to 0.81 for 6-year lung cancer risk prediction, demonstrating its potential value for early detection and personalized patient management.
Beyond lung cancer, AI systems are being developed and studied for detecting pulmonary embolism in CT pulmonary angiography, classifying interstitial lung disease findings on chest CT, and diagnosing emphysema or small airway diseases using inspiratory and expiratory chest CT scans. AI designed to detect pulmonary embolism in CT pulmonary angiography can identify abnormalities early in patients with suspected acute respiratory distress due to pulmonary embolism (Figure 7), serving as an assistive system by prioritizing cases for radiologist review [34–36]. Rothenberg et al. [34] reported that the average wait time for interpretation of pulmonary embolism cases decreased from 21.5 minutes without AI assistance to 11.3 minutes with AI assistance (P<0.001). Although AI improved diagnostic accuracy (98.6% vs. 97.6%) and reduced the missed pulmonary embolism rate (6.1% vs. 12.3%), these improvements did not achieve statistical significance (P=0.15 and P=0.11, respectively).
Quantitative analysis of lung damage in interstitial lung disease or chronic obstructive pulmonary disease (COPD) is being explored to correlate clinical symptoms with chest CT findings and develop prognostic biomarkers [1–3,7,8,13–15,35]. However, manual quantification of lung damage on chest CT is time-consuming, limiting its clinical utility [6,7]. Advances in AI have enabled automated segmentation and quantification of lung damage with high accuracy and efficiency. Walsh et al. [37] trained an AI algorithm using 1,157 anonymized high-resolution CT scans from 2 institutions, based on the 2011 American Thoracic Society/European Respiratory Society/Japanese Respiratory Society/Latin American Thoracic Association (ATS/ERS/JRS/ALAT) idiopathic pulmonary fibrosis (IPF) diagnostic guidelines and Fleischner Society criteria. The algorithm’s performance was compared with that of 4 thoracic imaging specialists using 75 high-resolution CT scans from patients with IPF. According to the 2011 ATS/ERS/JRS/ALAT guidelines, radiologists achieved an accuracy of 70.7%, whereas the AI demonstrated 73.3% accuracy. Inter-radiologist agreement was good (0.67 [interquartile range, 0.58–0.72]), as was agreement between the AI and radiologists (0.69). The authors concluded that AI-based high-resolution CT evaluation of IPF is cost-effective, reproducible, and comparable in accuracy to assessments by thoracic imaging specialists, potentially benefiting institutions with limited thoracic imaging expertise (Figure 8). Chae et al. [14] analyzed low-dose chest CT scans from 3,118 participants in the Korean National Lung Cancer Screening Program, identifying interstitial lung abnormalities (ILAs) in 120 (4%) cases with visual extents ≥5%. Using AI for quantitative ILA analysis, a threshold of 1.8% ILA extent corresponded precisely with the visual ≥5% criterion, achieving 100% sensitivity and 99% specificity. The study concluded that AI outperformed radiologists in diagnosing ILAs with greater sensitivity and specificity (Figure 9).
Enhanced chest computed tomography (CT) illustrating acute pulmonary thromboembolism. (A) Enhanced chest CT shows a low-density thrombus (white arrow) in the right interlobar pulmonary artery. (B) Artificial intelligence software automatically detects the thrombus (white arrow) in the pulmonary artery, providing quantified thrombus burden (right pulmonary artery, 513 mm³; pulmonary arteries in the right lower lobe, 978 mm³; pulmonary arteries in the left lower lobe, 163 mm³) in both pulmonary arteries. The author provided the CT image after obtaining informed consent from the patient.
Chest computed tomography (CT) illustrating usual interstitial pneumonia. (A) The initial chest CT reveals mild honeycombing and predominantly reticular opacity (black arrow) in the posterobasal segments of both lower lobes. (B) Quantitative analysis using artificial intelligence (AI) shows reticular opacity (orange color) and honeycombing (red color) comprising 3% and 1%, respectively, of the lung volume on the initial chest CT. (C) Follow-up chest CT performed 3 years and 6 months later demonstrates progression of reticular opacity and honeycombing (black arrow) compared to the initial chest CT. (D) Quantitative analysis of the follow-up CT using AI indicates reticular opacity (orange color) and honeycombing (red color) comprising 4% and 3%, respectively. AI analysis reveals approximately a threefold increase in honeycombing. The author provided the CT image after obtaining informed consent from the patient.
Low-dose chest computed tomography (CT) illustrating an interstitial lung abnormality. (A) The initial low-dose chest CT reveals mild reticular opacity and predominantly ground-glass opacity (black arrow) in the posterobasal segments of both lower lobes. (B) Quantitative analysis using artificial intelligence demonstrates reticular opacity (orange color) comprising 1% of the lung volume on the low-dose chest CT. The author provided the CT image after obtaining informed consent from the patient.
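The quantitative analyses illustrated in Figures 8 and 9 ultimately reduce to voxel counting over segmentation masks: the extent of a CT pattern is the volume of voxels labeled with that pattern divided by the total lung volume. The NumPy sketch below shows this calculation with hypothetical arrays and label values; it is not the algorithm used in the cited studies.
```python
# Minimal sketch: percentage extent of CT patterns (e.g., reticulation,
# honeycombing) from a labeled segmentation volume. Label values and arrays
# are hypothetical.
import numpy as np

def pattern_extent_percent(label_volume: np.ndarray, lung_mask: np.ndarray,
                           pattern_labels: dict) -> dict:
    """Return each pattern's extent as a percentage of total lung volume."""
    lung_voxels = int(lung_mask.sum())
    return {
        name: 100.0 * int(((label_volume == value) & lung_mask).sum()) / lung_voxels
        for name, value in pattern_labels.items()
    }

if __name__ == "__main__":
    # Toy 3D volume: 0 = normal lung, 1 = reticulation, 2 = honeycombing.
    rng = np.random.default_rng(0)
    labels = rng.choice([0, 0, 0, 0, 1, 2], size=(64, 64, 64))
    lung = np.ones_like(labels, dtype=bool)
    print(pattern_extent_percent(labels, lung, {"reticulation": 1, "honeycombing": 2}))
```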
In COPD, studies quantifying emphysema on chest CT and correlating these findings with lung function tests have been ongoing; however, clinical application remains limited [1,3,6,7,13,38]. González et al. [39] developed an AI model trained on 7,983 COPDGene participants and validated it using an additional 1,000 COPDGene participants and 1,672 ECLIPSE participants, achieving a COPD diagnostic accuracy of 0.856. The model correctly staged 51.1% of COPDGene and 29.4% of ECLIPSE participants, with 74.9% and 74.6% staged within one stage of the correct classification, respectively. Additionally, the model predicted acute respiratory exacerbations with accuracies of 0.64 (COPDGene) and 0.55 (ECLIPSE). Yanagawa et al. [3] developed an automated AI algorithm, based on Fleischner Society criteria, for classifying emphysema severity on COPDGene CT examinations and validated it in COPDGene participants. This AI provided more objective assessments compared to visual classification, especially for trace emphysema. Similarly, Humphries et al. [40] validated an AI-based emphysema diagnostic system in 7,143 COPDGene participants, confirming its superior objectivity and improved detection of trace emphysema compared to visual methods. However, emphysema AI remains sensitive to variations in scan parameters, reconstruction algorithms, and radiation doses, necessitating further research prior to clinical implementation [2,3,7,35,38].
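A common quantitative definition underlying emphysema scoring on inspiratory CT is the low-attenuation area percentage, i.e., the fraction of lung voxels whose attenuation falls below a threshold (commonly -950 HU). The sketch below illustrates that computation with hypothetical arrays; the AI systems cited above apply learned classifications on top of, or instead of, this simple index.
```python
# Minimal sketch: low-attenuation area percentage (%LAA) as a simple emphysema
# index. Arrays are hypothetical; -950 HU is a commonly used inspiratory-CT
# threshold and is parameterized here.
import numpy as np

def low_attenuation_percent(hu_volume: np.ndarray, lung_mask: np.ndarray,
                            threshold_hu: float = -950.0) -> float:
    """Percentage of lung voxels below the attenuation threshold."""
    lung_hu = hu_volume[lung_mask]
    return 100.0 * float((lung_hu < threshold_hu).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hu = rng.normal(loc=-850.0, scale=60.0, size=(64, 64, 64))  # toy HU values
    lung = np.ones(hu.shape, dtype=bool)
    print(f"%LAA-950: {low_attenuation_percent(hu, lung):.1f}%")
```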
Conclusion
In thoracic imaging, AI is experiencing a surge in clinical utility, driven by advancements in both traditional machine learning and deep learning techniques. Over recent years, AI has been widely adopted for CXR in numerous hospitals. Within national lung cancer screening programs, AI-based systems have shown exceptional performance in detecting lung cancer, characterizing pulmonary nodules, and predicting lung cancer risk through low-dose chest CT. Moreover, applying AI in chest CT enables automation of time-consuming and repetitive tasks, such as identifying pulmonary nodules and exploring imaging-based biomarkers. This automation enhances interpretive efficiency and transforms radiologists' reading patterns, ultimately improving clinical outcomes. Additionally, AI’s ability to expedite diagnosis in emergency conditions, such as pneumothorax or acute pulmonary embolism, is expected to significantly impact patient care.
Research evaluating the clinical utility of generative AI remains in the early stages; however, generative AI techniques capable of producing text and images hold promise for generating interpretive reports from CXR images. These techniques could provide critical medical insights, especially in environments requiring rapid CXR interpretation, such as emergency departments, or in settings staffed by healthcare professionals with limited experience in interpreting CXRs. Nevertheless, fully integrating newly developed AI systems into clinical practice presents numerous challenges. Validation and clinical implementation of AI in healthcare are demanding yet essential, primarily due to the current absence of systematic management for imaging and clinical data required for AI training and validation. Addressing this issue is crucial for advancing AI development and its clinical application in medicine. Moreover, even as AI systems become sufficiently robust for widespread clinical use, additional challenges—such as ensuring sustained AI quality assurance, securing financial resources for AI adoption, and providing education to enable healthcare professionals to effectively utilize AI—must also be resolved.
Notes
Conflict of Interest
No potential conflict of interest relevant to this article was reported.
Funding
None.
