Artificial intelligence in thoracic imaging—a new paradigm for diagnosing pulmonary diseases: a narrative review

Article information

J Korean Med Assoc. 2025;68(5):288-300
Publication date (electronic) : 2025 May 10
doi: https://doi.org/10.5124/jkma.25.0054
1Department of Radiology, Jeonbuk National University Medical School, Jeonju, Korea
2Research Institute of Clinical Medicine of Jeonbuk National University-Biomedical Research Institute of Jeonbuk National University Hospital, Jeonju, Korea
Corresponding author: Gong Yong Jin E-mail: gyjin@jbnu.ac.kr
Received 2025 April 9; Accepted 2025 April 22.

Abstract

Purpose

This review explores the current applications and future prospects of artificial intelligence (AI) in thoracic imaging, with a particular focus on chest radiography (chest X-ray, CXR) and computed tomography (CT).

Current Concepts

Recently developed CXR AI algorithms have improved the efficiency, accuracy, and consistency of radiologists' routine clinical workflows by assisting in the detection of a wide range of thoracic diseases on CXR. These AI systems demonstrate diagnostic performance comparable to that of radiology residents who have limited interpretive experience. Furthermore, generative CXR AI technologies are capable of not only automatically detecting abnormalities such as pulmonary nodules, pneumonia, pneumothorax, and tuberculosis, but also generating radiology reports. These advancements represent a paradigm-shifting innovation that may significantly alter the current landscape of CXR interpretation in thoracic radiology. Although performance varies depending on the specific algorithm and dataset, AI applied to low-dose chest CT has demonstrated diagnostic accuracy ranging from 0.81 to 0.98 for nodule detection and malignancy assessment, with sensitivity ranging from 0.88 to 0.99 and specificity from 0.82 to 0.93. Incorporating AI as a second reader in CT interpretation can reduce reading time by approximately 20%, while also improving sensitivity for pulmonary nodule detection by 5% to 20% and malignant nodule diagnosis by 3% to 15%.

Discussion and Conclusion

Both CXR AI and chest CT AI streamline image interpretation by assisting with simple and repetitive tasks. Simultaneously, they provide novel diagnostic insights that are expected to influence and potentially reshape the interpretative patterns of radiologists in the near future.

Introduction

Artificial intelligence (AI) refers to computer systems endowed with intelligence similar to human cognition. Since the advent of deep learning in 2006, AI has undergone a decade-long boom, propelled by advancements in computational power, the development of diverse algorithms, and the utilization of big data [1-3]. In recent years, the application and development of AI technologies in medicine have gained significant international momentum [4]. In South Korea, AI is already integrated into clinical practice, particularly in aiding diagnostic imaging [5-7]. However, because AI currently remains limited to an assistive role for physicians, it is imperative to fully comprehend its limitations, with final clinical judgments remaining the responsibility of medical professionals [2,3,6-9].

The evolution of AI is fundamentally transforming the diagnostic paradigm for various pulmonary diseases within thoracic imaging [3,6,8]. Historically, the interpretation of chest radiography (chest X-ray, CXR) and computed tomography (CT) relied heavily on radiologists' extensive experience and intuition. With the integration of AI technologies, a new paradigm has emerged wherein radiologists employ AI-driven image analysis for CXR and chest CT, thereby enhancing the accuracy and efficiency of pulmonary disease diagnosis [2,6,7]. This transformation is primarily attributable to AI's ability to augment radiologists' seasoned expertise and intuition by recognizing patterns imperceptible to the human eye. In thoracic imaging, AI is already operational in clinical settings, with applications including the detection of pulmonary nodules on CXR and chest CT [1,5,10,11], assisting in distinguishing benign from malignant pulmonary nodules [3,9,12], and automated quantification of emphysema or pulmonary fibrosis severity within the lungs through image processing [13-15].

This review aims to elucidate the current applications and future prospects of AI in thoracic imaging, highlighting the emerging paradigm for diagnosing pulmonary diseases using CXR and chest CT.

As a literature-based study not involving human subjects, neither institutional review board approval nor informed consent was required.

Current and future applications of AI in chest radiography image interpretation

CXR has long served as a cornerstone imaging modality for lung cancer screening and for evaluating respiratory symptoms to ascertain the presence of pulmonary diseases. The interpretation of CXR findings and their utilization for diagnosing respiratory conditions represents a fundamental skill for physicians [12,16]. However, the inherent limitation of CXR lies in its representation of complex thoracic structures as a single planar image, which poses ongoing challenges for accurate interpretation. Notably, diagnostic errors in early lung cancer detection using CXR are reported to range from 20% to 50%, and missed diagnoses can result in delayed treatment, profoundly affecting patient prognosis [12,16,17]. Factors contributing to missed lung cancer diagnoses on CXR include reader error, tumor characteristics, and technical aspects related to CXR acquisition. For instance, when lesion size is less than 1 cm, the likelihood of missing lung cancer on CXR is approximately 30%, a rate that can only be mitigated through improvements in reader proficiency and advancements in CXR imaging technology. Although developments in digital imaging have significantly enhanced CXR acquisition techniques, reader error persists, with reported rates between 25% and 40% (Figure 1). This error rate is particularly pronounced among residents or physicians with limited experience in CXR interpretation compared to thoracic imaging specialists [12,17]. Strategies have been proposed to reduce reader error, including restricting CXR interpretation to thoracic imaging specialists or requiring double-reading involving specialists and less experienced physicians. However, these approaches are currently impractical in South Korea.

Figure 1.

Chest posteroanterior (PA) radiograph illustrating missed lung cancer. (A) The initial chest X-ray (CXR) was interpreted as normal by a radiologist. (B) Two years later, another radiologist interpreted the patient's CXR as revealing a mass in the right upper lung field (white arrow), noting an increase in size compared to the previous examination. (C) The patient underwent a chest computed tomography scan, confirming lung cancer in the right upper lobe (black arrow). The author provided the chest PA image after obtaining informed consent from the patient.

The growing necessity for AI in CXR interpretation (Figures 2, 3) stems from its potential to alleviate the workload of radiologists while simultaneously reducing diagnostic errors [5,6,11,18]. Among AI methodologies, convolutional neural networks (CNNs) are widely utilized in CXR analysis, with their efficacy substantiated through extensive research and clinical applications. What, then, is the current diagnostic capability of CXR AI? Although the performance of AI systems varies, Wu et al. [5] compared an AI trained on anteroposterior (AP) CXRs from emergency department patients with 3 radiologists experienced in CXR interpretation. The AI achieved a sensitivity of 0.716 (95% confidence interval [CI], 0.704–0.729), comparable to the radiologists’ sensitivity of 0.720 (95% CI, 0.709–0.732), without a statistically significant difference (P=0.66). However, the positive predictive value (PPV) was significantly higher for the AI at 0.730 (95% CI, 0.718–0.742), compared to 0.682 (95% CI, 0.670–0.694) for the radiologists (P<0.001). Specificity was also superior for the AI at 0.980 (95% CI, 0.980–0.981) versus 0.973 (95% CI, 0.971–0.974) for the radiologists (P<0.001). These findings suggest that AI can match radiologists’ sensitivity while surpassing their PPV and specificity when interpreting AP CXRs from emergency department patients.
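The sensitivity, specificity, PPV, and negative predictive value compared throughout these studies all derive from the same 2×2 confusion matrix. A minimal sketch in Python (the function name and the counts are illustrative, not taken from any cited study):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard screening metrics from a 2x2 confusion matrix.

    tp/fp/tn/fn: true-positive, false-positive, true-negative,
    false-negative counts against the reference standard.
    """
    return {
        "sensitivity": tp / (tp + fn),  # true-positive rate
        "specificity": tn / (tn + fp),  # true-negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Illustrative counts only (not from Wu et al.)
m = diagnostic_metrics(tp=72, fp=2, tn=98, fn=28)
print({k: round(v, 3) for k, v in m.items()})
# → {'sensitivity': 0.72, 'specificity': 0.98, 'ppv': 0.973, 'npv': 0.778}
```

Note how PPV and NPV, unlike sensitivity and specificity, shift with disease prevalence in the tested population, which is one reason an AI model's reported PPV on emergency department patients does not transfer directly to a low-prevalence screening cohort.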

Figure 2.

Chest posteroanterior (PA) radiograph of lung cancer in the left lower lung field. (A) A large mass (black arrow) is present in the retrocardiac area of the left lower lung field on the chest X-ray (CXR). (B) The mass in the retrocardiac area of the left lower lung field is identified by the artificial intelligence-based computer-aided detection software, indicating an abnormality probability of 72%. (C) Chest computed tomography scan demonstrates an 8.5 cm solid nodule (black arrow) in the left lower lobe. The patient underwent a percutaneous needle biopsy, confirming adenocarcinoma. The author provided the chest PA image after obtaining informed consent from the patient.

Figure 3.

Chest posteroanterior (PA) radiograph of lung cancer accompanied by interstitial lung disease in the right lower lung field. (A) The chest X-ray shows a solitary pulmonary nodule (black arrow) in the right lower lung field, along with increased opacity in the right upper lung field and both lower lung fields (black arrow). (B) The solitary pulmonary nodule (black arrow) in the right lower lung field is detected by artificial intelligence (AI)-based computer-aided detection (CAD) software, with an abnormality probability of 94%. Increased opacity (black arrow) in the right upper lung field is also detected by the AI-based CAD software (abnormality probability 60%). However, the AI-based CAD software fails to detect the increased opacity in the basal lower lung field. (C) A chest computed tomography scan reveals a 2.9 cm solid nodule (black arrow) in the right lower lobe. Percutaneous needle biopsy confirmed adenocarcinoma. Additionally, fibrosis due to old pulmonary tuberculosis was present in the right upper lobe, and lung fibrosis associated with usual interstitial pneumonia was observed in both lower lobes. The author provided the chest PA image after obtaining informed consent from the patient.

Ahn et al. [18] evaluated 6 radiologists (2 thoracic imaging specialists, 2 thoracic imaging fellows, and 2 residents) against AI in interpreting 497 CXRs (247 from the MIMIC-CXR dataset and 250 from Massachusetts General Hospital [MGH]) for 4 major findings: pneumonia, nodules, pneumothorax, and pleural effusion, present in 351 CXRs. The AI demonstrated higher sensitivity across all findings: nodules (AI: 0.816 [95% CI, 0.732–0.882] vs. radiologists: 0.567 [95% CI, 0.524–0.611]), pneumonia (AI: 0.887 [95% CI, 0.834–0.928] vs. radiologists: 0.673 [95% CI, 0.632–0.714]), pleural effusion (AI: 0.872 [95% CI, 0.808–0.921] vs. radiologists: 0.889 [95% CI, 0.862–0.917]), and pneumothorax (AI: 0.988 [95% CI, 0.932–1.000] vs. radiologists: 0.792 [95% CI, 0.756–0.827]). Moreover, AI-assisted interpretation significantly improved radiologists’ sensitivity, especially for detecting pneumothorax and pulmonary nodules. Additionally, AI assistance reduced reading time by 10% (40.8 seconds vs. 36.9 seconds; difference, 3.9 seconds; 95% CI, 2.9–5.2 seconds; P<0.001). In another study, Nam et al. [11] compared AI-assisted and non-AI-assisted groups in CXR interpretation for health-screening participants, finding that AI significantly improved detection rates of clinically significant pulmonary nodules (0.59% [31/5,238] in the AI group vs. 0.25% [13/5,238] in the non-AI group, P=0.008). However, the positive call rate for nodule detection showed no significant difference (2.3% [122/5,238] in the AI group vs. 1.9% [100/5,238] in the non-AI group, P=0.14), and false-referral rates were comparable (45.9% [56/122] in the AI group vs. 56.0% [56/100] in the non-AI group, P=0.14). The AI group exhibited higher sensitivity (56.4% vs. 23.2%, P<0.001), PPV (35.6% vs. 18.8%, P<0.02), and negative predictive value (99.0% vs. 98.2%, P<0.03), with similar specificity (97.6% vs. 97.7%, P=0.94). 
Notably, the non-AI group showed variability in positive call rates among radiologists, whereas the AI group demonstrated consistent positive call rates (P=0.87). These findings indicate that AI can reduce inter-radiologist variability in nodule detection and interpretation. These studies suggest that current AI diagnostic capabilities are comparable to those of less experienced radiologists, help mitigate variability among radiologists, and reduce errors in lung cancer diagnosis using CXR. However, it is critical to recognize that CXR AI is not a universal solution; its applicability depends on the patient population utilized for training (Table 1). For example, AI trained on emergency department patients aids in emergency CXR interpretation, whereas AI trained on health-screening participants enhances diagnostic accuracy in that context [2-5,7,10,11,19].

Table 1. Commercially approved artificial intelligence solutions that have disclosed their chest X-ray training data

Looking ahead to future developments of AI in CXR interpretation, the advent of generative AI technologies marks a transition from CNN-based systems to generative models [20,21]. Generative AI learns the distribution patterns of existing data to produce novel outputs, exemplified by systems such as OpenAI’s ChatGPT. In this context, “novel outputs” refer to results that resemble training data but remain distinct from it. For example, generative CXR AI could generate diagnostic reports and diagnoses directly from CXR data annotated with textual descriptions (Figure 4) [22,23]. How does this differ from current CNN-based CXR AI? CNN-based systems provide lesion localization or presence determination based solely on trained CXR image data, serving exclusively as diagnostic aids [3,7,10,11,19]. In contrast, generative AI can be trained using CXRs accompanied by textual reports, enabling it to generate entirely new diagnostic reports for unseen CXRs through iterative learning processes applied to large datasets. Although research on generative AI for CXR interpretation remains in the early stages, Huang et al. [20] compared a transformer-based encoder-decoder AI model with radiologists in evaluating the clinical significance of CXRs from 500 emergency department patients. No significant differences were observed across report types (radiologists: mean [standard error], 0.98 [0.01]; AI: 0.96 [0.01]; teleradiology: 0.94 [0.02]; P=0.12) or between normal and abnormal findings (abnormal, 0.97 [0.01]; normal, 0.97 [0.01]; P=0.64).

Figure 4.

Chest anteroposterior (AP) radiograph and chest computed tomography (CT) scan showing lung cancer in the left lower lobe. (A) Chest AP radiograph demonstrates a mass (black arrow) in the left hilar region. (B) Using color annotation, generative artificial intelligence (AI) highlights the mass in the left hilar region on the chest AP radiograph. (C) Simultaneously, the generative AI produces a textual report describing the findings on the chest AP radiograph. (D) Chest CT confirms lung cancer (black arrow) in the left lower lobe. The author provided the chest AP and CT images after obtaining informed consent from the patient.

When compared to radiologists’ reports as the standard reference, the generative AI achieved a sensitivity of 84.8% and specificity of 98.5% in identifying abnormal findings and diagnoses. However, this particular model was trained on CXRs from 900,000 emergency department patients presenting primarily with chest pain or dyspnea, limiting its generalizability to other clinical contexts. Nonetheless, generative CXR AI capable of automatically detecting key abnormalities such as pulmonary nodules, pneumonia, pneumothorax, and tuberculosis, as well as generating corresponding diagnostic reports, is poised to significantly transform the paradigm of CXR interpretation in thoracic imaging.

Current and future applications of AI in chest CT image interpretation

At present, AI systems for CXR and chest CT are primarily used as assistive diagnostic tools (AI-assisted diagnosis). However, chest CT AI demands significantly higher precision and reliability than CXR AI due to the requirement for meticulous diagnosis across a broad spectrum of pulmonary diseases. Consequently, developing AI systems for chest CT is considerably more challenging. While CXR is among the most frequently performed imaging modalities in hospitals, providing abundant data for AI training, securing similarly large datasets required for chest CT AI development is substantially more difficult. Moreover, the training process necessitates more sophisticated lesion labeling. Since chest CT imaging involves three-dimensional volumetric data, its analysis requires substantially greater computational resources, more complex model architectures, and increased demands on graphics processing unit memory and storage capacity. A critical obstacle in chest CT AI development is the variability of imaging protocols across institutions, including differences in imaging equipment, reconstruction algorithms, and contrast agents. This variability often leads to “domain bias,” wherein a chest CT AI model performs optimally only at the specific institution where it was initially developed [3,6].

Despite these challenges, AI systems designed to detect and quantify pulmonary nodules using low-dose chest CT for lung cancer screening are widely utilized in clinical practice and play a pivotal role in the early diagnosis of lung cancer (Figure 5) [1-3,6-9]. In lung cancer screening, AI employs various algorithms to maintain optimal image quality while reducing radiation exposure, thus enabling risk stratification of detected lung cancers and facilitating personalized screening protocols. Computer-aided detection systems integrated with AI enhance the sensitivity of pulmonary nodule detection in low-dose chest CT and reduce image interpretation time. Furthermore, AI assists in differentiating benign from malignant pulmonary nodules [1,9,24,25]. However, the diagnostic accuracy (ranging from 0.81 to 0.98), sensitivity (0.88–0.99), and specificity (0.82–0.93) for nodule detection and classification vary depending on the specific AI system and dataset employed [1-3,7,24,26]. Geppert et al. [25] reviewed studies conducted from 2012 to 2023 regarding the utility of AI in lung cancer screening with chest CT, reporting that AI systems developed by 6 companies were applied to 19,770 patients worldwide (Table 2). A consistent finding across these studies is that AI-assisted chest CT interpretation reduces reading time by approximately 20% compared to non-AI-assisted interpretation [26] and improves sensitivity for detecting and diagnosing malignant pulmonary nodules (nodule detection/classification ≥6 mm improved by 5% to 20%; malignant nodule detection/classification improved by 3% to 15%). However, one drawback of AI-assisted chest CT interpretation is the tendency of radiologists to classify pulmonary nodules into higher-risk categories (Figure 6) [24,26-30].

Figure 5.

Low-dose chest computed tomography (CT) scan illustrating a mixed ground-glass nodule (GGN) in the right middle lobe. (A) Low-dose chest CT performed as part of the National Lung Cancer Screening program reveals a mixed GGN (black arrow) with a spiculated margin in the right middle lobe. (B) The artificial intelligence software detects the mixed GGN in the right middle lobe, automatically measuring its size (total diameter, 11.4 mm; central solid portion, 2.9 mm), categorizing it as Lung CT Screening Reporting and Data System category 3. The author provided the CT image after obtaining informed consent from the patient.

Table 2. Artificial intelligence for detection and diagnosis of pulmonary nodules on chest CT

Figure 6.

Low-dose chest computed tomography (CT) scan illustrating a solid pulmonary nodule in the left lower lobe. (A) Low-dose chest CT performed as part of the National Lung Cancer Screening program reveals a solid nodule (white arrow) with a smooth margin in the left lower lobe. (B) Artificial intelligence (AI) software automatically detects and measures the solid nodule in the left lower lobe (6.2 mm), categorizing it as Lung CT Screening Reporting and Data System (Lung-RADS) category 3. However, the AI overestimates the nodule's size. (C) Upon manual correction, the nodule size is accurately measured at 4.1 mm, downgrading the Lung-RADS category from 3 to 2. The author provided the CT image after obtaining informed consent from the patient.
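The Lung-RADS categories assigned in Figures 5 and 6 follow published size thresholds, so the effect of a measurement error like the one above can be traced mechanically. The sketch below encodes a simplified subset of the baseline Lung-RADS v1.1 size criteria for solid and part-solid nodules only; growth rules, other nodule types, and management recommendations are omitted, and the function name is ours:

```python
def lung_rads_baseline(total_mm, solid_mm=None):
    """Simplified baseline Lung-RADS category from nodule size (mm).

    solid_mm is None for a purely solid nodule (total_mm is its diameter);
    otherwise the nodule is part-solid with the given solid-component size.
    Covers only baseline size criteria for solid/part-solid nodules.
    """
    if solid_mm is None:          # solid nodule
        if total_mm < 6:
            return 2              # benign appearance/behavior
        if total_mm < 8:
            return 3              # probably benign
        if total_mm < 15:
            return "4A"           # suspicious
        return "4B"
    # part-solid nodule: total size first, then solid component
    if total_mm < 6:
        return 2
    if solid_mm < 6:
        return 3
    if solid_mm < 8:
        return "4A"
    return "4B"

# Figure 5: mixed GGN, 11.4 mm total with a 2.9 mm solid part
print(lung_rads_baseline(11.4, solid_mm=2.9))           # → 3
# Figure 6: solid nodule measured 6.2 mm by AI, 4.1 mm after correction
print(lung_rads_baseline(6.2), lung_rads_baseline(4.1)) # → 3 2
```

The worked values reproduce the figures: the AI's 2.1 mm overestimate in Figure 6 crosses the 6 mm boundary and changes the category, which is exactly the kind of upgrade the cited studies flag as a drawback of AI-assisted reading.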

Although still in the research phase, AI models capable of predicting lung cancer risk have been developed. Ardila et al. [31] developed an AI model for lung cancer risk prediction, achieving an area under the curve (AUC) of 0.944 in 6,716 participants from the National Lung Screening Trial (NLST) and 1,139 participants in an independent clinical validation cohort. Notably, in cases lacking prior chest CT images for comparison, this model outperformed radiologists, reducing false positives by 11% and false negatives by 5%. Similarly, Adams et al. [32] developed a model that combined an AI-based malignant nodule risk score with Lung CT Screening Reporting and Data System (Lung-RADS) classifications from 6 radiologists, based on 3,197 early lung cancer screening CT examinations. This combined model reclassified 41 cases (0.2%) from Lung-RADS categories 1 or 2 to category 3 and downgraded 5,750 cases (30%) from category 3 or higher to category 2, suggesting that such models could reduce unnecessary follow-up examinations in lung cancer screening. Mikhael et al. [33] developed the Sybil model using low-dose chest CT data from the NLST (6,282 cases), MGH (8,821 cases), and Chang Gung Memorial Hospital (12,280 cases, including non-smokers with varied smoking histories). Sybil achieved AUCs ranging from 0.86 to 0.94 for 1-year lung cancer risk prediction and 0.75 to 0.81 for 6-year lung cancer risk prediction, demonstrating its potential value for early detection and personalized patient management.

Beyond lung cancer, AI systems are being developed and studied for detecting pulmonary embolism in CT pulmonary angiography, classifying interstitial lung disease findings on chest CT, and diagnosing emphysema or small airway diseases using inspiratory and expiratory chest CT scans. AI designed to detect pulmonary embolism in CT pulmonary angiography can identify abnormalities early in patients suspected of acute respiratory distress due to pulmonary embolism (Figure 7), serving as an assistive system by prioritizing cases for radiologist review [34-36]. Rothenberg et al. [34] reported that the average wait time for interpretation of pulmonary embolism cases decreased from 21.5 minutes without AI assistance to 11.3 minutes with AI assistance (P<0.001). Although AI improved diagnostic accuracy (98.6% vs. 97.6%) and reduced the missed pulmonary embolism rate (6.1% vs. 12.3%), these improvements did not achieve statistical significance (P=0.15 and P=0.11, respectively). Quantitative analysis of lung damage in interstitial lung disease or chronic obstructive pulmonary disease (COPD) is being explored to correlate clinical symptoms with chest CT findings and develop prognostic biomarkers [1-3,7,8,13-15,35]. However, manual quantification of lung damage on chest CT is time-consuming, limiting its clinical utility [6,7]. Advances in AI have enabled automated segmentation and quantification of lung damage with high accuracy and efficiency. Walsh et al. [37] trained an AI algorithm using 1,157 anonymized high-resolution CT scans from 2 institutions, based on the 2011 American Thoracic Society/European Respiratory Society/Japanese Respiratory Society/Latin American Thoracic Association (ATS/ERS/JRS/ALAT) idiopathic pulmonary fibrosis (IPF) diagnostic guidelines and Fleischner Society criteria. The algorithm’s performance was compared with 4 thoracic imaging specialists using 75 high-resolution CT scans from patients with IPF.
According to the 2011 ATS/ERS/JRS/ALAT guidelines, radiologists achieved an accuracy of 70.7%, whereas the AI demonstrated 73.3% accuracy. Inter-radiologist agreement was excellent (0.67 [interquartile range, 0.58–0.72]), as was agreement between the AI and radiologists (0.69). The authors concluded that AI-based high-resolution CT evaluation of IPF is cost-effective, reproducible, and comparable in accuracy to assessments by thoracic imaging specialists, potentially benefiting institutions with limited thoracic imaging expertise (Figure 8). Chae et al. [14] analyzed low-dose chest CT scans from 3,118 participants in the Korean National Lung Cancer Screening Program, identifying interstitial lung abnormalities (ILAs) in 120 (4%) cases with visual extents ≥5%. Using AI for quantitative ILA analysis, a threshold of 1.8% ILA extent corresponded precisely with the visual ≥5% criterion, achieving 100% sensitivity and 99% specificity. The study concluded that AI outperformed radiologists in diagnosing ILAs with greater sensitivity and specificity (Figure 9).

Figure 7.

Enhanced chest computed tomography (CT) illustrating acute pulmonary thromboembolism. (A) Enhanced chest CT shows a low-density thrombus (white arrow) in the right interlobar pulmonary artery. (B) Artificial intelligence software automatically detects the thrombus (white arrow) in the pulmonary artery, providing quantified thrombus burden (right pulmonary artery, 513 mm³; pulmonary arteries in the right lower lobe, 978 mm³; pulmonary arteries in the left lower lobe, 163 mm³) in both pulmonary arteries. The author provided the CT image after obtaining informed consent from the patient.

Figure 8.

Chest computed tomography (CT) illustrating usual interstitial pneumonia. (A) The initial chest CT reveals mild honeycombing and predominantly reticular opacity (black arrow) in the posterobasal segments of both lower lobes. (B) Quantitative analysis using artificial intelligence (AI) shows reticular opacity (orange color) and honeycombing (red color) comprising 3% and 1%, respectively, of the lung volume on the initial chest CT. (C) Follow-up chest CT performed 3 years and 6 months later demonstrates progression of reticular opacity and honeycombing (black arrow) compared to the initial chest CT. (D) Quantitative analysis of the follow-up CT using AI indicates reticular opacity (orange color) and honeycombing (red color) comprising 4% and 3%, respectively. AI analysis reveals approximately a threefold increase in honeycombing. The author provided the CT image after obtaining informed consent from the patient.

Figure 9.

Low-dose chest computed tomography (CT) illustrating an interstitial lung abnormality. (A) The initial low-dose chest CT reveals mild reticular opacity and predominantly ground-glass opacity (black arrow) in the posterobasal segments of both lower lobes. (B) Quantitative analysis using artificial intelligence demonstrates reticular opacity (orange color) comprising 1% of the lung volume on the low-dose chest CT. The author provided the CT image after obtaining informed consent from the patient.

In COPD, studies quantifying emphysema on chest CT and correlating these findings with lung function tests have been ongoing; however, clinical application remains limited [1,3,6,7,13,38]. González et al. [39] developed an AI model trained on 7,983 COPDGene participants, validated using an additional 1,000 COPDGene participants and 1,672 ECLIPSE participants, achieving a COPD diagnostic accuracy of 0.856. The model correctly staged 51.1% of COPDGene and 29.4% of ECLIPSE participants, with 74.9% and 74.6% staged within one stage of error, respectively. Additionally, the model predicted acute respiratory exacerbations with accuracies of 0.64 (COPDGene) and 0.55 (ECLIPSE). Yanagawa et al. [3] developed an automated AI algorithm for classifying emphysema severity using COPDGene CT examinations and Fleischner Society criteria, validated in COPDGene participants. This AI provided more objective assessments compared to visual classification, especially for trace emphysema. Similarly, Humphries et al. [40] validated an AI-based emphysema diagnostic system in 7,143 COPDGene participants, confirming its superior objectivity and improved detection of trace emphysema compared to visual methods. However, emphysema AI remains sensitive to variations in scan parameters, reconstruction algorithms, and radiation doses, necessitating further research prior to clinical implementation [2,3,7,35,38].
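Emphysema quantification of the kind described above is conventionally reported as the percentage of lung voxels below −950 HU on inspiratory CT (%LAA-950). A minimal sketch with a synthetic volume, assuming a lung segmentation mask is already available (real pipelines begin with lung segmentation, and the function name is illustrative):

```python
import numpy as np

def emphysema_laa_percent(hu_volume, lung_mask, threshold=-950):
    """Percent of lung voxels below `threshold` HU (%LAA-950).

    hu_volume: 3D array of CT attenuation values in Hounsfield units.
    lung_mask: boolean lung segmentation of the same shape.
    """
    lung_hu = hu_volume[lung_mask]
    return 100.0 * np.count_nonzero(lung_hu < threshold) / lung_hu.size

# Synthetic example: a "lung" of normal tissue (-850 HU) containing a
# low-attenuation pocket at -980 HU occupying 50 of 1,000 voxels.
vol = np.full((10, 10, 10), -850.0)
mask = np.ones_like(vol, dtype=bool)
vol[:2, :5, :5] = -980.0
print(emphysema_laa_percent(vol, mask))  # → 5.0
```

The sensitivity of this index to scan parameters is easy to see from the code: a reconstruction kernel or dose change that shifts voxel values by even a few HU moves voxels across the fixed −950 HU threshold, which is why the text notes that emphysema AI requires harmonized protocols before clinical use.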

Conclusion

In thoracic imaging, AI is experiencing a surge in clinical utility, driven by advancements in both traditional machine learning and deep learning techniques. Over recent years, AI has been widely adopted for CXR in numerous hospitals. Within national lung cancer screening programs, AI-based systems have shown exceptional performance in detecting lung cancer, characterizing pulmonary nodules, and predicting lung cancer risk through low-dose chest CT. Moreover, applying AI in chest CT enables automation of time-consuming and repetitive tasks, such as identifying pulmonary nodules and exploring imaging-based biomarkers. This automation enhances interpretive efficiency and transforms radiologists' reading patterns, ultimately improving clinical outcomes. Additionally, AI’s ability to expedite diagnosis in emergency conditions, such as pneumothorax or acute pulmonary embolism, is expected to significantly impact patient care.

Research evaluating the clinical utility of generative AI remains in the early stages; however, generative AI techniques capable of producing text and images hold promise for generating interpretive reports from CXR images. These techniques could provide critical medical insights, especially in environments requiring rapid CXR interpretation, such as emergency departments, or in settings staffed by healthcare professionals with limited experience in interpreting CXRs. Nevertheless, fully integrating newly developed AI systems into clinical practice presents numerous challenges. Validation and clinical implementation of AI in healthcare are demanding yet essential, primarily due to the current absence of systematic management for imaging and clinical data required for AI training and validation. Addressing this issue is crucial for advancing AI development and its clinical application in medicine. Moreover, even as AI systems become sufficiently robust for widespread clinical use, additional challenges—such as ensuring sustained AI quality assurance, securing financial resources for AI adoption, and providing education to enable healthcare professionals to effectively utilize AI—must also be resolved.

Notes

Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Funding

None.

References

1. Cellina M, Cacioppa LM, Cè M, et al. Artificial intelligence in lung cancer screening: the future is now. Cancers (Basel) 2023;15:4344. 10.3390/cancers15174344. 37686619.
2. Chang JY, Makary MS. Evolving and novel applications of artificial intelligence in thoracic imaging. Diagnostics (Basel) 2024;14:1456. 10.3390/diagnostics14131456. 39001346.
3. Yanagawa M, Ito R, Nozaki T, et al. Trend in artificial intelligence-based assistive technology for thoracic imaging. Radiol Med 2023;128:1236–1249. 10.1007/s11547-023-01691-w. 37639191.
4. Mollura DJ, Culp MP, Pollack E, et al. Artificial intelligence in low- and middle-income countries: innovating global health radiology. Radiology 2020;297:513–520. 10.1148/radiol.2020201434. 33021895.
5. Wu JT, Wong KC, Gur Y, et al. Comparison of chest radiograph interpretations by artificial intelligence algorithm vs radiology residents. JAMA Netw Open 2020;3:e2022779. 10.1001/jamanetworkopen.2020.22779. 33034642.
6. Kim Y, Park JY, Hwang EJ, Lee SM, Park CM. Applications of artificial intelligence in the thorax: a narrative review focusing on thoracic radiology. J Thorac Dis 2021;13:6943–6962. 10.21037/jtd-21-1342. 35070379.
7. Jin GY. Preface for special issue on clinical experience of artificial intelligence for thoracic disease in daily practice. J Korean Soc Radiol 2024;85:691–692. 10.3348/jksr.2024.0105. 39130783.
8. Gleeson F, Revel MP, Biederer J, et al. Implementation of artificial intelligence in thoracic imaging-a what, how, and why guide from the European Society of Thoracic Imaging (ESTI). Eur Radiol 2023;33:5077–5086. 10.1007/s00330-023-09409-2. 36729173.
9. Chae KJ, Jin GY, Ko SB, et al. Deep learning for the classification of small (≤2 cm) pulmonary nodules on CT imaging: a preliminary study. Acad Radiol 2020;27:e55–e63. 10.1016/j.acra.2019.05.018. 31780395.
10. Lee DE, Chae KJ, Jin GY, Park SY, Jeong JS, Ahn SY. External validation of deep learning-based automated detection algorithm for chest radiograph: practical issues in outpatient clinic. Acta Radiol 2023;64:2898–2907. 10.1177/02841851231202323. 37750179.
11. Nam JG, Hwang EJ, Kim J, et al. AI improves nodule detection on chest radiographs in a health screening population: a randomized controlled trial. Radiology 2023;307e221894. 10.1148/radiol.221894. 36749213.
12. Choi G, Nam BD, Hwang JH, Kim KU, Kim HJ, Kim DW. Missed lung cancers on chest radiograph: an illustrative review of common blind spots on chest radiograph with emphasis on various radiologic presentations of lung cancers. Taehan Yongsang Uihakhoe Chi 2020;81:351–364. 10.3348/jksr.2020.81.2.351. 36237379.
13. Byon JH, Jin GY, Han YM, Choi EJ, Chae KJ, Park EH. Quantitative CT analysis based on smoking habits and chronic obstructive pulmonary disease in patients with normal chest CT. J Korean Soc Radiol 2023;84:900–910. 10.3348/jksr.2022.0130. 37559818.
14. Chae KJ, Lim S, Seo JB, et al. Interstitial lung abnormalities at CT in the Korean National Lung Cancer Screening Program: prevalence and deep learning-based texture analysis. Radiology 2023;307e222828. 10.1148/radiol.222828. 37097142.
15. Jin GY. Interstitial lung abnormality in asian population. Tuberc Respir Dis (Seoul) 2024;87:134–144. 10.4046/trd.2023.0117. 38111097.
16. Gibbs JM, Chandrasekhar CA, Ferguson EC, Oldham SA. Lines and stripes: where did they go?: from conventional radiography to CT. Radiographics 2007;27:33–48. 10.1148/rg.271065073. 17234997.
17. Del Ciello A, Franchi P, Contegiacomo A, Cicchetti G, Bonomo L, Larici AR. Missed lung cancer: when, where, and why? Diagn Interv Radiol 2017;23:118–126. 10.5152/dir.2016.16187. 28206951.
18. Ahn JS, Ebrahimian S, McDermott S, et al. Association of artificial intelligence-aided chest radiograph interpretation with reader performance and efficiency. JAMA Netw Open 2022;5e2229289. 10.1001/jamanetworkopen.2022.29289. 36044215.
19. Hwang EJ. Clinical application of artificial intelligence-based detection assistance devices for chest X-ray interpretation: current status and practical considerations. J Korean Soc Radiol 2024;85:693–704. 10.3348/jksr.2024.0052. 39130790.
20. Huang J, Neill L, Wittbrodt M, et al. Generative artificial intelligence for chest radiograph interpretation in the emergency department. JAMA Netw Open 2023;6e2336100. 10.1001/jamanetworkopen.2023.36100. 37796505.
21. Ziegelmayer S, Marka AW, Lenhart N, et al. Evaluation of GPT-4's chest X-ray impression generation: a reader study on performance and perception. J Med Internet Res 2023;25e50865. 10.2196/50865. 38133918.
22. Lee KH, Lee RW, Kwon YE. Validation of a deep learning chest X-ray interpretation model: integrating large-scale AI and large language models for comparative analysis with ChatGPT. Diagnostics (Basel) 2023;14:90. 10.3390/diagnostics14010090. 38201398.
23. Lee RW, Lee KH, Yun JS, Kim MS, Choi HS. Comparative analysis of M4CXR, an LLM-based chest X-ray report generation model, and ChatGPT in radiological interpretation. J Clin Med 2024;13:7057. 10.3390/jcm13237057. 39685515.
24. Chamberlin J, Kocher MR, Waltz J, et al. Automated detection of lung nodules and coronary artery calcium using artificial intelligence on low-dose CT scans for lung cancer screening: accuracy and prognostic value. BMC Med 2021;19:55. 10.1186/s12916-021-01928-3. 33658025.
25. Geppert J, Asgharzadeh A, Brown A, et al. Software using artificial intelligence for nodule and cancer detection in CT lung cancer screening: systematic review of test accuracy studies. Thorax 2024;79:1040–1049. 10.1136/thorax-2024-221662. 39322406.
26. Hsu HH, Ko KH, Chou YC, et al. Performance and reading time of lung nodule identification on multidetector CT with or without an artificial intelligence-powered computer-aided detection system. Clin Radiol 2021;76:626.e23–e32. 10.1016/j.crad.2021.04.006. 34023068.
27. Hall H, Ruparel M, Quaife SL, et al. The role of computer-assisted radiographer reporting in lung cancer screening programmes. Eur Radiol 2022;32:6891–6899. 10.1007/s00330-022-08824-1. 35567604.
28. Hwang EJ, Goo JM, Kim HY, Yi J, Kim Y. Optimum diameter threshold for lung nodules at baseline lung cancer screening with low-dose chest CT: exploration of results from the Korean Lung Cancer Screening Project. Eur Radiol 2021;31:7202–7212. 10.1007/s00330-021-07827-8. 33738597.
29. Lancaster HL, Heuvelmans MA, Oudkerk M. Low-dose computed tomography lung cancer screening: clinical evidence and implementation research. J Intern Med 2022;292:68–80. 10.1111/joim.13480. 35253286.
30. Lancaster HL, Zheng S, Aleshina OO, et al. Outstanding negative prediction performance of solid pulmonary nodule volume AI for ultra-LDCT baseline lung cancer screening risk stratification. Lung Cancer 2022;165:133–140. 10.1016/j.lungcan.2022.01.002. 35123156.
31. Ardila D, Kiraly AP, Bharadwaj S, et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med 2019;25:954–961. 10.1038/s41591-019-0447-x. 31110349.
32. Adams SJ, Mondal P, Penz E, Tyan CC, Lim H, Babyn P. Development and cost analysis of a lung nodule management strategy combining artificial intelligence and lung-RADS for baseline lung cancer screening. J Am Coll Radiol 2021;18:741–751. 10.1016/j.jacr.2020.11.014. 33482120.
33. Mikhael PG, Wohlwend J, Yala A, et al. Sybil: a validated deep learning model to predict future lung cancer risk from a single low-dose chest computed tomography. J Clin Oncol 2023;41:2191–2200. 10.1200/jco.22.01345. 36634294.
34. Rothenberg SA, Savage CH, Abou Elkassem A, et al. Prospective evaluation of AI triage of pulmonary emboli on CT pulmonary angiograms. Radiology 2023;309e230702. 10.1148/radiol.230702. 37787676.
35. Hwang EJ, Goo JM, Park CM. AI Applications for Thoracic Imaging: Considerations for Best Practice. Radiology 2025;314e240650. 10.1148/radiol.240650. 39998373.
36. Cheikh AB, Gorincour G, Nivet H, et al. How artificial intelligence improves radiological interpretation in suspected pulmonary embolism. Eur Radiol 2022;32:5831–5842. 10.1007/s00330-022-08645-2. 35316363.
37. Walsh SL, Calandriello L, Silva M, Sverzellati N. Deep learning for classifying fibrotic lung disease on high-resolution computed tomography: a case-cohort study. Lancet Respir Med 2018;6:837–845. 10.1016/s2213-2600(18)30286-8. 30232049.
38. Paik SH, Jin GY. Using artificial intelligence software for diagnosing emphysema and interstitial lung disease. J Korean Soc Radiol 2024;85:714–726. 10.3348/jksr.2024.0050. 39130780.
39. González G, Ash SY, Vegas-Sánchez-Ferrero G, et al. Disease staging and prognosis in smokers using deep learning in chest computed tomography. Am J Respir Crit Care Med 2018;197:193–203. 10.1164/rccm.201705-0860oc. 28892454.
40. Humphries SM, Notary AM, Centeno JP, et al. Deep learning enables automatic classification of emphysema pattern at CT. Radiology 2020;294:434–444. 10.1148/radiol.2019191022. 31793851.


Figure 1.

Chest posteroanterior (PA) radiograph illustrating missed lung cancer. (A) The initial chest X-ray (CXR) was interpreted as normal by a radiologist. (B) Two years later, another radiologist interpreted the patient's CXR as revealing a mass in the right upper lung field (white arrow), noting an increase in size compared to the previous examination. (C) The patient underwent a chest computed tomography scan, confirming lung cancer in the right upper lobe (black arrow). The author provided the chest PA image after obtaining informed consent from the patient.

Figure 2.

Chest posteroanterior (PA) radiograph of lung cancer in the left lower lung field. (A) A large mass (black arrow) is present in the retrocardiac area of the left lower lung field on the chest X-ray (CXR). (B) The mass in the retrocardiac area of the left lower lung field is identified by the artificial intelligence-based computer-aided detection software, indicating an abnormality probability of 72%. (C) Chest computed tomography scan demonstrates an 8.5 cm solid mass (black arrow) in the left lower lobe. The patient underwent a percutaneous needle biopsy, confirming adenocarcinoma. The author provided the chest PA image after obtaining informed consent from the patient.

Figure 3.

Chest posteroanterior (PA) radiograph of lung cancer accompanied by interstitial lung disease in the right lower lung field. (A) The chest X-ray shows a solitary pulmonary nodule (black arrow) in the right lower lung field, along with increased opacity in the right upper lung field and both lower lung fields (black arrow). (B) The solitary pulmonary nodule (black arrow) in the right lower lung field is detected by artificial intelligence (AI)-based computer-aided detection (CAD) software, with an abnormality probability of 94%. Increased opacity (black arrow) in the right upper lung field is also detected by the AI-based CAD software (abnormality probability 60%). However, the AI-based CAD software fails to detect the increased opacity in the basal lower lung field. (C) A chest computed tomography scan reveals a 2.9 cm solid nodule (black arrow) in the right lower lobe. Percutaneous needle biopsy confirmed adenocarcinoma. Additionally, fibrosis due to old pulmonary tuberculosis was present in the right upper lobe, and lung fibrosis associated with usual interstitial pneumonia was observed in both lower lobes. The author provided the chest PA image after obtaining informed consent from the patient.

Figure 4.

Chest anteroposterior (AP) radiograph and chest computed tomography (CT) scan showing lung cancer in the left lower lobe. (A) Chest AP radiograph demonstrates a mass (black arrow) in the left hilar region. (B) Using color annotation, generative artificial intelligence (AI) highlights the mass in the left hilar region on the chest AP radiograph. (C) Simultaneously, the generative AI produces a textual report describing the findings on the chest AP radiograph. (D) Chest CT confirms lung cancer (black arrow) in the left lower lobe. The author provided the chest AP and CT images after obtaining informed consent from the patient.

Figure 5.

Low-dose chest computed tomography (CT) scan illustrating a mixed ground-glass nodule (GGN) in the right middle lobe. (A) Low-dose chest CT performed as part of the National Lung Cancer Screening program reveals a mixed GGN (black arrow) with a spiculated margin in the right middle lobe. (B) The artificial intelligence software detects the mixed GGN in the right middle lobe, automatically measuring its size (total diameter, 11.4 mm; central solid portion, 2.9 mm) and categorizing it as Lung CT Screening Reporting and Data System (Lung-RADS) category 3. The author provided the CT image after obtaining informed consent from the patient.

Figure 6.

Low-dose chest computed tomography (CT) scan illustrating a solid pulmonary nodule in the left lower lobe. (A) Low-dose chest CT performed as part of the National Lung Cancer Screening program reveals a solid nodule (white arrow) with a smooth margin in the left lower lobe. (B) Artificial intelligence (AI) software automatically detects and measures the solid nodule in the left lower lobe (6.2 mm), categorizing it as Lung CT Screening Reporting and Data System (Lung-RADS) category 3. However, the AI overestimates the nodule's size. (C) Upon manual correction, the nodule size is accurately measured at 4.1 mm, downgrading the Lung-RADS category from 3 to 2. The author provided the CT image after obtaining informed consent from the patient.
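The recategorization in Figure 6 follows directly from the Lung-RADS size thresholds. As a minimal illustration only, the sketch below encodes the diameter-based rule for a solid nodule at baseline screening (Lung-RADS v1.1), ignoring growth, morphology, and part-solid/ground-glass criteria; it is an assumption-laden simplification, not part of any vendor's software.

```python
def lung_rads_solid_baseline(diameter_mm: float) -> str:
    """Simplified Lung-RADS v1.1 category for a solid nodule at baseline
    low-dose CT screening, using mean diameter alone (growth, morphology,
    and part-solid/ground-glass rules are deliberately omitted)."""
    if diameter_mm < 6:
        return "2"   # benign appearance or behavior
    if diameter_mm < 8:
        return "3"   # probably benign
    if diameter_mm < 15:
        return "4A"  # suspicious
    return "4B"      # very suspicious

# As in Figure 6: the AI's overestimated 6.2 mm measurement maps to
# category 3, while the manually corrected 4.1 mm maps to category 2.
print(lung_rads_solid_baseline(6.2))  # → 3
print(lung_rads_solid_baseline(4.1))  # → 2
```

This makes explicit why a 2 mm measurement error matters clinically: the 6 mm boundary separates routine annual screening (category 2) from 6-month follow-up CT (category 3).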

Figure 7.

Enhanced chest computed tomography (CT) illustrating acute pulmonary thromboembolism. (A) Enhanced chest CT shows a low-density thrombus (white arrow) in the right interlobar pulmonary artery. (B) Artificial intelligence software automatically detects the thrombus (white arrow) in the pulmonary artery, providing quantified thrombus burden (right pulmonary artery, 513 mm³; pulmonary arteries in the right lower lobe, 978 mm³; pulmonary arteries in the left lower lobe, 163 mm³) in both pulmonary arteries. The author provided the CT image after obtaining informed consent from the patient.

Figure 8.

Chest computed tomography (CT) illustrating usual interstitial pneumonia. (A) The initial chest CT reveals mild honeycombing and predominantly reticular opacity (black arrow) in the posterobasal segments of both lower lobes. (B) Quantitative analysis using artificial intelligence (AI) shows reticular opacity (orange color) and honeycombing (red color) comprising 3% and 1%, respectively, of the lung volume on the initial chest CT. (C) Follow-up chest CT performed 3 years and 6 months later demonstrates progression of reticular opacity and honeycombing (black arrow) compared to the initial chest CT. (D) Quantitative analysis of the follow-up CT using AI indicates reticular opacity (orange color) and honeycombing (red color) comprising 4% and 3%, respectively. AI analysis reveals approximately a threefold increase in honeycombing. The author provided the CT image after obtaining informed consent from the patient.

Figure 9.

Low-dose chest computed tomography (CT) illustrating an interstitial lung abnormality. (A) The initial low-dose chest CT reveals mild reticular opacity and predominantly ground-glass opacity (black arrow) in the posterobasal segments of both lower lobes. (B) Quantitative analysis using artificial intelligence demonstrates reticular opacity (orange color) comprising 1% of the lung volume on the low-dose chest CT. The author provided the CT image after obtaining informed consent from the patient.

Table 1.

Commercially approved artificial intelligence solutions for chest X-ray interpretation that have disclosed their training data

Software name | Training data | Abnormal findings detected | Manufacturer
Lunit INSIGHT CXR | 220,000 chest X-ray exams | Pulmonary nodule, pulmonary consolidation, atelectasis, cardiomegaly, pleural effusion, pneumoperitoneum, pneumothorax, pulmonary fibrosis | Lunit
VUNO Med-Chest X-ray | 120,000 chest X-ray exams from 6 institutions | Interstitial opacity, pleural effusion, pneumothorax, pulmonary consolidation, pulmonary nodule | VUNO
LuCAS-CXR | 8,000 chest X-ray exams from a single institution | Abnormal areas with increased or decreased opacity | Monitor Corporation
DCXA | 13,000 chest X-ray exams from 9 institutions | Pulmonary nodule, cardiomegaly, pleural effusion, pneumothorax | DK Medical System
Auto Lung Nodule Detection | 17,210 chest X-ray exams (13,710 normal and 3,500 with nodules) from a single institution | Pulmonary nodule | Samsung Electronics

Table 2.

Artificial intelligence for detection and diagnosis of pulmonary nodules on chest CT

Software (company name) | Design | Standard reference | Outcomes
AVIEW Lungscreen (Coreline Soft) | Retrospective analysis of a prospective cohort study; 10,424 consecutive participants from K-LUCAS (14 institutions) | 1) Lung cancer diagnosed within 1 year (primary outcome); 2) lung cancer diagnosed at any time after LDCT (secondary outcome) | Accuracy of detecting and categorizing actionable nodules (Lung-RADS category ≥3) to detect lung cancer
VUNO Med-LungCT AI (VUNO) | MRMC study (fully paired); nodule-enriched sample from NLST baseline screens; 200 LDCT scans | Lung cancer diagnosed within 1 year in the NLST | Accuracy of detecting and categorizing actionable nodules (Lung-RADS category ≥3) to detect lung cancer
ClearRead CT, market version (Riverain Technologies) | MRMC study (fully paired); consecutive cases with nodules ≤10 mm or no nodules from one hospital in Taiwan; 57 LDCT scans from lung cancer screening | Consensus of 2 thoracic radiologists | Accuracy for detecting any nodules
InferRead CT Lung, market version (Infervision) | Retrospective test accuracy study and MRMC study (fully paired); consecutive sample from one hospital in China; 860 LDCT scans | Consensus of 2 radiologists | Accuracy for detecting any nodules
Veolity, version 1.5 (MeVis) | Retrospective test accuracy study and MRMC study (fully paired); consecutive sample from the UK-based LSUT; 735 LDCT scans | Original radiologist reading or consensus of 2 independent radiologists | Accuracy for detecting lung nodules ≥5 mm; accuracy for detecting malignant nodules; reading time
AI-Rad Companion, prototype VA10A (Siemens Healthineers) | Retrospective test accuracy study (non-comparative); random sample from 1 US center; 117 LDCT scans | Consensus of 2 expert radiologists | Accuracy for detecting nodules >6 mm

CT, computed tomography; K-LUCAS, Korean Lung Cancer Screening; LDCT, low-dose CT; Lung-RADS, Lung CT Screening Reporting and Data System; AI, artificial intelligence; MRMC, multi-reader, multi-case; NLST, National Lung Screening Trial; LSUT, Lung Screen Uptake Trial.