Implementation and Optimization of Saliency Mapping Algorithms in Convolutional Neural Networks (CNN) to Enhance Transparency in Pneumonia Diagnosis
DOI: https://doi.org/10.47701/c9jq7074

Keywords: Convolutional Neural Network, Saliency Mapping, Pneumonia Diagnosis, Chest X-ray Imaging, Explainable Artificial Intelligence

Abstract
This study aims to develop a transparent and reliable artificial intelligence model for pneumonia diagnosis from chest X-ray images by implementing and optimizing Convolutional Neural Networks (CNNs) with saliency mapping. To address overfitting, the research combined several optimization techniques: aggressive data augmentation, class-weight balancing, L2 regularization, dropout, batch normalization, and adaptive learning-rate scheduling. A functional prototype was then deployed as a Streamlit-based application to provide an interactive diagnostic tool. Evaluation showed that the model achieved high training accuracy and competitive testing accuracy, while saliency-map visualization provided meaningful interpretability by highlighting critical lung regions, particularly the mid-to-lower lung fields and the hilar area. This interpretability ensured that the system not only delivered accurate predictions but also supported clinical reasoning, since the highlighted regions align with radiological characteristics of early-stage pneumonia and bronchopneumonia. Integration into a user-friendly application illustrates the potential for practical adoption in healthcare settings, especially in regions with limited access to radiologists. Overall, the study demonstrates that combining CNN-based classification with explainable AI techniques can bridge the gap between advanced machine learning and clinical applicability, offering a strategic pathway to improve pneumonia diagnosis and patient outcomes.
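The optimization stack described above maps naturally onto a standard Keras training setup. The sketch below is illustrative only: the layer sizes, augmentation ranges, and hyperparameters (e.g., the 1e-4 L2 factor, 0.5 dropout rate, and ReduceLROnPlateau settings) are assumptions for demonstration, not the configuration reported in the study.

```python
# Minimal sketch of the training setup the abstract describes.
# All hyperparameters here are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers

IMG_SIZE = (224, 224)  # assumed input resolution

def build_model():
    model = tf.keras.Sequential([
        layers.Input(shape=IMG_SIZE + (1,)),
        # Aggressive augmentation, active only during training.
        layers.RandomRotation(0.1),
        layers.RandomZoom(0.15),
        layers.RandomTranslation(0.1, 0.1),
        # Conv blocks with L2 regularization and batch normalization.
        layers.Conv2D(32, 3, padding="same",
                      kernel_regularizer=regularizers.l2(1e-4)),
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same",
                      kernel_regularizer=regularizers.l2(1e-4)),
        layers.BatchNormalization(),
        layers.Activation("relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),  # normal vs. pneumonia
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Class-weight balancing: weight each class inversely to its frequency,
# so the minority class contributes proportionally more to the loss.
def class_weights(labels):
    counts = np.bincount(labels)
    total = counts.sum()
    return {i: total / (len(counts) * c) for i, c in enumerate(counts)}

# Adaptive learning-rate scheduling: halve the LR when validation loss stalls.
lr_schedule = tf.keras.callbacks.ReduceLROnPlateau(
    monitor="val_loss", factor=0.5, patience=3, min_lr=1e-6)

# model.fit(train_ds, validation_data=val_ds, epochs=50,
#           class_weight=class_weights(train_labels),
#           callbacks=[lr_schedule])
```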
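Saliency mapping of the kind described here is most commonly computed as the gradient of the class score with respect to the input pixels, visualized as a heatmap over the X-ray. A minimal sketch of that standard vanilla-gradient formulation follows; the abstract does not specify which saliency variant or preprocessing the study used, so both are assumptions.

```python
# Vanilla gradient-based saliency: |d(score) / d(pixel)| per input pixel.
import numpy as np
import tensorflow as tf

def saliency_map(model, image):
    """Return a normalized saliency map for one preprocessed image.

    image: float32 array of shape (H, W, C), already scaled the way the
    model expects (an assumption about the preprocessing pipeline).
    """
    x = tf.convert_to_tensor(image[np.newaxis, ...])
    with tf.GradientTape() as tape:
        tape.watch(x)  # x is a constant tensor, so watch it explicitly
        score = model(x, training=False)[0, 0]  # pneumonia probability
    grads = tape.gradient(score, x)[0]
    # Collapse channels with a max, then rescale to [0, 1] for display.
    sal = tf.reduce_max(tf.abs(grads), axis=-1)
    sal = (sal - tf.reduce_min(sal)) / (
        tf.reduce_max(sal) - tf.reduce_min(sal) + 1e-8)
    return sal.numpy()
```

Bright regions in the resulting map indicate pixels whose perturbation most changes the predicted pneumonia score, which is how the highlighted mid-to-lower lung fields and hilar area can be checked against radiological expectations.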
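A Streamlit prototype like the one described can be wired together in a few lines. The snippet below is a hypothetical sketch: the model path pneumonia_cnn.keras, the 224×224 grayscale preprocessing, and the saliency_map helper (reused from the sketch above) are all assumptions, not details taken from the study.

```python
# Hypothetical Streamlit front end in the spirit of the described prototype.
import numpy as np
import streamlit as st
import tensorflow as tf
from PIL import Image

st.title("Pneumonia screening with saliency overlay")

uploaded = st.file_uploader("Upload a chest X-ray", type=["png", "jpg", "jpeg"])
if uploaded is not None:
    # Assumed preprocessing: grayscale, 224x224, scaled to [0, 1].
    img = Image.open(uploaded).convert("L").resize((224, 224))
    x = np.asarray(img, dtype="float32")[..., np.newaxis] / 255.0

    model = tf.keras.models.load_model("pneumonia_cnn.keras")  # assumed path
    prob = float(model.predict(x[np.newaxis, ...])[0, 0])
    st.write(f"Predicted pneumonia probability: {prob:.2f}")

    # saliency_map is the helper defined in the previous sketch.
    st.image(saliency_map(model, x), caption="Saliency map", clamp=True)
```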