Convolutional Neural Network Interpretability Analysis for Image Classification

Authors

  • Haopeng Fang, School of Mathematics and Physics, Lanzhou Jiaotong University, Gansu 730000, China

Keywords

image classification, convolutional neural networks, interpretability analysis, activation map

Abstract

Understanding the basis on which convolutional neural networks make decisions in image classification tasks requires interpretability analysis, which in turn helps optimize the model and reduce the cost of hyperparameter tuning. To this end, this article takes a fruit image classification task as its starting point and uses several class activation maps to analyze, from multiple angles, why the model produces the results it does. The article first fine-tunes a ResNet model; after achieving good classification performance, it conducts a basic analysis of semantic features, occlusion analysis, CAM-based interpretability analysis, and LIME interpretability analysis to lend the convolutional neural network a degree of interpretability. Experimental results show that the basis for the network's decisions is consistent with human-understandable semantics.
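To illustrate the kind of CAM-based analysis the abstract refers to, the following is a minimal sketch, not the authors' code: it loads a pretrained torchvision ResNet (standing in for the fine-tuned model), hooks the last convolutional block, and forms a class activation map by weighting the feature maps with the classifier weights of the predicted class. The image path "fruit.jpg" is a placeholder.

```python
# Minimal CAM sketch for a ResNet classifier (assumption: a pretrained
# torchvision ResNet stands in for the paper's fine-tuned fruit model).
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Capture the last convolutional feature map with a forward hook.
features = {}
model.layer4.register_forward_hook(
    lambda module, inp, out: features.update(maps=out.detach()))

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("fruit.jpg").convert("RGB")   # placeholder image path
x = preprocess(img).unsqueeze(0)

with torch.no_grad():
    logits = model(x)
cls = logits.argmax(dim=1).item()

# CAM: weight the feature maps by the fc weights of the predicted class,
# sum over channels, then rescale to the input resolution.
fmap = features["maps"][0]                     # (C, H, W)
weights = model.fc.weight[cls]                 # (C,)
cam = F.relu(torch.einsum("c,chw->hw", weights, fmap))
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
cam = F.interpolate(cam[None, None], size=x.shape[-2:],
                    mode="bilinear", align_corners=False)[0, 0]
print("predicted class:", cls, "CAM shape:", tuple(cam.shape))
```

Overlaying the resulting map on the input image highlights the regions the network relied on; occlusion analysis and LIME, also named in the abstract, probe the same question by perturbing image regions rather than reading out internal activations.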

Published

2024-04-30

How to Cite

Haopeng Fang. (2024). Convolutional Neural Network Interpretability Analysis for Image Classification. Frontiers in Interdisciplinary Applied Science, 1(1), 30–37. Retrieved from https://fias.com.pk/index.php/journal/article/view/5

Section

Articles