Convolutional Neural Network Interpretability Analysis for Image Classification
Keywords:
image classification, convolutional neural networks, interpretability analysis, activation map

Abstract
To understand the basis on which convolutional neural networks make decisions in image classification tasks, and thereby optimize the model and reduce the cost of parameter tuning, it is necessary to conduct interpretability analysis of these networks. To this end, this article takes a fruit image classification task as its starting point, uses a variety of class activation maps, and analyzes from multiple angles the reasons behind the results the model produces. The article first fine-tunes a ResNet model; after achieving good classification performance, it conducts basic analysis of semantic features, occlusion analysis, CAM-based interpretability analysis, and LIME interpretability analysis, lending the convolutional neural network a degree of interpretability. Experimental results show that the basis for the network's decisions is consistent with the semantics understood by humans.
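The CAM-based analysis mentioned above can be illustrated with a minimal sketch of the original class activation mapping computation (Zhou et al., 2016): each feature map of the last convolutional layer is weighted by the fully connected weight linking it to the target class, summed, rectified, and normalized. The function name and toy array shapes below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Sketch of CAM: weight each final-conv feature map by the fc
    weight connecting it to the target class, sum over channels,
    drop negative evidence, and normalize to [0, 1]."""
    # feature_maps: (C, H, W) activations from the last conv layer
    # fc_weights:   (num_classes, C) weights of the final linear layer
    w = fc_weights[class_idx]                    # (C,)
    cam = np.tensordot(w, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                   # keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                    # scale into [0, 1]
    return cam

# Toy example: 4 channels of 7x7 activations, 3 output classes
rng = np.random.default_rng(0)
feats = rng.random((4, 7, 7))
fc_w = rng.standard_normal((3, 4))
cam = class_activation_map(feats, fc_w, class_idx=1)
print(cam.shape)  # (7, 7)
```

In practice the resulting map is upsampled to the input resolution and overlaid on the image to show which regions drove the classification.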