Abstraction hierarchy in deep learning neural networks

Abstract

We develop a methodology to assess knowledge representation in deep neural networks trained to recognize classes of objects. We measure the abstraction level by studying correlations between the neuron activation levels of different layers, conditioned on image class. The approach is developed and tested using the CIFAR-10 dataset and the MatConvNet toolbox. The results show that different kinds of layers, convolutional or pooling, have different effects on the representation. The observations also point to a tendency toward incremental increases in the abstraction measure, sometimes interrupted by more significant jumps, which may indicate a qualitative transition between abstraction levels. We describe and interpret the current results and outline directions for future work.
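The abstract does not spell out how the class-conditioned correlations are computed; the sketch below is one plausible reading, assuming each layer is summarized by the mean activation of its units for each image class and adjacent layers are compared via the correlation of those class signatures. The layer names, array shapes, and synthetic activations are illustrative stand-ins (the paper's experiments used a CNN trained on CIFAR-10 in MatConvNet), not the authors' code.

```python
# Minimal sketch (not the authors' implementation): class-conditioned
# correlation between the activations of successive layers.
import numpy as np

rng = np.random.default_rng(0)
num_images, num_classes = 1000, 10
labels = rng.integers(0, num_classes, size=num_images)

# Hypothetical per-layer activations, flattened to (num_images, num_units).
# In practice these would be extracted from a trained network.
layer_acts = {
    "conv1": rng.normal(size=(num_images, 64)),
    "pool1": rng.normal(size=(num_images, 64)),
    "conv2": rng.normal(size=(num_images, 128)),
}

def class_profile(acts, labels, num_classes):
    """Mean activation of each unit for each class: (num_classes, num_units)."""
    return np.stack([acts[labels == c].mean(axis=0) for c in range(num_classes)])

def layer_correlation(acts_a, acts_b, labels, num_classes):
    """Correlate the class signatures of two layers.

    Each layer is reduced to a length-num_classes signature (per-class mean
    activation, averaged over units), so layers of different widths can be
    compared directly.
    """
    sig_a = class_profile(acts_a, labels, num_classes).mean(axis=1)
    sig_b = class_profile(acts_b, labels, num_classes).mean(axis=1)
    return np.corrcoef(sig_a, sig_b)[0, 1]

names = list(layer_acts)
for a, b in zip(names, names[1:]):
    r = layer_correlation(layer_acts[a], layer_acts[b], labels, num_classes)
    print(f"{a} -> {b}: class-conditioned correlation = {r:.3f}")
```

With real activations, low correlation between a layer and its successor would suggest a larger change in the class-level representation, consistent with the jumps in the abstraction measure described above.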

Publication Title

Proceedings of the International Joint Conference on Neural Networks
