Hierarchical feature learning from sensorial data by spherical clustering

Abstract

Surveillance sensors are a major source of unstructured Big Data. Discovering and recognizing spatiotemporal objects (e.g., events) in such data is of paramount importance to the security and safety of facilities and individuals. What kind of computational model is necessary for discovering spatiotemporal objects at the level of abstraction at which they occur? Hierarchical invariant feature learning is the crux of the problems of discovery and recognition in Big Data. We present a multilayered convergent neural architecture for storing repeating, spatially and temporally coincident patterns in data at multiple levels of abstraction. A node, consisting of neurons, is the canonical computational unit. Neurons are connected within and across nodes via bottom-up, top-down and lateral connections. The bottom-up weights are learned to encode a hierarchy of overcomplete and sparse feature dictionaries from space- and time-varying sensorial data by recursive, layer-by-layer spherical clustering. The model scales to full-sized high-dimensional input data and to an arbitrary number of layers, and can therefore capture features at any level of abstraction. The model is fully learnable, with only two manually tunable parameters. The model is general-purpose (i.e., it makes no modality-specific assumptions about the spatiotemporal data), unsupervised and online. We use the learning algorithm, without any alteration, to learn meaningful feature hierarchies from images and videos, which can then be used for recognition. Besides being online, the operations in each layer of the model can be implemented in parallelized hardware, making it very efficient for real-world Big Data applications. © 2013 IEEE.
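The abstract's core primitive is spherical clustering: cluster centroids (the learned "bottom-up" feature dictionary) live on the unit hypersphere, and inputs are assigned by cosine similarity. A minimal sketch of one layer of spherical k-means is below; this is an illustration of the general technique, not the paper's exact algorithm, and all names (`spherical_kmeans`, `iters`, `seed`) are assumptions for this sketch.

```python
import numpy as np

def spherical_kmeans(X, k, iters=50, seed=0):
    """One layer of spherical k-means: centroids constrained to the unit sphere.

    X : (n_samples, n_features) array of nonzero input patterns.
    Returns (centroids, labels) where each centroid has unit L2 norm.
    """
    rng = np.random.default_rng(seed)
    # Project inputs onto the unit sphere so similarity is a dot product.
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    # Initialize centroids from k distinct samples (fancy indexing copies).
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        sims = X @ centroids.T          # cosine similarity to each centroid
        labels = sims.argmax(axis=1)    # winner-take-all assignment
        for j in range(k):
            members = X[labels == j]
            if len(members):
                m = members.sum(axis=0)
                centroids[j] = m / np.linalg.norm(m)  # re-project mean to sphere
    return centroids, labels
```

In a layered architecture of the kind the abstract describes, the similarity responses of one layer (sparsified, e.g., by keeping only the top activations) would become the input patterns for the next layer's clustering.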

Publication Title

Proceedings - 2013 IEEE International Conference on Big Data, Big Data 2013