Neural computability II

Abstract

The authors present a general framework within which the computability of solutions to problems by various types of automata networks (including neural networks and cellular automata) can be compared and their complexity analyzed. Problems solvable by the global dynamics of neural networks, cellular automata, and automata networks in general are studied as self-maps of the Cantor set. The theory derived from this approach generalizes classical computability theory; it allows a precise definition of equivalent models and thus a meaningful comparison of the computational power of these models. The authors show that neural networks are at least as powerful as cellular automata and that the converse holds for finite networks. Evidence indicates that the full classes are probably identical. The proofs of these results rely on the existence of a universal neural network, which is of interest in its own right.
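To make the Cantor-set viewpoint concrete, here is an illustrative sketch (not drawn from the paper itself): the configuration space of a one-dimensional binary cellular automaton, equipped with the product topology, is homeomorphic to the Cantor set, and the automaton's global dynamics is a continuous self-map of that space. Assuming a local rule $f$ of radius 1,

$$
X = \{0,1\}^{\mathbb{Z}}, \qquad F : X \to X, \qquad F(x)_i = f(x_{i-1}, x_i, x_{i+1}),
$$

so, roughly speaking, comparing the computational power of network models in such a framework amounts to asking which continuous self-maps of this kind each class of networks can realize.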
