Dynamical approximation by recurrent neural networks

Abstract

We examine the approximation power of recurrent networks for dynamical systems through an unbounded number of iterations. It is shown that the natural family of recurrent neural networks with saturated linear transfer functions and synaptic weight matrices of rank 1 is essentially equivalent to the family of feedforward neural networks with recurrent layers. These networks therefore inherit the universal approximation property for real-valued functions of one variable in a stronger sense, namely through an unbounded number of iterations, with the approximation error guaranteed to be within O(1/n) for n neurons and lateral synapses possibly allowed in the hidden layer. However, their dynamical behavior is not as complex as that of systems defined by Turing machines. It is further proved that every continuous dynamical system can be approximated through all iterations by both finite analog and Boolean networks, when one requires approximation of given arbitrary exact orbits of the (perhaps unknown) map. This result no longer holds when the orbits of the given map are available only as orbits of the approximant net contaminated by random noise (e.g., by digital truncation of analog activations). Neural nets can nonetheless approximate large families of continuous maps, including chaotic maps and maps sensitive to initial conditions. A precise characterization of which maps can be approximated fault-tolerantly by analog and discrete neural networks for unboundedly many iterations remains an open problem.
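To make the architecture in the abstract concrete, the sketch below iterates a recurrent net with a saturated-linear transfer function and a rank-1 synaptic weight matrix as a discrete-time dynamical system. This is only an illustrative reading of the class of networks described above, not the paper's construction; the function and variable names (sat, rank1_rnn_step, u, v, b) are hypothetical.

```python
import numpy as np

def sat(x):
    """Saturated-linear transfer function: identity on [0, 1], clipped outside."""
    return np.clip(x, 0.0, 1.0)

def rank1_rnn_step(x, u, v, b):
    """One iteration of a recurrent net whose weight matrix W = u v^T has rank 1.

    x : current state vector of the n neurons
    u, v : vectors defining the rank-1 synaptic matrix
    b : bias vector
    """
    # Since W has rank 1, W @ x collapses to u * (v . x): the update is driven
    # by a single scalar, which is what ties the dynamics to one-variable maps.
    return sat(u * (v @ x) + b)

# Iterate the network from a random initial state and record its orbit.
rng = np.random.default_rng(0)
n = 8
u, v, b = rng.uniform(-1, 1, n), rng.uniform(-1, 1, n), rng.uniform(0, 1, n)
x = rng.uniform(0, 1, n)
orbit = [x]
for _ in range(20):
    x = rank1_rnn_step(x, u, v, b)
    orbit.append(x)
```

Approximation of a target dynamical system, in the sense of the abstract, would require the orbit of such a net to track the target's orbit over all iterations, not just for a single step.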

Publication Title

Neurocomputing
