Recurrent Neural Networks: Associative Memory and Optimization
- *Corresponding Author: K.-L. Du
Department of Electrical and Computer Engineering
Concordia University, Montreal, Canada, H3G 1M8
E-mail: [email protected]
Received Date: November 03, 2011; Accepted Date: November 22, 2011; Published Date: November 24, 2011
Citation: Wang H, Wu Y, Zhang B, Du KL (2011) Recurrent Neural Networks: Associative Memory and Optimization. J Inform Tech Soft Engg 1:104. doi:10.4172/2165-7866.1000104
Copyright: © 2011 Wang H, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Due to their feedback connections, recurrent neural networks (RNNs) are dynamic models. Compared to feedforward neural networks (FNNs), RNNs can provide a more compact structure for approximating dynamic systems. For some RNN models, such as the Hopfield model and the Boltzmann machine, the fixed-point property of the underlying dynamic system can be exploited for optimization and associative memory. The Hopfield model is the most important RNN model, and the Boltzmann machine, along with several other stochastic dynamic models, has been proposed as its generalization. These models are especially useful for combinatorial optimization problems (COPs), which are notoriously NP-complete. In this paper, we provide a state-of-the-art introduction to these RNN models, their learning algorithms, and their analog implementations. Associative memory, COPs, simulated annealing (SA), chaotic neural networks, and multilevel Hopfield models are also important topics treated in this paper.
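To make the fixed-point idea concrete, the following is a minimal sketch (our own illustration, not code from the paper) of a Hopfield network used as an associative memory: a bipolar pattern is stored via the Hebbian rule, and recall proceeds by iterating the network dynamics until a fixed point is reached. The function names and the synchronous update scheme are assumptions for illustration.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weight matrix for bipolar (+1/-1) patterns (rows)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)           # no self-connections
    return W

def recall(W, state, n_iters=20):
    """Iterate synchronous updates until a fixed point (or limit)."""
    for _ in range(n_iters):
        new_state = np.sign(W @ state)
        new_state[new_state == 0] = 1  # break ties deterministically
        if np.array_equal(new_state, state):
            break                      # reached a fixed point of the dynamics
        state = new_state
    return state

# Store one pattern, then recall it from a corrupted cue.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])
cue = pattern.copy()
cue[0] = -cue[0]                       # flip one bit to corrupt the cue
print(np.array_equal(recall(W, cue), pattern))  # True
```

The stored pattern is a stable fixed point of the update rule, so the corrupted cue converges back to it, which is exactly the associative-memory behavior the Hopfield model provides.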