A decomposition principle for complexity reduction of artificial neural networks
Publication in refereed journal

Times Cited (Web of Science, as at 25/03/2020): 2
Abstract: A decomposition principle is developed for systematic determination of the dimensionality and the connections of Hopfield-type associative memory networks. Given a set of high-dimensional prototype vectors of given memory objects, we develop decomposition algorithms to extract a set of lower-dimensional key features of the pattern vectors. Every key feature can be used to build an associative memory with the lowest complexity, and more than one key feature can be used simultaneously to build networks with higher recognition accuracy. In the latter case, we further propose a "decomposed neural network" based on a new encoding scheme to reduce the network complexity. In contrast to the original Hopfield network, the decomposed networks not only increase the network's storage capacity, but also reduce the network's connection complexity from quadratic to linear growth with the network dimension. Both theoretical analysis and simulation results demonstrate that the proposed principle is powerful. Copyright © 1996 Elsevier Science Ltd
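The abstract contrasts the decomposed networks with the original Hopfield network, whose connection count grows quadratically with the pattern dimension. The sketch below illustrates only that standard baseline (Hebbian outer-product training and sign-threshold recall); the paper's decomposition algorithms and encoding scheme are not reproduced here, and the dimensions and prototype patterns are arbitrary choices for illustration.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian outer-product rule; patterns are rows of +/-1 values of length n."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # a Hopfield network has no self-connections
    return W / n

def recall(W, x, steps=10):
    """Synchronous sign updates until a fixed point or the step limit."""
    x = x.copy()
    for _ in range(steps):
        nxt = np.sign(W @ x)
        nxt[nxt == 0] = 1  # break ties toward +1
        if np.array_equal(nxt, x):
            break
        x = nxt
    return x

n = 32  # illustrative network dimension
rng = np.random.default_rng(0)
prototypes = rng.choice([-1, 1], size=(3, n))  # 3 random +/-1 prototype vectors
W = train_hopfield(prototypes)

# Every off-diagonal weight is a sum of an odd number of +/-1 terms, hence
# nonzero: the full network realizes all n*(n-1) directed connections.
# This is the quadratic growth that the paper's decomposition reduces to linear.
print("connections:", np.count_nonzero(W))

# Flip two entries of a prototype and run the recall dynamics.
noisy = prototypes[0].copy()
noisy[:2] *= -1
recalled = recall(W, noisy)
print("output is a +/-1 vector of length n:", recalled.shape == (n,))
```

With 3 stored patterns in a 32-unit network, the load is well under the classical ~0.138·n capacity of the Hebbian rule, so corrupted prototypes are typically recovered; the decomposed networks of the paper raise this capacity while using only linearly many connections.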
All Author(s) List: Xu ZB, Kwong CP
Journal name: Neural Networks
Volume Number: 9
Issue Number: 6
Pages: 999 - 1016
Languages: English-United Kingdom
Keywords: associative memories; best approximation projection; decomposition principle; elementary matrix transformation; Hopfield-type networks; interpolation operator
Web of Science Subject Categories: Computer Science; Computer Science, Artificial Intelligence; Neurosciences

Last updated on 2020-03-26 at 02:51