Extended Associative Neural Network library

Open Source Library on Associative Neural Networks

Developed in partnership with the Glushkov Cybernetics Center of the Ukrainian Academy of Sciences.

Overview

The library provides implementations of various Associative Neural Network models. Most of the work has been done as part of my (still ongoing) PhD project and was supported by the INTAS Young Scientist Fellowship YSF 03-55-795.

Associative Neural Networks can be used for:

  • Associative memory (Content-Addressable Memory);
  • Classification problems;
  • Optimization problems.

Key features:

  • Distributed storage of information;
  • "Graceful degradation", i.e., destruction of individual neurons or of small groups of neurons reduces performance, but does not have the devastating effect;
  • Parallel mode of operation (hardware friendly);
  • Different learning rules, including a fast non-iterative one that allows data to be added and erased incrementally.

The neural network model for associative memory was first proposed by J. Hopfield in 1982. It is a dynamical system of simple threshold units (neurons). The Hopfield model is fully connected, that is, each neuron receives the outputs of all other neurons (including itself).
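As a minimal illustration (this is not the library's API; all names below are ours), one synchronous update step of such a network with bipolar states could look like this in C++:

    #include <vector>

    // One synchronous update step of a Hopfield network with bipolar (+1/-1)
    // states: every neuron computes its weighted input ("local field") from
    // the previous state and applies a sign threshold.
    std::vector<int> hopfieldStep(const std::vector<std::vector<double> >& W,
                                  const std::vector<int>& state)
    {
        const size_t n = state.size();
        std::vector<int> next(n);
        for (size_t i = 0; i < n; ++i) {
            double field = 0.0;
            for (size_t j = 0; j < n; ++j)      // fully connected: sum over all neurons
                field += W[i][j] * state[j];
            next[i] = (field >= 0.0) ? 1 : -1;  // bipolar threshold
        }
        return next;                            // iterate until next == state (a fixed point)
    }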

Using a sparsely connected Hopfield network has a number of advantages:

- A biologically more plausible network structure;

- Better suitability for hardware implementation;

- Faster computer simulations that require less memory;

- Insight into the influence of the network connectivity pattern (architecture) on network behaviour.

The architecture of a sparse Hopfield network can be chosen in a number of ways:

- Random architecture – a certain number of connections is placed between randomly chosen pairs of neurons.

- Adaptive architecture – the locations of connections are chosen so as to (sub-)maximize the associative performance of the network for a particular dataset [Dekhtyarenko2004, Dekhtyarenko2005].

- Cellular architecture – a local connectivity pattern: only neighboring neurons are connected. This architecture favors hardware implementation and is inspired by the Cellular Neural Network (CNN) paradigm [Chua1988]. It provides the smallest total connection length, which is crucial in some applications.

- Small-World architecture – takes the best of both worlds: the associative performance of a network with random architecture and the mostly local connectivity pattern of the cellular network.

Note: The comparison of associative performance for Sparse Adaptive / Sparse Random / Small-World / Cellular networks is made subject to an equal number of weights.
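As an informal illustration of the cellular and Small-World architectures above (this sketch is independent of the library classes; all names are ours), a local connectivity mask can be rewired into a Small-World one in the spirit of [WattsStrogatz1998]:

    #include <cstdlib>
    #include <vector>

    // Build an n x n boolean connectivity mask: each neuron is first connected
    // to its r nearest neighbours on a ring (cellular architecture); then each
    // local connection is rewired to a random target with probability p,
    // producing a Small-World architecture (p = 0 keeps it cellular, p = 1
    // makes it essentially random).
    std::vector<std::vector<bool> > smallWorldMask(int n, int r, double p)
    {
        std::vector<std::vector<bool> > mask(n, std::vector<bool>(n, false));
        for (int i = 0; i < n; ++i)
            for (int d = 1; d <= r; ++d) {
                int j = (i + d) % n;                     // local (cellular) connection
                if (std::rand() / (double)RAND_MAX < p)  // rewire it into a shortcut
                    j = std::rand() % n;
                if (j == i) continue;                    // skip diagonal (self) connections
                mask[i][j] = mask[j][i] = true;          // keep the mask symmetric
            }
        return mask;
    }

Even a small rewiring probability introduces a few long-range shortcuts while keeping the connectivity, and hence the total connection length, mostly local.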

There are a number of learning rules (LR) that can be used to train the sparse Hopfield network. Here is a brief summary of implemented LRs.

Note: In the Projective learning rule the weight matrix of the fully connected network is obtained as in [Personnaz1986], and then all connections that do not satisfy the architectural constraints are simply cut.
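For reference, a minimal sketch of how such a projection weight matrix can be built incrementally (this illustrates the rule itself and is not the library's implementation; names are ours):

    #include <vector>

    // Incremental construction of the projection matrix W onto the span of the
    // stored bipolar patterns (the Projective / Pseudo-Inverse rule in the
    // spirit of [Personnaz1986]): for each pattern x, the residual y = x - W*x
    // is used to update W += y*y^T / (y^T*y). For a sparse network the weights
    // that violate the architectural constraints are afterwards simply zeroed.
    void projectiveTrain(std::vector<std::vector<double> >& W,
                         const std::vector<std::vector<int> >& patterns)
    {
        const size_t n = W.size();
        for (size_t p = 0; p < patterns.size(); ++p) {
            const std::vector<int>& x = patterns[p];
            std::vector<double> y(n);
            double norm = 0.0;
            for (size_t i = 0; i < n; ++i) {            // residual y = x - W*x
                double wx = 0.0;
                for (size_t j = 0; j < n; ++j)
                    wx += W[i][j] * x[j];
                y[i] = x[i] - wx;
                norm += y[i] * y[i];
            }
            if (norm < 1e-12) continue;                  // pattern already in the span
            for (size_t i = 0; i < n; ++i)               // W += y*y^T / (y^T*y)
                for (size_t j = 0; j < n; ++j)
                    W[i][j] += y[i] * y[j] / norm;
        }
    }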

Library Description

This release is dated August 15, 2005.

The library allows creating and testing various Associative Neural Networks (most of them based on the Hopfield model [Hopfield1982]). All networks operate in discrete time with bipolar (+1/-1) states and a synchronous convergence mode.

Testing functions allow tracking the evolution of network properties during training or as network parameters change.

One of the most important network characteristics is the Attraction Radius, which quantifies the network's performance as an associative memory. It is possible to find either the absolute value of the network attraction radius, measured as a Hamming distance (the [test.cpp::getRAttraction] function), or its normalized value ([test.cpp::getNormalizedRAttraction]).
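As an informal sketch of what such a test measures (the library's testing code may differ in details; names here are ours), the attraction radius of a single stored pattern can be estimated by distorting it with a growing number of flipped components and checking whether the network still converges back to it:

    #include <algorithm>
    #include <cstdlib>
    #include <vector>

    // "recall" stands for running the network to convergence from the given
    // start state (see the synchronous update sketch above).
    typedef std::vector<int> (*RecallFn)(const std::vector<int>& start);

    // Returns the largest Hamming distance d such that the pattern is still
    // recalled after flipping d of its components. A real test would average
    // over many random trials and over all stored patterns.
    int estimateAttractionRadius(const std::vector<int>& pattern, RecallFn recall)
    {
        const int n = (int)pattern.size();
        std::vector<int> idx(n);
        for (int i = 0; i < n; ++i) idx[i] = i;
        for (int d = 1; d <= n; ++d) {
            std::random_shuffle(idx.begin(), idx.end());
            std::vector<int> probe = pattern;
            for (int k = 0; k < d; ++k)
                probe[idx[k]] *= -1;                 // flip d distinct components
            if (recall(probe) != pattern)            // recall failed at distance d
                return d - 1;                        // radius = last successful distance
        }
        return n;
    }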

Implemented Network Models

class FullNet – abstract base class for fully connected models.

class CellularNet - abstract base class for sparsely connected models, provides efficient weights storage and manipulation.

class FullProjectiveNet – fully connected network with the Projective learning rule [Personnaz1986]. Implements the desaturation technique [Gorodnichy1997] and various retraining methods that allow the network to recover after the failure of some neurons (this implementation was used in [Reznik2003a]).

class PseudoInverseNet – sparse network implementing the Pseudo-Inverse learning rule (PI LR) [Brucoli1995], which guarantees storage of memory patterns as stable states for (almost) any network architecture.

class AdaptiveCellularNet – sparse network with the PI LR and an architecture that changes depending on the dataset. That is, for the specified number of connections, the network itself finds the architecture that maximizes its associative performance on a given dataset [Dekhtyarenko2004, Dekhtyarenko2005].

class SmallWorldNet – sparse network with the Small-World architecture [WattsStrogatz1998] and the PI LR. Apart from the originally proposed random rewiring, SmallWorldNet implements a new systematic rewiring procedure, which further improves the associative properties of the network using the same number of shortcut connections [Dekhtyarenko2005a].

class HebbianCellularNet - sparse network with Perceptron (Hebbian) learning rule [Diederich1987].

class DeltaCellularNet - sparse network with Widrow-Hoff Delta learning rule [Widrow1960].

class ModularNet – growing modular associative network with large memory capacity [Reznik2003b]. As its basic building blocks (modules) it can use any of the network types mentioned above.

class BAMCellularNet – sparse bi-directional associative memory (under construction).

Compilation

Run compileBorland.bat or compileMS.bat.

The library is developed on Windows using Borland C++ Builder 5.0 with the “Language compliance” compiler option set to “Borland”; therefore a couple of tricks are required to compile it with Microsoft CL v.12 (from MS Visual Studio 6.0):

- #define for if (0) {} else for // limits the scope of variables declared in “for” statements

- the /FORCE:MULTIPLE linker option
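The #define is the well-known workaround for the pre-standard “for”-scope rules of MS Visual C++ 6.0, where a variable declared in the for-initializer stays visible after the loop:

    // Without the macro, MSVC 6 keeps 'i' alive after the first loop and the
    // second declaration fails with "error C2374: 'i' : redefinition".
    // With  #define for if (0) {} else for  each loop is wrapped in its own
    // else-branch, so 'i' goes out of scope at the end of the loop, as
    // required by standard C++.
    void scopeExample()
    {
        for (int i = 0; i < 10; ++i) { /* first loop */ }
        for (int i = 0; i < 10; ++i) { /* second loop: would not compile on MSVC 6 without the macro */ }
    }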

Input

Run nets.exe with the name of an ini-file as its input. The provided file “_SmallWorld.ini” does the following:

1. A SmallWorldNet network is created with the following parameters:

- dimension = 256

- neuron connection radius = 10 (connectivity degree of about 8%)

- no diagonal weights

2. The network is trained with data from the file “_256x256.dat” (bipolar patterns with random, equiprobable, and independent components) for patterns #1-14 (testNum = 1, numStored = 1:15:1+). After each additionally stored vector the network is tested, and the test results (including the attraction radius – the “attrR” field) are stored in the “_report.txt” file.
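A typical invocation from the directory containing the executable and the ini-file is simply:

    nets.exe _SmallWorld.ini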

Output

In addition to the value of the attraction radius, the output file “_report.txt” contains a lot of other useful information, such as network architecture properties (number of connections, total connection length, ...), weight matrix properties (norm, trace, degree of asymmetry, ...), estimates of associative performance (kappa measure, minimum aligned local field), actual testing results (average number of iterations, error rate, ...), etc.

The results in the output file are easy to analyze with any spreadsheet application (MS Excel, ...).

References