Title:
Diffusion network architectures for implementation of Gibbs samplers with applications to assignment problems.
Authors:
Ting PY (Center for Information Processing Research, University of California, Santa Barbara, CA); Iltis RA
Source:
IEEE transactions on neural networks [IEEE Trans Neural Netw] 1994; Vol. 5 (4), pp. 622-38.
Publication Type:
Journal Article
Language:
English
Journal Info:
Publisher: Institute of Electrical and Electronics Engineers
Country of Publication: United States
NLM ID: 101211035
Publication Model: Print
Cited Medium: Print
ISSN: 1045-9227 (Print)
Linking ISSN: 10459227
NLM ISO Abbreviation: IEEE Trans Neural Netw
Subsets: PubMed not MEDLINE
Imprint Name(s):
Original Publication: New York, NY : Institute of Electrical and Electronics Engineers, c1990-
Entry Date(s):
Date Created: 19940101
Date Completed: 20121002
Latest Revision: 20161020
Update Code:
20250114
DOI:
10.1109/72.298232
PMID:
18267835
Database:
MEDLINE

Abstract:

In this paper, analog circuit designs that implement Gibbs samplers with fully parallel computation are presented. The Gibbs sampler for a discrete solution space (or Boltzmann machine) can be used to solve both deterministic and probabilistic assignment (association) problems. The primary drawback to the use of a Boltzmann machine for optimization is its computational complexity, since updating of the neurons is typically performed sequentially. We first consider the diffusion equation emulation of a Boltzmann machine introduced by Roysam and Miller (1991), which employs a parallel network of nonlinear amplifiers. It is shown that an analog circuit implementation of the diffusion equation requires a complex neural structure incorporating matched nonlinear feedback amplifiers and current multipliers. We introduce a simpler implementation of the Boltzmann machine, using a "constant gradient" diffusion equation, which eliminates the need for a matched feedback amplifier. The performance of the Roysam and Miller network and the new constant gradient (CG) network is compared using simulations for the multiple-neuron case and by integrating the Chapman-Kolmogorov equation for a single neuron. Based on the simulation results, heuristic criteria for establishing the diffusion-equation boundaries and the neuron sigmoidal gain are obtained. The final CG analog circuit is suitable for VLSI implementation, and hence may offer rapid convergence.
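For context, the sequential updating that the paper's analog networks aim to parallelize can be sketched in software as a standard Gibbs sweep over the binary units of a Boltzmann machine: each unit is set to 1 with probability given by a sigmoid of its local field, visited one unit at a time. The weight matrix, biases, and temperature schedule below are illustrative assumptions for a toy problem, not values from the paper.

```python
import math
import random

def gibbs_sweep(s, W, b, T=1.0, rng=random):
    """One sequential Gibbs sweep over binary states s[i] in {0, 1}.

    Conditional probability of turning unit i on:
        p(s_i = 1 | rest) = sigmoid((b_i + sum_{j != i} W_ij * s_j) / T)
    W is assumed symmetric with zero diagonal, as in a Boltzmann machine.
    """
    n = len(s)
    for i in range(n):
        field = b[i] + sum(W[i][j] * s[j] for j in range(n) if j != i)
        p = 1.0 / (1.0 + math.exp(-field / T))
        s[i] = 1 if rng.random() < p else 0
    return s

def anneal(s, W, b, temps, rng=random):
    """Run one sweep per temperature in a decreasing schedule (simulated
    annealing toward a low-energy configuration)."""
    for T in temps:
        gibbs_sweep(s, W, b, T, rng)
    return s
```

For example, with strongly positive biases and a low temperature, a sweep drives both units on: `gibbs_sweep([0, 0], [[0.0, 0.0], [0.0, 0.0]], [10.0, 10.0], T=0.1, rng=random.Random(0))` returns `[1, 1]`. Updating the units one at a time, as here, is exactly the serial bottleneck that motivates the paper's parallel diffusion-network implementations.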