Members of the group [from left to right]: John W. Clark (St. Louis), Lidia Ferreira (Lisbon), Karl Kürten (Vienna), Patrick McGuire (Bielefeld), Henrik Bohr (Lyngby) [top right], Michael J. Barber (Cologne) [bottom right].

- Coding, information processing, learning, and control in natural and artificial neural networks
- Complex dynamics of neural-network models
- Complex behavior of one-dimensional Hamiltonian lattice models
- Control of quantum dynamics
- Protein structure and dynamics

The aim of this major project is to create and implement a comprehensive framework for quantitative description of neural information processing in the brain, based on the hypothesis that ensembles of neurons represent and manipulate information about analog variables in terms of probability density functions (PDFs) over these variables. This approach unites several dominant themes that are now emerging in the new field of computational neuroscience:

- Population coding: analog sensory input or motor output variables are represented in the activities of populations of neurons, as a means to overcome the limited precision of individual neuronal units (only 2-3 bits).
- Neural circuits in the brain are designed to carry out specific computational tasks essential to successful performance of the organism in a changing environment.
- Neural circuits in the brain perform Bayesian statistical inference as a means to cope with the uncertainties stemming from incomplete information about the world.
- Neural circuits in the brain, notably the visual system, use a stream of "bottom-up" sensory inputs to build an internal model of the sensory data; concurrently, a stream of "top-down" model-driven signals are used to impose global regularities on the perceptual input.
- To efficiently accomplish a rich variety of information-processing tasks, synaptic inputs to dendritic trees undergo nonlinear processing (in particular, coincidence detection on dendritic branches).

Decoding of information: We have shown how a time-dependent probability density over a given input or output analog variable may be decoded from the measured activities of a population of neurons, as a linear combination of basis functions ("decoders"), with coefficients given by the individual neuronal firing rates.

Encoding of information: We have shown how the neuronal encoding process may be described by projecting a set of complementary basis functions ("encoders") on the probability density function, and passing the result through a rectifying nonlinearity.

We have shown how both decoders and encoders may be determined by minimizing cost functions that quantify the inaccuracy of the representation.
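As an illustration, the encode/decode scheme can be sketched numerically. Everything in the sketch is an assumption made for concreteness: Gaussian tuning curves stand in for the encoders, and the decoders are obtained by a least-squares fit over a training set of sample densities.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)          # grid over the analog variable
dx = x[1] - x[0]
centers = np.linspace(-1.0, 1.0, 30)     # preferred values of 30 model neurons
encoders = np.exp(-(x[None, :] - centers[:, None]) ** 2 / (2 * 0.15 ** 2))

def encode(pdf):
    """Firing rates: project the encoders on the PDF, then rectify."""
    return np.maximum((encoders * pdf[None, :]).sum(axis=1) * dx, 0.0)

# Decoders chosen to minimize the squared reconstruction error over a
# training set of example PDFs (Gaussian bumps at random locations).
means = rng.uniform(-0.8, 0.8, 100)
train = np.exp(-(x[None, :] - means[:, None]) ** 2 / (2 * 0.1 ** 2))
train /= train.sum(axis=1, keepdims=True) * dx       # normalize each PDF
rates = np.stack([encode(p) for p in train])         # shape (100, 30)
decoders, *_ = np.linalg.lstsq(rates, train, rcond=None)   # shape (30, 200)

def decode(r):
    """Reconstructed PDF: linear combination of decoders weighted by rates."""
    return r @ decoders

# Round trip on a held-out density
p = np.exp(-(x - 0.3) ** 2 / (2 * 0.1 ** 2))
p /= p.sum() * dx
p_hat = decode(encode(p))
err = ((p - p_hat) ** 2).sum() * dx
```

The round-trip error stays small because the decoders are optimized for exactly the cost described above: the squared inaccuracy of the representation over the family of densities the population is expected to carry.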

Expressing a given computational task in terms of manipulation and transformation of probabilities, we have shown how this neural representation leads to a specific neural circuit that can perform the required computation within a consistent Bayesian framework, the synaptic weights being explicitly generated in terms of encoders, decoders, conditional probabilities, and priors.

We have shown how to facilitate analysis and application by introducing an intermediate representation with orthogonal (rather than overcomplete) basis functions, assigned to hypothetical "metaneurons" having arbitrary precision.

Figure 1: Transformations between representations defined on the implicit space of the analog variable, the explicit space of the neural activities, and the minimal space of the metaneuron "activities".

We have shown how the representation of probabilistic information and the subsequent design of neural circuits to perform specified computations may be advanced by exploiting the formalism of Bayesian belief nets developed by Pearl. A Bayesian belief net is a graphical representation of a probabilistic model that provides an efficient means for organizing the relations of dependence or independence between the random variables of the model. The resulting neural networks retain important properties of Bayesian belief nets and are therefore called Neural Belief Networks.
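The elementary operation such a network must implement can be stated in a few lines. The numbers below are purely illustrative: a two-state hidden variable with a prior, an observation with a conditional probability table, and a Bayes-rule update of the belief.

```python
import numpy as np

prior = np.array([0.5, 0.5])          # p(x) over a binary hidden state
cond = np.array([[0.9, 0.1],          # p(y | x), rows indexed by x
                 [0.2, 0.8]])
y = 0                                 # observed value of Y
posterior = cond[:, y] * prior        # pool the evidence with the prior
posterior /= posterior.sum()          # renormalize to a probability
```

In the neural implementation, the prior and the conditional table are absorbed into the synaptic weights, and the normalization is carried by the population activity.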

Among other applications, we have designed and simulated a neural belief network that can estimate the velocity of a moving target.

Figure 2: (a) The position of the target is copied into two different populations of neurons, with different time delays. (b) The time delay and the difference of the two copies of the position are used to estimate the velocity.
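The computation depicted in Figure 2 reduces to a finite difference. In this toy sketch the neural populations are abstracted away and replaced by direct delayed copies of the position signal; the delays and time step are illustrative.

```python
import numpy as np

dt = 0.01                                  # simulation time step
t = np.arange(0.0, 2.0, dt)
position = 2.0 * t                         # target moving at velocity 2
d1, d2 = 5, 15                             # delays of the two copies, in steps

# At time k the two populations hold position[k - d1] and position[k - d2];
# their difference over the delay difference estimates the velocity.
k = np.arange(d2, t.size)
v_est = (position[k - d1] - position[k - d2]) / ((d2 - d1) * dt)
```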

In a more elaborate application, we have developed a neural network that employs "bottom-up" sensory inputs to create an internal model of the perceptual data, and that in turn employs "top-down" feedback from this higher-level model to impose global constraints on the bottom-up predictive input. The network is capable of resolving ambiguous sensory input and attending to the larger peak of a bimodal distribution. The PDF construction naturally generates feedforward, feedback, and lateral connections between the neurons, analogous to the pathways found in the anatomy of the cerebral cortex.

Except in the case of tree-structured graphs, Neural Belief Networks pool evidence through the multiplication of neural activation states. This implies the presence of multiplicative (or "higher-order") interactions between neurons (see below). There are indications that such interactions occur in neurobiological systems, but the issue of their existence remains uncertain. Accordingly, we have devised an alternative class of Neural Belief Networks that function only through weighted sums of activities and hence only entail conventional binary synapses.
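One generic way to replace products by weighted sums — offered here only as an illustration, not necessarily the construction used in the work cited below — is to pool evidence in the logarithmic domain, at the price of an exponential output nonlinearity.

```python
import numpy as np

likelihoods = np.array([0.9, 0.7, 0.8])        # evidence from three sources
direct_product = np.prod(likelihoods)           # multiplicative pooling
via_sums = np.exp(np.sum(np.log(likelihoods)))  # pooling by sums in log space
```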

The computational powers of both artificial and natural neural networks are greatly enhanced if the usual binary synaptic interactions felt by a given neuron are supplemented by higher-order (or "multiplicative") interactions that depend on the states of more than one presynaptic neuron. Such interactions are analogous to "many-body" or "multi-spin" interactions in physics. In particular, nth-order couplings (involving one postsynaptic and n presynaptic neurons) provide for a storage capacity proportional to the nth power of the number of neurons, when such networks are used as dynamical content-addressable memories.

Analysis and application of networks with higher-order interactions are hindered by the exponential growth of the number of coupling parameters with order n. An extension of the Hebbian learning prescription serves to specify the coefficients, but there remains the problem of explicit evaluation of the higher-order terms in the stimulus to a generic neuron of the net. This problem has been addressed with considerable success using combinatoric group-theoretical techniques. In particular, the nth-order term in the general "multi-spin" representation of the stimulus has been succinctly expressed in terms of Pólya polynomials, and the series has been summed in the thermodynamic limit of a large system. Moreover, this study has revealed an interesting one-to-one correspondence between the nth-order term in the stimulus expansion and the sum of planar n-particle cluster diagrams for noninteracting quantum particles obeying Fermi statistics.
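A minimal sketch of such a memory (not the Pólya-polynomial machinery itself): second-order stimuli, with couplings between one postsynaptic and two presynaptic neurons specified by a generalized Hebb rule. The network size and pattern load are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 20, 3                                   # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Generalized Hebb rule for third-order couplings J[i, j, k]
# (one postsynaptic index i, two presynaptic indices j, k).
J = np.einsum('pi,pj,pk->ijk', patterns, patterns, patterns) / N ** 2

def update(s):
    """One synchronous step driven by the second-order ("multi-spin") stimulus."""
    h = np.einsum('ijk,j,k->i', J, s, s)       # sums products of state pairs
    return np.where(h >= 0, 1, -1)

# Well below capacity, a stored pattern is recalled as a fixed point.
recalled = update(patterns[0])
```

The price of the enhanced capacity is visible in the shape of `J`: the number of couplings grows as N to the power of the interaction order.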

Computer simulation is used to investigate the conditions under which randomly connected hard-threshold (McCulloch-Pitts) neurons display complex behavior. Special attention is given to the influence of quenched threshold disorder on the repertoire of limit cycles available to the network, and on the complexity of limit cycles as measured by cycle length and degree of neuronal participation (eligibility). The analysis is aided by the consideration of attractor-occupation entropy as a measure of diversity, and a combination of eligibility and diversity as a measure of volatility.

We have carried out extensive studies of the behavior of these measures of dynamical complexity when the threshold of each neuron is altered from its conventional "normal" value by a factor selected randomly from a Gaussian distribution with mean unity and standard deviation d. With the normal thresholds (d=0), taken for each neuron as half the sum of the synaptic weights of its incoming connections, randomly assembled neural nets (RAANNs) of McCulloch-Pitts neurons typically possess only a small repertoire of limit-cycle attractors, which tend to be long and therefore individually complex. For small but finite d (weak disorder), the set of limit cycles remains small and shows little change from the undisturbed, normal case; in other words, there exists a regime of the disorder parameter distinguished by robustness or stability of the unperturbed (normal) cycling behavior. In the opposite extreme of large d (strong disorder), the terminal patterns become less complex, but the number of attractors vastly increases. The middle ground of intermediate d is found to engender features desired for versatile, rapidly responding systems: accessibility to a large set of complex patterns with high diversity and volatility. These findings suggest useful connections with chaos-control theory, in which (for example) the parameters of a system moving on a chaotic attractor are perturbed so as to stabilize a selected periodic orbit among the infinite number of unstable limit cycles embedded in the chaotic attractor.
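The basic numerical experiment can be sketched as follows; the network size, weight statistics, and disorder levels are illustrative stand-ins for those used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 12                                     # small net: state space of size 2**N
W = rng.normal(size=(N, N))                # random synaptic weights

def cycle_length(d):
    """Length of the limit cycle reached under synchronous updates, with each
    normal threshold scaled by a Gaussian factor of mean 1 and std dev d."""
    theta = 0.5 * W.sum(axis=1) * rng.normal(1.0, d, size=N)
    s = rng.integers(0, 2, size=N)         # random initial binary state
    seen = {}
    for step in range(2 ** N + 1):         # recurrence is guaranteed by then
        key = tuple(s)
        if key in seen:
            return step - seen[key]        # first recurrence closes the cycle
        seen[key] = step
        s = (W @ s > theta).astype(int)

normal_cycle = cycle_length(0.0)           # d = 0: normal thresholds
disordered_cycle = cycle_length(0.5)       # intermediate threshold disorder
```

Because the synchronous dynamics is deterministic on a finite state space, every trajectory must enter a limit cycle, so the recurrence test always terminates.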

In summing the parallel outputs of an array of identical FitzHugh-Nagumo model neurons, Collins et al. demonstrated an emergent property associated with stochastic resonance (SR) in multicomponent systems: the enhancement of the response to weak signals due to SR becomes independent of the exact value of the noise variance as the size of the system increases. We have extended this work by examining the response when the input array is assembled from much simpler neuronal units, namely noisy McCulloch-Pitts neurons with a distribution of thresholds. The same emergent property is exhibited: adding more input neurons widens the range of signals the network can detect, but does not significantly improve its peak performance. Further, we have documented an advantage of heterogeneity that complements these findings. A network of heterogeneous model neurons outperforms a similar network with homogeneous units, being sensitive to a wider range of input signals (as measured by mean value). The network architectures are identical, the only difference lying in the distribution of the thresholds: identical thresholds as opposed to two groups with different thresholds.
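A stripped-down version of the summing-array experiment, with illustrative parameters: a subthreshold sinusoid plus independent Gaussian noise drives an array of McCulloch-Pitts units, and the pooled spike count is correlated with the input signal.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(4000)
signal = 0.3 * np.sin(2 * np.pi * t / 200)   # weak: never crosses threshold

def response_correlation(noise_sd, M=100, threshold=1.0):
    """Correlation of the summed spike output of M noisy threshold units
    with the subthreshold input signal."""
    noise = rng.normal(0.0, noise_sd, size=(M, t.size))
    spikes = (signal[None, :] + noise > threshold).astype(float)
    pooled = spikes.sum(axis=0)
    if pooled.std() == 0.0:                  # no unit ever fired
        return 0.0
    return float(np.corrcoef(pooled, signal)[0, 1])

quiet = response_correlation(0.0)            # noiseless: the signal is invisible
moderate = response_correlation(0.5)         # noise lifts the signal over threshold
```

Without noise the array is silent, since the signal alone never reaches threshold; a moderate noise level makes threshold crossings track the signal, which is the stochastic-resonance effect the study quantifies.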

Numerous condensed-matter systems are effectively discrete by nature because the relevant length scales are of the order of the interparticle distance. Such systems are described by a Hamiltonian that is discrete in space, while their time evolution is considered as continuous. Their remarkable behavior, exemplified in charge-density waves, magnetic spirals, disordered crystals, adsorbed monolayers, and magnetic multilayers, stems from a competition between two or more forces that leads to locally stable spatially modulated structures. The particles are non-trivially displaced from a reference lattice, and spatial disorder is created by a highly complex energy landscape in configuration space. The number of locally stable configurations typically increases exponentially with the size of the system. A model system can be envisioned as a chain of N particles connected by harmonic springs, each particle also being subject to an external multi-well potential field. A widely used standard model is the so-called linear chain, consisting of a one-dimensional lattice of N oscillators interacting with nearest neighbors via a harmonic intersite potential. The energy of the system is given by an N-particle Hamiltonian comprising the vibrational kinetic energy, the intersite energy specified by the coupling strength, and the on-site energy specified by an external on-site potential. In addition, the system might be subjected to another external force, notably an external magnetic or electric field.
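With a sinusoidal on-site potential (the Frenkel-Kontorova form, chosen here for concreteness; all constants are illustrative), the potential energy of the chain and a descent into one of its many locally stable configurations look like this:

```python
import numpy as np

def chain_energy(u, C=1.0, K=0.8):
    """Potential energy of displacements u: harmonic nearest-neighbour
    springs of strength C plus a periodic on-site potential of strength K."""
    intersite = 0.5 * C * np.sum((u[1:] - u[:-1]) ** 2)
    onsite = K * np.sum(1.0 - np.cos(2 * np.pi * u))
    return intersite + onsite

def relax(u, lr=0.01, steps=5000, C=1.0, K=0.8):
    """Gradient descent into one of the many locally stable configurations."""
    for _ in range(steps):
        diff = u[1:] - u[:-1]
        grad = np.zeros_like(u)
        grad[1:] += C * diff                       # spring force, right bond
        grad[:-1] -= C * diff                      # spring force, left bond
        grad += 2 * np.pi * K * np.sin(2 * np.pi * u)  # on-site force
        u = u - lr * grad
    return u

rng = np.random.default_rng(4)
u0 = rng.uniform(-0.5, 0.5, size=32)               # random initial displacements
u_star = relax(u0)                                 # a locally stable structure
```

Different random initial displacements relax into different minima, which is the numerical face of the exponentially large set of locally stable configurations described above.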

New techniques involving the use of ultra-high-vacuum systems open the way to the synthesis of novel materials having properties of great technological interest. Artificial thin-film constructs based on ferro- or antiferromagnetic layers separated by non-magnetic spacers have been shown to exhibit quite unusual locally stable structures. Such structures have been experimentally detected, for instance, in Fe/Cr sandwiches and in giant magnetoresistant (GMR) elements consisting of several antiferromagnetically coupled magnetic layers separated from one another by nonmagnetic spacers (e.g., Co/Cu). The highly complex magnetic structures that arise depend on three competing forces: the interlayer exchange energy, the Zeeman term defined by the strength of the magnetic field, and the strength of an intralayer anisotropy energy defined by a periodic on-site potential. The physical variables are specified by the angles between the magnetic moments within the individual layers and a reference axis (e.g., the easy axis). Control parameters are defined by the direction and the strength of the anisotropy, depending on the material. For special technical applications, modeling of GMR multilayers gives valuable information about the appropriate layer thickness and the appropriate magnetic material.
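A minimal sketch of such a multilayer model, with illustrative constants: the energy collects the interlayer exchange, Zeeman, and anisotropy terms described above, and a simple numerical-gradient descent finds a locally stable structure at a given field.

```python
import numpy as np

def multilayer_energy(theta, H, J=-1.0, K=0.5):
    """Energy of layer moments at angles theta to the easy axis:
    interlayer exchange J (antiferromagnetic for J < 0), Zeeman coupling
    to the field H, and uniaxial anisotropy of strength K."""
    exchange = -J * np.sum(np.cos(theta[1:] - theta[:-1]))
    zeeman = -H * np.sum(np.cos(theta))
    anisotropy = K * np.sum(np.sin(theta) ** 2)
    return exchange + zeeman + anisotropy

def relax(theta, H, lr=0.05, steps=3000, eps=1e-6):
    """Descend the energy landscape into a locally stable structure,
    using a finite-difference estimate of the gradient."""
    n = theta.size
    for _ in range(steps):
        grad = np.empty(n)
        for i in range(n):
            up, dn = theta.copy(), theta.copy()
            up[i] += eps
            dn[i] -= eps
            grad[i] = (multilayer_energy(up, H) - multilayer_energy(dn, H)) / (2 * eps)
        theta = theta - lr * grad
    return theta

rng = np.random.default_rng(5)
theta0 = rng.uniform(-np.pi, np.pi, size=8)   # random initial moments
theta_star = relax(theta0, H=0.0)             # zero-field locally stable structure
```

Sweeping H up and down and relaxing at each step yields the hysteresis loops discussed below; which minimum the system occupies depends on its history.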

We have shown that the shape of the magnetoresistance curves and the hysteresis loops characterized by Barkhausen jumps can be tailored by fine-tuning the strength of the interlayer couplings and the strength of the anisotropy constant. The results compare well with experimental GMR and hysteresis shapes.

Another novel finding is that the spatial distribution of the magnetic moments shows fractal patterns which might be accessible to experimental studies. Moreover, the energy landscape -- consisting of exponentially many locally stable minima separated by barriers -- turns out to be a Cantor set. The situation is reminiscent of that encountered in a magnetic glass, involving weak interactions of domains and "magnetic solitons".

Since the birth of quantum theory, human control of the behavior of quantum systems has been a prominent goal, with notable successes in particle acceleration and detection, magnetic resonance, electron microscopy, solid-state electronics, and laser optics. However, it is only in the last two decades that scientists have recognized the need for a comprehensive theory of quantum control that absorbs and adapts general concepts and powerful methods developed within systems engineering. In chemistry, the development of quantum control theory, together with tremendous advances in laser technology, has opened the way to unimolecular control of chemical reactions, a Holy Grail of the field. Even more dramatically, a synergism between quantum control and quantum computation is creating a host of exciting new opportunities for both activities. Some of these developments are reviewed in the paper by Clark, Lucarelli, and Tarn cited below, and were surveyed in a ZiF Kolloquium presented by J. W. Clark.

The role of quantum control in quantum computation is being studied in the context of necessary and sufficient conditions for controllability, complete or approximate, for both finite- and infinite-dimensional state spaces, and for systems with discrete and/or continuous spectra. The infinite-dimensional case has received little attention since the original papers on the subject and deserves careful re-examination in connection with the notion of quantum computation over continuous variables (a process already begun by Lloyd and Braunstein).

Quantum computation is being studied with a view to its application to ab initio solution of quantum many-body problems hitherto regarded as intractable (e.g. macromolecules, heavy nuclei, hadron structure based on QCD). An exponential speedup of computation is promised by the massive parallel processing of quantum pathways, but the conditions under which this promise can be realized remain to be established in practical detail.

For any but the simplest molecules, the precision of our knowledge of the system interactions and Hamiltonian is very limited. On the other hand, the molecule certainly "knows" its own Hamiltonian, and this fact can be exploited by introducing a feedback loop from the molecular system to the laser that generates pulses intended to control the dynamical evolution of the molecule. Information extracted by probing the system is used to guide the shaping of the laser pulse so as to systematically reduce a positive measure of the difference between the desired and actual system response. The latter process is performed by a suitable incremental learning rule, e.g., a gradient-descent or conjugate-gradient minimization routine or a genetic algorithm, until convergence is achieved to an optimal pulse shape. It is now feasible experimentally to execute up to a million pump-probe cycles per second (involving imposition of a shaped laser pulse and subsequent measurement of response). Accordingly, this hybrid computational-experimental scheme, first proposed by Judson and Rabitz, has become a practical reality. The introduction of a Kalman filter is being explored as a more powerful approach to governance of the learning cycle. An intriguing aspect of the Judson-Rabitz scheme is its exploitation of the joint system of apparatus-plus-molecule as an analog computer to determine the optimal pulse shape. In correspondence with the neurobiological example of the modulation of bottom-up sensory input via feedback from a top-down internal model, the "bottom-up" laser signal from the pulse-shaper is iteratively improved through the intervention of "top-down" signals from the molecule's perfect knowledge of its Hamiltonian.
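The closed loop can be caricatured in a few lines. Everything below is a stand-in: an unknown linear map plays the molecule, a target vector plays the desired response, and a finite-difference gradient rule plays the incremental learning algorithm that reshapes the pulse from probe measurements alone — the controller never inspects the "Hamiltonian" directly.

```python
import numpy as np

rng = np.random.default_rng(6)
hidden = rng.normal(size=(4, 8))              # the molecule "knows" this; we don't

def molecule(pulse):
    """Black-box response: the learning loop may only query it."""
    return hidden @ pulse

target = molecule(rng.normal(size=8))         # a desired, reachable response

def learn(steps=4000, lr=0.02, eps=1e-5):
    """Reshape the pulse by gradient descent on the measured error,
    estimating the gradient purely from pump-probe measurements."""
    pulse = np.zeros(8)
    for _ in range(steps):
        base = np.sum((molecule(pulse) - target) ** 2)
        grad = np.empty(8)
        for i in range(8):
            probe = pulse.copy()
            probe[i] += eps                   # perturb one pulse parameter
            grad[i] = (np.sum((molecule(probe) - target) ** 2) - base) / eps
        pulse -= lr * grad
    return pulse

pulse_opt = learn()
final_err = np.sum((molecule(pulse_opt) - target) ** 2)
```

Each loop iteration corresponds to a batch of pump-probe cycles; the experimental feasibility of up to a million such cycles per second is what makes this otherwise expensive query-only optimization practical.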

The determination of protein structure, including the mapping from the primary amino acid sequence to the final, functioning, folded structure in the native environment, is among the most complex and important problems on the current scientific scene. But if anything, the problem of protein dynamics, in its broadest sense, is exponentially more complex!

The challenge of the post-genomic era is to devise and implement novel theoretical and experimental techniques for exploiting and understanding the huge reservoir of data provided by the world effort in genetic sequencing. In pursuing this goal and identifying the functionality of an ever-increasing number of genes, it must be recognized that the lowest-energy state of many proteins, e.g., the prion proteins associated with mad-cow disease, is not the native, functioning form under all conditions. Therefore it is not meaningful to ask for the optimal state of a protein without reference to its environment; rather, one must determine the energy spectrum or the energy landscape in the region of phase space around the native and various non-native states under the various conditions that can occur in the cell. We have devoted considerable effort to the design of an interdisciplinary program to elucidate and explain diverse aspects of protein dynamics and function at the quantum level, enlisting advanced theoretical methods for quantitative treatment of electronic structure as well as sophisticated spectrometric tools based on vibrational circular dichroism and Raman optical activity. Proceeding from the level of quantum structure and dynamics, we seek a deeper understanding of the mechanisms by which a protein can attain a function, lose it, and subsequently regain it.

K.E. Kürten, *Transitions from non-collinear to collinear structures in a magnetic multilayer model*, ZiF Publication 2000/010.

F. Castiglione and K.E. Kürten, *A dynamical model of B-T cell regulation*, ZiF Publication 2001/048.

M.J. Barber, *Neural propagation of beliefs without multiplication*, ZiF Publication 2001/062.

M.J. Barber and B.K. Dellen, *Noise-induced signal enhancement in heterogeneous neural networks*, ZiF Publication 2001/063.

M.J. Barber, J.W. Clark, and C.H. Anderson, *Neural representation of probabilistic information*, ZiF Publication 2001/064.

M.J. Barber, J.W. Clark, and C.H. Anderson, *Neural propagation of beliefs*, ZiF Publication 2001/065.

K.E. Kürten and J.W. Clark, *Higher-order neural networks, Pólya polynomials, and Fermi cluster diagrams*, ZiF Publication 2001/074.

P.C. McGuire, H. Bohr, J.W. Clark, R. Haschke, C.L. Pershing, and J. Rafelski, *Threshold disorder as a source of diverse and complex behavior in random nets*, ZiF Publication 2001/087.

K.E. Kürten and F.V. Kusmartsev, *Creation of glassy structures in magnetic multilayers*, ZiF Publication 2001/104.

J.W. Clark, D.G. Lucarelli, and T.-J. Tarn, *Control of quantum systems*, in Condensed Matter Theories continuation series Advances in Quantum Many-Body Theory, Vol. 6, edited by R.F. Bishop, K.A. Gernoth, and N.R. Walet (World Scientific, Singapore), in press; ZiF Publication 2001/106.

C.H. Anderson, *Basic elements of biological computational systems*, Int. J. Mod. Phys. C 5, 135-137 (1994).

E. Eliasmith and C.H. Anderson, *Developing and applying a toolkit from a general neurocomputational framework*, Neurocomputing 26, 1013-1018 (1999).

C.H. Anderson, Q. Huang, and J.W. Clark, *Harmonic analysis of spiking neuronal ensembles*, Neurocomputing 32-33, 279-284 (2000).

J. Pearl, *Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference* (Morgan Kaufmann, San Mateo, CA, 1988).

J.W. Clark, K.A. Gernoth, S. Dittmar, and M.L. Ristig, *Higher-order probabilistic perceptrons as Bayesian inference engines*, Phys. Rev. E 59, 6161-6174 (1999).

K.E. Kürten, *Quasi-optimized memorization and retrieval dynamics in sparsely connected neural network models*, J. Phys. France 51, 1585-1594 (1990).

J.W. Clark, K.E. Kürten, and J. Rafelski, *Access and stability of cyclic modes in quasirandom networks of threshold neurons obeying a deterministic synchronous dynamics*, in Computer Simulation in Brain Science, edited by R.M.J. Cotterill (Cambridge University Press, Cambridge, UK, 1988), pp. 316-344.

E. Ott, C. Grebogi, and J. A. Yorke, *Controlling chaos*, Phys. Rev. Lett. 64, 1196-1199 (1990).

J.J. Collins, C.C. Chow, and T.T. Imhoff, *Stochastic resonance without tuning*, Nature 376, 236-238 (1995).

H.S. Dhillon, F.V. Kusmartsev, and K.E. Kürten, J. Nonlinear Math. Phys. 8, 38-49 (2001).

K.E. Kürten, *The role of mathematical and physical stability of irregular orbits in Hamiltonian systems*, in Condensed Matter Theories, Vol. 14, edited by D. Ernst, I. Perakis, and S. Umar (Nova Science Publishers, New York, 1999).

K.E. Kürten, *Regular and irregular dynamical behaviour in Hamiltonian lattice models: An approach from complex systems theory*, in Condensed Matter Theories, Vol. 15, edited by G.S. Anagnostatos, R.F. Bishop, K.A. Gernoth, J. Ginis, and A. Theophilou (Nova Science Publishers, New York, 2000), pp. 415-424.

G.M. Huang, T.J. Tarn, and J.W. Clark, *On the controllability of quantum-mechanical systems*, J. Math. Phys. 24, 2608-2618 (1983).

C.K. Ong, G.M. Huang, T.J. Tarn, and J.W. Clark, *Invertibility of quantum-mechanical control systems*, Math. Systems Theory 17, 335-350 (1984).

J.W. Clark, *Control of quantum many-body dynamics: Designing quantum scissors*, in Condensed Matter Theories, Vol. 11, edited by E. V. Ludena, P. Vashishta, and R. F. Bishop (Nova Science Publishers, Commack, NY), pp. 3-19.

S. Lloyd, *Universal quantum simulators*, Science 273, 1073 (1996).

S. Lloyd and S.L. Braunstein, *Quantum computation over continuous variables*, Phys. Rev. Lett. 82, 1784-1787 (1999).

R.S. Judson and H. Rabitz, *Teaching lasers to control molecules*, Phys. Rev. Lett. 68, 1500-1503 (1992).

H. Bohr and J. Bohr, *Topology in protein folding*, in Topology in Chemistry (Gordon and Breach, 1999).