
Economics and Learning Systems

Informal Workshop

18-Jun-2001 - 21-Jun-2001

Monday, 18-Jun-2001

Time   Speaker   Title   Abstract
10:00   Gerard Weisbuch   Interacting Agents and Continuous Opinions Dynamics
We will present a model of opinion dynamics in which agents adjust continuous opinions as a result of random binary encounters whenever their difference in opinion is below a given threshold. High thresholds yield convergence of opinions towards an average opinion, whereas low thresholds result in several opinion clusters, as observed in many empirical studies. In the case of interactions across a social network, the number of clusters is increased; clustering can even become extreme in the case of vectors of opinions. Whenever thresholds themselves evolve, opinion clustering is driven by the threshold dynamics. More specific variants of the model have been applied in economics and political science (Peyton Young, Robert Axelrod...).
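For readers who want to experiment with the model, the pairwise adjustment rule described above can be sketched in a few lines of Python. The parameter names and values (N agents, threshold d, convergence rate mu, number of steps) are illustrative choices, not values from the talk.

    # Sketch of the bounded-confidence adjustment rule described in the abstract.
    # Agents meet in random pairs and move toward each other only when their
    # opinions differ by less than the threshold d. All parameter values below
    # are illustrative assumptions.
    import random

    def simulate_opinions(N=200, d=0.2, mu=0.5, steps=50000, seed=1):
        random.seed(seed)
        opinions = [random.random() for _ in range(N)]   # opinions in [0, 1]
        for _ in range(steps):
            i, j = random.sample(range(N), 2)            # random binary encounter
            diff = opinions[i] - opinions[j]
            if abs(diff) < d:                            # interact only if close enough
                opinions[i] -= mu * diff                 # each moves toward the other
                opinions[j] += mu * diff
        return opinions

    # A large threshold (e.g. d = 0.5) typically yields convergence toward a single
    # average opinion; a small one (e.g. d = 0.1) leaves several separate clusters.
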
10:50   Coffee break
 
11:00   Daniel Heymann   Treatment of Expectations and Learning Processes in Macroeconomic Models
The view that misperceptions about the future outcomes of current plans can generate business fluctuations has a long tradition in Macroeconomics. Theories that allow for the existence of intertemporal coordination failures can have quite different features. They have in common the argument that agents decide on the basis of less than perfect knowledge of the laws of motion of the environment, that is, they back away from the rational expectations assumption. A class of cyclical ups and downs may emerge when agents cannot forecast accurately the economy's growth path. This can be illustrated using a model with standard features, where agents have to plan their consumption and capital accumulation depending on expectations about the future aggregate behavior of the economy. When forecasts are represented by a simple error-correction learning scheme, simulations indicate that the response to a one-shot productivity shift can follow a non-monotonic path, leading to transitional cycles.
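As a rough illustration of the kind of error-correction learning scheme mentioned above, the following Python sketch revises a forecast in proportion to the last forecast error. The gain parameter lam, the productivity series, and the timing of the shift are assumptions for illustration, not the model used in the talk.

    # Minimal sketch of an error-correction (adaptive) forecasting rule:
    # the forecast is revised by a fraction lam of the last forecast error.
    # The gain lam and the productivity series are illustrative assumptions.

    def error_correction_forecasts(observed, lam=0.3, initial=1.0):
        forecast, path = initial, []
        for x in observed:
            path.append(forecast)
            forecast += lam * (x - forecast)   # revise toward the realized value
        return path

    # One-shot productivity shift at t = 10: forecasts catch up only gradually,
    # so plans based on them adjust along a transitional path rather than jumping.
    productivity = [1.0] * 10 + [1.2] * 30
    print([round(f, 3) for f in error_correction_forecasts(productivity)[:16]])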

 

Tuesday, 19-Jun-2001

Time   Speaker   Title   Abstract
10:00   Gerhard Hanappi   Introduction and Survey of Different Methodological Approaches
 
11:00   Thomas A. Matyas   Models of Learning in Human Movement
An interdisciplinary consideration of learning requires theoretical language able to transcend disciplinary boundaries. Recently, models of human movement learning have been influenced by several trends also evident in the consideration of learning by societies, markets, biological neural networks and artificial neural networks. At the neurophysiological level evolutionary concepts of neural network construction have begun to impact on the understanding of neuroplasticity in learning and adaptation following brain damage. At the behavioural level the seemingly ubiquitous power law of learning has been challenged, with investigators examining time series of training data to analyze the temporal structure in the apparently complex development of performance improvement. Nonlinear dynamic processes have been suggested in theoretical papers and the tools of time series analysis are being applied to experimental observations. These trends are illustrated in this paper with data recently obtained in our laboratory during practice of a simple novel action that is difficult to perform for most individuals: abduction of the great toe. Analysis of time series obtained via computerized optical kinematic recordings showed that short-term trial-to-trial fluctuation, modeled as uninformative "noise" in power law analyses, did contain meaningful temporal structure. A stochastic ARIMA (0,1,1) model appeared to fit well. However, more detailed examination of lag 1 scatterplots for first order differences revealed nonlinearities capable of accounting for the nonstationary nature of the time series. Nonstationarity in mean performance is a necessary feature in any learning system that can succeed in approaching a performance goal. A possible inference from this nonlinear stochastic model is that a relatively simple neural system capable of comparing successive differences in trials and ordinally evaluating them could be sufficient. Interestingly, the equation is consistent with the idea that noise is necessary to the discovery of improved performance. Without the "random" innovation term in the ARIMA equation there would be no opportunity for the (nonlinear) moving average component to exert a biasing drift towards target performance. Interaction between what appears to be complex, unpredictable variability (perhaps system noise) and a simple comparator circuit with short-term memory could be a model for a variety of structures able to learn under conditions of unsupervised trial-and-error practice and intrinsic feedback of performance outcome.
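The noise-plus-comparator idea sketched in the abstract can be illustrated with a small simulation. The specific comparator rule, the carry-over weight, and all parameter values below are assumptions for illustration, not the fitted model from the study.

    # Sketch of an ARIMA(0,1,1)-like trial-to-trial learning process in the spirit
    # of the abstract: random innovations ("noise") drive changes in performance,
    # and a simple comparator with one-trial memory carries forward innovations
    # that moved performance toward the target while partly reversing the others.
    # The comparator rule and all parameter values are illustrative assumptions.
    import random

    def simulate_practice(trials=200, target=1.0, start=0.0, sigma=0.1,
                          carry=0.8, seed=0):
        random.seed(seed)
        perf, prev_innov = start, 0.0
        history = []
        for _ in range(trials):
            innov = random.gauss(0.0, sigma)          # innovation ("noise") term
            toward_target = (prev_innov > 0) == (target > perf)
            ma_term = carry * prev_innov if toward_target else -carry * prev_innov
            perf += innov + ma_term                   # difference equation on performance
            prev_innov = innov
            history.append(perf)
        return history

    # With sigma = 0 there are no innovations, the comparator has nothing to bias,
    # and mean performance never drifts toward the target.
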

 

Wednesday, 20-Jun-2001

Time   Speaker   Title   Abstract
11:15   David Batten   Artificial Life Approaches to Learning
 

 

Thursday, 21-Jun-2001

Time   Speaker   Title   Abstract
10:00   Jean-Pierre Nadal   Social Clustering (Birds of a feather flock together)
 
11:00   Roberto Perazzo   Models of Evolution with Learning: the Baldwin Effect
 
    Lunch break
 
14:00   Stefania Bandini / Manzoni   A Language for Situated Multi-Agent Systems Based on the Reaction-Diffusion Machine
 
15:00   Helmut Markus Knoflacher   Sustainability - Where in Complex Systems
 

 


