### ZiF Research Group

# Robust Finance: Strategic Power, Knightian Uncertainty, and the Foundations of Economic Policy Advice

## Jour Fixe

The Jour Fixe is usually scheduled at 10:00 a.m. on Wednesdays in the Round Table unless otherwise indicated. The talk and discussion are followed by lunch at 12:30 p.m. in the ZiF cafeteria. To attend a talk, please email Oliver Claas. Guests are also welcome to join the group for lunch. Lunch tickets (8 Euro) are sold at the Apartment Management.

**March 11, 2015** (Different room: Elias Room; no lunch at the ZiF)

**Frank Riedel** (Bielefeld University, GER): *Evaluating the Damage of Speculation*

We discuss an archetypical case study of a misguided speculation entered into by a local city treasurer. While it is clear from an economic point of view that the treasurer violated a local law by speculating, the more demanding question is how to quantify the damage incurred by the speculation, either at the moment of entering the position or ex post, after the position was closed. Such a quantitative assessment is called for by the German Federal Court of Justice (Bundesgerichtshof). We discuss that law and ways to quantify the losses in monetary terms.

**Patrick Beißner** (Bielefeld University, GER): *Non-Implementability of Arrow–Debreu Equilibria by Continuous Trading under Knightian Uncertainty*

Under risk, Arrow–Debreu equilibria can be implemented as Radner equilibria by continuous trading of a few long-lived securities. We show that this result generically fails if there is Knightian uncertainty about the volatility. Implementation is possible only if all discounted net trades of the equilibrium allocation are mean ambiguity-free. The talk is based on joint work with Frank Riedel.

**March 18, 2015**

**Peter Bank** (Technical University of Berlin, GER): *Stochastic convex envelopes and convex duality in singular control*

We discuss a notion of convexity for stochastic processes and show how it can be used to develop a theory of convex duality for certain singular control problems arising in finance and economics. The talk is based on a joint paper with H. Mete Soner and Moritz Voß.

**Giorgio Ferrari** (Bielefeld University, GER): *Optimal Boundary Surface for Irreversible Investment with Stochastic Costs*

In this talk we examine a Markovian model for the optimal irreversible investment problem of a firm aiming to minimize its total expected costs of production. We model market uncertainty and the cost of investment per unit of production capacity as two independent one-dimensional regular diffusions, and we consider a general convex running cost function. The optimization problem is set up as a three-dimensional degenerate singular stochastic control problem. We obtain the optimal control as the solution of a Skorohod reflection problem at a suitable boundary surface. This boundary arises from the analysis of a family of two-dimensional parameter-dependent optimal stopping problems, and it is characterized in terms of the family of unique continuous solutions to parameter-dependent nonlinear integral equations of Fredholm type. The talk is based on a joint paper with Tiziano De Angelis and Salvatore Federico.

**March 25, 2015**

**Peter Klibanoff** (Northwestern University, USA): *Ellsberg behavior: a monotonicity consideration and its implications*

Consider the set of acts generated from all (Anscombe–Aumann) mixtures of two acts f and g, {αf + (1 − α)g : 0 ≤ α ≤ 1}, and think of choosing from this set. We propose a preference condition, monotonicity in mixtures, which says that clearly improving the act f (in the sense of weak dominance) makes putting more weight on f more desirable. We show that this property has strong implications for preferences exhibiting ambiguity-averse behavior as in the classic Ellsberg (1961) paradoxes. For example, we show that maxmin expected utility (MEU) preferences (Gilboa and Schmeidler 1989) satisfy monotonicity in mixtures if and only if the set of probability measures appearing in the preference representation is a singleton. In other words, for MEU preferences, monotonicity in mixtures and Ellsberg behavior are incompatible. MEU is not the only class of preferences for which this incompatibility holds, and we explore several directions in which this stark finding may be extended. Moreover, we demonstrate that the incompatibility is not between monotonicity in mixtures and Ellsberg behavior (or even ambiguity aversion more generally) per se, by showing that these properties are compatible in other models of preferences. For example, we show that smooth ambiguity preferences (Klibanoff, Marinacci, and Mukerji 2005) can satisfy both properties simultaneously as long as they exhibit a coefficient of relative ambiguity aversion everywhere less than one. We also relate our results to findings from the earlier literature on comparative statics of risky portfolios under expected utility. The talk is based on a joint paper with Soheil Ghili.

**Sujoy Mukerji** (University of Oxford, GBR): *Incomplete Information Games with Smooth Ambiguity Preferences*

We propose equilibrium notions for incomplete information games involving players who perceive ambiguity about the types of others. Players have smooth ambiguity preferences (Klibanoff, Marinacci, and Mukerji, 2005) and may be ambiguity averse. In the smooth ambiguity model it is possible to hold an agent's information fixed while varying the agent's ambiguity attitude from aversion to neutrality (i.e., expected utility). This facilitates a natural way to understand the effect of introducing ambiguity attitude into a strategic environment. Our focus is on extensive form games, specifically multi-stage games with observed actions, and on equilibrium concepts satisfying notions of perfection and dynamic consistency. We propose the notion of Dynamically Consistent Perfect Interim Equilibrium (DCPIE) for such games, investigate its properties, and provide several examples applying DCPIE.

**Igor Muraviev** (Bielefeld University, GER): *Kuhn's Theorem for Extensive Form Ellsberg Games*

The paper generalizes Kuhn's Theorem to extensive form games in which players condition their play on the realization of ambiguous randomization devices. It proves that ambiguous behavioral and ambiguous mixed strategies are outcome equivalent only if the latter strategies satisfy a rectangularity condition and the game is of perfect recall. In addition, the paper shows that not only must the profile of ambiguous strategies be appropriately chosen, but the extensive form must also satisfy further restrictions beyond those implied by perfect recall, in order to ensure that each player respects his ex ante contingent choice as play evolves.
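For orientation, the two ambiguity models named in these abstracts have standard numerical representations; the display below is a sketch in generic notation (an act f, a utility index u, a set C of priors, a second-order prior μ, an ambiguity-attitude function φ), not notation taken from the talks themselves:

```latex
% Maxmin expected utility (Gilboa-Schmeidler 1989):
% an act is evaluated by its worst expected utility over the set C of priors
V_{\mathrm{MEU}}(f) = \min_{p \in C} \int u(f)\,\mathrm{d}p

% Smooth ambiguity (Klibanoff-Marinacci-Mukerji 2005):
% expected utilities under each model p are aggregated through \phi
% with second-order prior \mu; ambiguity aversion = concavity of \phi
V_{\mathrm{KMM}}(f) = \int \phi\!\left( \int u(f)\,\mathrm{d}p \right) \mathrm{d}\mu(p)
```

In this notation, Ellsberg behavior under MEU requires a non-singleton set C, which is exactly what the monotonicity-in-mixtures condition of the first talk rules out.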

**April 15, 2015**

**Jacco Thijssen** (University of York, GBR): *Attrition and Preemption in a Continuous-Time Investment Game with Stochastic Payoffs*

In this paper we analyse a stochastic dynamic model of investment in a duopoly in which both first- and second-mover advantages can exist. We construct a subgame perfect equilibrium and show that, in equilibrium, both preemption and attrition can occur along individual sample paths. In order to determine the attrition region, a non-standard constrained optimal stopping problem needs to be solved. The solution to this problem is characterized using the Snell envelope and is shown to be non-trivial. We show that, almost surely, there is a positive probability of reaching the preemption region from the attrition region. Such behavior is not possible in the deterministic version of the model. A simulation-based numerical example illustrates the model and shows the relative likelihoods of investment taking place in the attrition and preemption regions.

**Jan-Henrik Steg** (Bielefeld University, GER): *Symmetric Equilibria in Stochastic Timing Games*

We construct subgame-perfect equilibria with mixed strategies for symmetric stochastic timing games with arbitrary strategic incentives. The strategies are qualitatively different for local first- or second-mover advantages, which we analyze in turn. When there is a local second-mover advantage, the players may conduct a war of attrition with stopping rates that we characterize in terms of the Snell envelope from the general theory of optimal stopping; this is very general but still provides a clear interpretation. With a local first-mover advantage, stopping typically results from preemption and is abrupt. Equilibria may differ in the degree of preemption, that is, in precisely which points trigger it. We provide an algorithm to characterize where preemption is inevitable and to establish the existence of corresponding payoff-maximal symmetric equilibria.

**Tobias Hellmann** (Bielefeld University, GER): *Ambiguity in a Real Options Game*

In this paper we study a two-player investment game with a first-mover advantage in continuous time with stochastic payoffs, driven by a geometric Brownian motion. One of the players is assumed to be ambiguous, with maxmin preferences over a strongly rectangular set of priors. We develop a strategy and equilibrium concept allowing for ambiguity and show that equilibria can be preemptive (a player invests at a point where investment is Pareto dominated by waiting) or sequential (one player invests as if she were the exogenously appointed leader). Following the standard literature, the worst-case prior for the ambiguous player if she is the second mover is obtained by setting the lowest possible trend in the set of priors. However, if the ambiguous player is the first mover, then the worst-case prior can be given by either the lowest or the highest trend in the set of priors. This novel result shows that "worst-case prior" in a setting with geometric Brownian motion and κ-ambiguity does not equate to "lowest trend".
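To fix ideas on the "strongly rectangular set of priors", a common specification in this literature is κ-ignorance (in the spirit of Chen and Epstein); the following sketch illustrates that setting rather than reproducing the talk's exact model. The payoff process follows a geometric Brownian motion whose drift is known only up to an interval:

```latex
\mathrm{d}X_t = \alpha_t X_t\,\mathrm{d}t + \sigma X_t\,\mathrm{d}W_t,
\qquad \alpha_t \in [\mu - \kappa,\ \mu + \kappa]
```

The maxmin player evaluates each strategy under the drift process in this band that minimizes her value; the novelty reported above is that for the first mover this minimizing drift need not be the constant lowest trend μ − κ.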

**April 22, 2015**

**Johannes Lenhard** (Bielefeld University, GER): *Knightian Uncertainty Built In*

I would like to discuss two different senses in which Knightian uncertainty is built into mathematical models. Each of these senses is related to a transformation of the conception of mathematical modeling. My presentation will consider both transformations in historical order. The first one happened in the 17th century, when mathematics became an essential part of science, or better: modern science. The quest for certainty was, and arguably still is, a characteristic motive of modernity, and mathematics was always seen as a paradigm of certain knowledge. However, the turn that interests me is the inclusion of probability and chance, i.e. of uncertain events, into mathematical models. Pascal, in particular, built uncertainty into a mathematical argument with his famous wager. I will argue that he in fact distinguished between what Frank Knight later called (measurable) risk and (unmeasurable) uncertainty.

The second transformation is a recent one that is still under way, namely the impact of the computer on the conception of mathematical modeling. According to a widespread view, computers simply enlarge the realm of tractability for mathematical models. I will argue that this view is fallacious (the "fallacy of magnitude and power"). My main point is that the methodology of computational modeling exploits the interplay between data, predictions, and iterated model adaptations. This inevitably leads to uncertainties that are built into the models.

**Frederik Herzberg** (Bielefeld University, GER): *Epistemic justification, formally: Alternatives to foundationalism*

This paper formally explores the common ground between mild versions of epistemological coherentism and infinitism; it proposes, and argues for, a hybrid coherentist–infinitist account of epistemic justification. First, the epistemological regress argument and its relation to the classical taxonomy of epistemic justification (foundationalism, infinitism, and coherentism) is reviewed. We then recall recent results proving that an influential argument against infinite regresses of justification, which alleges their incoherence on account of probabilistic inconsistency, cannot be maintained. Furthermore, we prove that the Principle of Inferential Justification has rather unwelcome consequences, formally resembling the Sorites paradox, as soon as it is iterated and combined with a natural Bayesian perspective on probabilistic inferences. We conclude that strong versions of foundationalism and infinitism should be abandoned. Positively, we provide a rough sketch of a graded formal coherence notion, according to which infinite regresses of epistemic justification will often have more than a minimal degree of coherence.

**April 29, 2015**

**Ulrich Horst** (Humboldt University of Berlin, GER): *Mathematical Models of Limit Order Books*

A significant part of financial transactions is nowadays carried out over an electronic limit order book (LOB), where unexecuted orders are stored and displayed while awaiting execution. From a mathematical perspective, LOBs are high-dimensional, complex priority queueing systems. Incoming limit orders can be placed at many different price levels, while incoming market orders are matched against standing limit orders according to a set of priority rules. The inherent complexity of limit order books renders their mathematical analysis challenging. In this talk we review recent results on scaling limits for LOBs. Specifically, we consider queueing-theoretic LOB models whose dynamics converge to a coupled PDE or SPDE system, depending on the choice of scaling. Our models are flexible enough to allow for a dependence of price dynamics on standing order volumes. This captures the well-documented empirical fact that order imbalances at the top of the book are a significant determinant of stock price dynamics over short time spans.
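The priority rules mentioned here can be made concrete with a minimal price-time-priority matching sketch (an illustration only; the talk's queueing-theoretic models are far richer, and the class and method names below are hypothetical, chosen just for this example):

```python
from collections import deque

class LimitOrderBook:
    """One side of a minimal limit order book: resting sell (ask) orders,
    matched against incoming market buy orders by price-time priority."""

    def __init__(self):
        self.asks = {}  # price level -> FIFO queue of resting order sizes

    def add_limit_sell(self, price, size):
        # A new limit order joins the back of its price level's queue.
        self.asks.setdefault(price, deque()).append(size)

    def market_buy(self, size):
        """Fill against the best (lowest) ask price first; within a price
        level, earlier orders fill first. Returns a list of (price, qty)."""
        fills = []
        while size > 0 and self.asks:
            best = min(self.asks)            # price priority
            queue = self.asks[best]
            while size > 0 and queue:
                qty = min(size, queue[0])    # time priority within the level
                fills.append((best, qty))
                size -= qty
                if qty == queue[0]:
                    queue.popleft()          # resting order fully executed
                else:
                    queue[0] -= qty          # partial fill, order stays queued
            if not queue:
                del self.asks[best]          # price level exhausted
        return fills
```

For example, with resting sells of 3 units at 100 and 5 units at 101, a market buy of 6 fills 3 at 100 and 3 at 101, leaving 2 units resting at 101.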

**May 6, 2015**

**Volker Böhm** (Bielefeld University, GER): *Dynamics of Asset Prices with Geometric Decay*

This presentation reexamines and extends the results of Böhm/Chiarella/He/Hüls (2013). It was shown there that the asset price dynamics of a standard CAPM with heterogeneous agents (chartists and fundamentalists) and expectations based on geometric decay are described by a noninvertible two-dimensional map. Its unique stationary point undergoes a period-doubling bifurcation (PD) followed by a global Neimark–Sacker bifurcation (NS). The existence of chaotic orbits was established via a homoclinic bifurcation. After deriving a canonical symmetric form of the dynamical system, we obtain the analytical forms of the bifurcation curves for the PD and NS bifurcations in two parameters, one characterizing the behavioral/heterogeneity features and the other the decay parameter. These show that global stability of the stationary state or of a two-period cycle occurs for sufficiently small or large rates of decay δ at all levels of the behavioral parameter.

The ensuing alternating quasi-periodic motion on two invariant curves (ICs) after the NS bifurcation is destroyed through a sequence of breakups into finite periodic orbits and chaotic attractors, in parallel with the appearance of homoclinic orbits. For large levels of the behavioral parameter there exists a non-degenerate interval of the decay parameter within which quasi-periodic motion is destroyed in a diversity of different orbit structures, for example with coexisting finite stable cycles, quasi-periodic orbits, and chaotic orbits. Numerical evidence suggests that the parameter range over which the two ICs are destroyed can be divided into two areas: one where orbits are essentially periodic and sample autocorrelation is significant but monotonically declining, and another where orbits are chaotic and sample autocorrelation is insignificantly small but variable. The talk is based on joint work with Thorsten Hüls.

**May 13, 2015**

**Jean-Marc Tallon** (Paris School of Economics and CNRS, Paris, FRA): *Robust Social Decisions*

We provide possibility results on the aggregation of beliefs and tastes for Monotone, Bernoullian, and Archimedean preferences in the sense of Cerreia-Vioglio, Ghirardato, Maccheroni, Marinacci, and Siniscalchi (2011). We propose a new axiom, Unambiguous Pareto Dominance, which requires that if the unambiguous parts of individuals' preferences over a pair of acts agree, then society should follow them. We characterize the resulting social preferences and show that it is enough that individuals share a prior to allow non-dictatorial aggregation. A further weakening of this axiom on common-taste acts, where cardinal preferences are identical, is also characterized. It gives rise to a set of relevant priors at the social level that can be any subset of the convex hull of the individuals' sets of relevant priors. We then apply these general results to the maxmin expected utility model, the Choquet expected utility model, and the smooth ambiguity model. We end with a characterization of the aggregation of ambiguity attitudes. The talk is based on a joint paper with Eric Danan, Thibault Gajdos, and Brian Hill.

**Alexander Zimper** (University of Pretoria, RSA): *Rationalizable information equilibria*

Roy Radner's (1979) rational expectations equilibrium (REE) is the standard general equilibrium concept for asset economies with asymmetric information. In an REE every agent maximizes his expected utility with respect to his private information, augmented with the information revealed through a common forecast function that, self-fulfillingly, becomes the equilibrium price function. This paper argues that the REE concept may give rise to implausible equilibrium net trades. To exclude these implausible equilibria, we introduce the rationalizable information equilibrium (RIE) as an alternative general equilibrium concept which explicitly formalizes the cognitive reasoning processes of highly rational agents. In an RIE, equilibrium prices and net trades must be consistent with the notion that every agent infers information from net-trade decisions at given prices, under the assumption that all agents' respective expected utility maximization problems are common knowledge between them. We prove the refinement result that every revealing RIE with a one-to-one equilibrium price function is also a revealing REE, whereas the converse statement is not true. We also prove the existence of no-trade RIE.

**Françoise Forges** (Paris Dauphine University, FRA): *Cheap Talk and Commitment in Bayesian Games*

Let us allow the players of a one-shot Bayesian non-cooperative game to agree on a committed joint decision after having exchanged information through cheap talk. Simple examples show that, even if one player is uninformed and does not care about the other, informed, player's action (a generalized sender-receiver game), no jointly agreed decision may be achievable. Sufficient conditions are given for the existence of allocations that can be implemented by means of (perfect Bayesian) equilibria involving cheap talk and unanimous approval. Crucial use is made of results that were originally established for repeated games with incomplete information by Sorin (1983), Simon, Spież, and Toruńczyk (1995), and Renault (2000). The approach is thus similar to that of Aumann and Hart (2003), who show how repeated games can be used to analyze the effects of cheap talk before the non-cooperative play of a Bayesian game. The talk is based on joint work with Ulrich Horst and Antoine Salomon.

**June 3, 2015**

**Birgit Rudloff** (Princeton University, USA): *Systemic Risk and Beyond: Scalar vs Multivariate Approaches*

We propose a multivariate approach to the measurement of systemic risk that combines the tasks of measuring risk and allocating it to the d banks in the system. This is in contrast to the scalar approach, predominant in the current literature, which separates the two tasks. We show that the multivariate approach makes it possible to consider feedback effects that take into account the impact of capital regulation on the system of banks. Furthermore, we show that the multivariate risk measures provide significantly more information about systemic risk than previous approaches. We apply our methodology to systemic risk aggregation as described in Chen, Iyengar & Moallemi (2013) and to network models as in Eisenberg & Noe (2001) and Cifuentes, Shin & Ferrucci (2005). We provide dual representation results and give an economic interpretation. In the second part of the talk, we provide an overview of other applications of the multivariate approach in risk measurement and its relation to the more traditional scalar approaches.

**Emanuela Rosazza-Gianin** (University of Milano–Bicocca, ITA): *Pareto allocations and optimal risk sharing for quasiconvex risk measures*

The main goal of the talk is to generalize the characterization of Pareto optimal allocations and of optimal risk sharing known for convex risk measures (see, among others, Jouini, Schachermayer, and Touzi, 2008) to the case of quasiconvex risk measures. The main motivation is the increasing interest devoted to quasiconvex risk measures, since, as discussed by Cerreia-Vioglio et al. (2011), Drapeau and Kupper (2010), and Frittelli and Maggis (2011), the right formulation of diversification of risk is quasiconvexity (instead of convexity). Following the approach of Jouini et al. (2008) for convex risk measures, in the quasiconvex case we provide sufficient conditions for allocations to be (weakly) Pareto optimal in terms of exactness of the so-called quasiconvex inf-convolution. Moreover, we give a necessary condition for weakly optimal risk sharing that is also sufficient under cash-additivity of at least one of the risk measures. The talk is based on joint work with Elisa Mastrogiacomo.
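For convex risk measures, the benchmark being generalized here, optimal risk sharing is governed by the classical inf-convolution: a total position X is split between two agents with risk measures ρ₁, ρ₂ so as to minimize aggregate risk (generic notation, a sketch rather than the talk's quasiconvex definition):

```latex
(\rho_1 \,\square\, \rho_2)(X) = \inf_{Y} \bigl\{ \rho_1(Y) + \rho_2(X - Y) \bigr\}
```

An allocation (Y, X − Y) attaining this infimum is called exact, and in the convex case exactness characterizes Pareto optimality; the talk's contribution transfers this kind of characterization to a quasiconvex analogue of the inf-convolution.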

**June 24, 2015**

**Luca Rigotti** (University of Pittsburgh, USA): *An Introduction to Incomplete Preferences, Knightian Uncertainty, and Applications*

The talk focuses on the idea that preferences may not satisfy the completeness axiom, and in particular covers the model of decision making under Knightian uncertainty introduced by Bewley (1986), in which alternatives are ranked using sets of probability distributions. I will discuss how one can model choices in such a setting and present a characterization result that can be widely applied to describe behavior. I will then focus on applications of this model to general equilibrium theory, contract theory, game theory, and mechanism design. The purpose of the talk is to show that economic applications of this model produce unique and interesting results.

**Tuesday, June 30, 2015** (Different room: Corner 214)

**Herbert Dawid** (Bielefeld University, GER) and **Sander van der Hoog** (Bielefeld University, GER): *Bubbles, Crashes and the Financial Cycle: The Limits to Credit Growth*

This paper explores how different credit-market and banking regulations affect business fluctuations. Capital-adequacy and reserve requirements are analysed for their effect on the risk of severe downturns. We develop an agent-based macroeconomic model in which financial contagion is transmitted through balance sheets in an endogenous firm-bank network; the model incorporates firm bankruptcy and heterogeneity among banks to capture the fact that contagion effects are bank-specific. Using concepts from the empirical literature to identify the amplitude and duration of recessions and expansions, we show that more stringent liquidity regulations are best at dampening output fluctuations and preventing severe downturns. Under such regulations, both leverage along expansions and the amplitude of recessions become smaller. More stringent capital requirements induce larger output fluctuations and lead to deeper, more fragile recessions. This indicates that the capital adequacy requirement is pro-cyclical and therefore not advisable as a measure to prevent financial contagion.

**Andreas Hamel** (Free University of Bolzano – Bozen, ITA): *Set-valued approaches to multi-utility representation and optimization*

Let a multi-utility representation of an incomplete preference be given. Using an abstract convexity framework, we define complete lattices of sets which serve as image spaces for corresponding set-valued utility maximization problems. Examples include the representations due to Evren and Ok, among others, as well as stochastic dominance orders. Adequate solution concepts for the set-valued problems are introduced. The (still not completely answered) question of how to actually solve these problems is discussed, and the set-valued approach is compared to the vector-valued one, in which one directly maximizes the multi-utility function with respect to a (vector) partial order.

**July 8, 2015**

**Samuel Drapeau** (Shanghai Jiao Tong University, CHN): *Numerical Representation of Convex Preferences on Anscombe–Aumann Acts*

We study the preferences of agents for diversification and better outcomes when they face both measurable and unmeasurable uncertainty, in Frank Knight's formulation. Following Anscombe and Aumann, such a situation can be modeled by preferences expressed on stochastic kernels, that is, scenario-dependent lotteries. By means of automatic continuity methods based on the Banach–Dieudonné theorem on Fréchet spaces, we provide a robust representation. This gives us some insight into the nature of the uncertainty aversion these preferences express. We further investigate under which conditions these two intricate dimensions of uncertainty can be disentangled into a distributional uncertainty, in the direction of von Neumann and Morgenstern's theory, and a probability model uncertainty, in the spirit of risk measures. These results make it possible in particular to address both the Allais and the Ellsberg paradox. The talk is based on joint work with Patrick Cheridito, Freddy Delbaen, and Michael Kupper.

**Ludovic Tangpi Ndounkeu** (Universität Wien, AUT): *Robust pricing, hedging and investing in discrete time*

We provide a theoretical framework for pricing, hedging and investing in a model-independent financial market. Our method relies on representation results for convex increasing functionals and extends to hedging problems with given marginals. The talk is based on joint works with Patrick Cheridito and Michael Kupper.

**Qian Lin** (Wuhan University, CHN): *Dynamically Consistent Generalized α-Maxmin Expected Utility*

We establish a class of fully nonlinear conditional expectations. Similarly to the use of linear expectations when a probabilistic description of uncertainty is present, we observe analogous quantitative and qualitative properties. The type of nonlinearity captures the agent's sentiments of optimism and pessimism in an ambiguous environment. We then introduce an expected utility under a nonlinear expectation and show monotonicity and continuity of the utility. Risk aversion is characterized, and the properties of the certainty equivalent are discussed. An Arrow–Pratt approximation of the static and dynamic certainty equivalents illustrates the shape of the risk and ambiguity premia. Finally, we derive an asset pricing formula in which optimism reduces the ambiguity premium. The talk is based on joint work with Patrick Beißner.
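The α-maxmin form of nonlinearity can be sketched, in its static version, as a convex combination of the pessimistic and optimistic evaluations over a set 𝒫 of priors (generic notation; the dynamically consistent conditional version studied in the talk is more delicate):

```latex
V_\alpha(f) = \alpha \min_{p \in \mathcal{P}} \mathbb{E}_p[u(f)]
            + (1 - \alpha) \max_{p \in \mathcal{P}} \mathbb{E}_p[u(f)],
\qquad \alpha \in [0, 1]
```

Here α weights pessimism: α = 1 recovers maxmin expected utility and α = 0 pure optimism, which indicates how a more optimistic attitude can lower the ambiguity premium in an asset pricing formula of this type.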