
Winter semester 2018/19

Tuesday, 16.10.2018, 12-1 pm - Room: W9-109

Matthias Ulrich
Universität Bielefeld

Distributional regression for demand forecasting in e-grocery - a case study

In traditional brick-and-mortar retailing, information on customer demand typically results from point-of-sale data. These data are censored, and hence biased, because stock-outs affect individual purchases. In contrast, e-retailing allows customer preferences to be observed before stock-out information becomes known to the buyer and therefore yields uncensored demand data. Moreover, in e-grocery the customer selects a future delivery time slot, so that future demand is partly known to the retailer at the time of the replenishment decision.
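
The abstract does not spell out the model used in the case study; as a rough, hypothetical illustration of distributional demand forecasting, the sketch below fits one gradient-boosted quantile regression per target quantile on synthetic, uncensored order data (all features and numbers are invented).

```python
# Hypothetical sketch: predictive demand quantiles via quantile regression.
# Synthetic stand-in data; not the model or data from the talk.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(size=(n, 3))             # invented features, e.g. price, weekday, promotion
demand = rng.poisson(5 + 10 * X[:, 0])   # synthetic uncensored demand observations

# One model per quantile yields a (pointwise) predictive demand distribution,
# e.g. the 0.99 quantile could back a 99% service-level replenishment rule.
models = {q: GradientBoostingRegressor(loss="quantile", alpha=q).fit(X, demand)
          for q in (0.5, 0.9, 0.99)}
x_new = np.array([[0.7, 0.2, 1.0]])
for q, m in models.items():
    print(f"q={q}: predicted demand {m.predict(x_new)[0]:.1f}")
```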

 

Tuesday, 30.10.2018, 12-1 pm - Room: W9-109

Dietmar Bauer
Universität Bielefeld

Approximation of time series by autoregressions

Autoregressions play a major role in time series analysis because they are relatively easy to estimate, ideally suited for forecasting purposes, and very flexible. This holds for univariate as well as multivariate time series. They can approximate a large class of stationary (and nonstationary) processes, which makes them an ideal starting point for further analyses in many cases. The talk shows, in a range of situations, how these approximation properties can be exploited.
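
As a minimal sketch of this approximation property (not material from the talk), the snippet below simulates an ARMA(1,1) process, which has no finite AR representation, and fits an autoregression with an AIC-selected lag order; the process and sample size are arbitrary choices.

```python
# Minimal sketch: approximating an ARMA(1,1) process by an AR(p) model
# whose order is selected by AIC; illustrative only.
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.tsa.ar_model import ar_select_order

rng = np.random.default_rng(1)
y = ArmaProcess(ar=[1, -0.7], ma=[1, 0.4]).generate_sample(
    nsample=1000, distrvs=rng.standard_normal)

sel = ar_select_order(y, maxlag=20, ic="aic")   # data-driven choice of p
res = sel.model.fit()
print("selected lags:", sel.ar_lags)
print("5-step forecast:", res.forecast(5))      # AR forecasts of the next 5 values
```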

 

Tuesday, 13.11.2018, 12-1 pm - Room: W9-109

Antonello Maruotti
Università di Roma LUMSA

Robust hidden (semi-)Markov models with applications to social and financial data

We introduce multivariate models for the analysis of longitudinal and time-series data. Our models are developed under hidden Markov and semi-Markov settings to describe the temporal evolution of observations, whereas the marginal distribution of observations is described by a mixture of multivariate heavy-tailed distributions. Compared to the normal distribution, heavy-tailed distributions possess one or more additional parameters that permit the modeling of excess kurtosis and the presence of outliers, spurious points, or noise (collectively referred to as bad points), and hence can be viewed as robust extensions of the normal distribution. Accounting for kurtosis and bad points allows for a better fit to both the distributional and dynamic features of the data. For these models, we outline an EM algorithm for maximum likelihood estimation which exploits recursions developed within the hidden (semi-)Markov literature. As an illustration, we provide an example based on the analysis of a bivariate time series of stock market returns and on criminal activities in Italian provinces.
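
The EM algorithm mentioned above relies on the classical hidden-Markov recursions; purely as an illustration, the sketch below implements only the scaled forward recursion for a two-state HMM with Student-t (heavy-tailed) emissions, with all parameters fixed by hand rather than estimated.

```python
# Illustrative sketch: scaled forward recursion for a t-emission HMM.
# Parameters are hand-picked; the talk estimates them via EM.
import numpy as np
from scipy.stats import t as student_t

def forward_loglik(y, pi, A, loc, scale, df):
    """Log-likelihood of observations y under a Student-t-emission HMM."""
    K = len(pi)
    # emission densities B[t, j] = f(y_t | state j), Student-t for robustness
    B = np.stack([student_t.pdf(y, df[j], loc=loc[j], scale=scale[j])
                  for j in range(K)], axis=1)
    alpha = pi * B[0]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for bt in B[1:]:
        alpha = (alpha @ A) * bt      # predict via transition matrix, then update
        s = alpha.sum()               # rescale each step to avoid underflow
        ll += np.log(s)
        alpha /= s
    return ll

# synthetic "returns" with a calm and a turbulent regime
rng = np.random.default_rng(2)
y = np.concatenate([0.5 * rng.standard_t(5, 300), 2.0 * rng.standard_t(5, 300)])
pi = np.array([0.5, 0.5])                       # initial state distribution
A = np.array([[0.95, 0.05], [0.05, 0.95]])      # persistent regimes
print(forward_loglik(y, pi, A, loc=[0.0, 0.0], scale=[0.5, 2.0], df=[5.0, 5.0]))
```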

 

Tuesday, 27.11.2018, 12-1 pm - Room: W9-109

Kevin Tierney
Universität Bielefeld

Hyper-Reactive Tabu Search for MaxSAT

Local search metaheuristics have been developed as a general tool for solving hard combinatorial search problems. However, in practice, metaheuristics very rarely work straight out of the box. An expert is frequently needed to experiment with the method, tweak parameters, remodel the problem, and adjust search concepts to achieve a reasonably effective approach. Reactive search techniques aim to liberate the user from having to manually tweak all of the parameters of their approach. In this talk, we propose a hyper-parameterized tabu search approach that dynamically adjusts key parameters of the search using a learned strategy. Experiments on MaxSAT show that this approach can lead to state-of-the-art performance without any expert user involvement, even when the metaheuristic knows nothing more about the underlying combinatorial problem than how to evaluate the objective function.
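
The following sketch is a generic reactive tabu search on a toy MaxSAT instance, not the hyper-parameterized strategy from the talk: it adapts a single parameter, the tabu tenure, whenever the search revisits an assignment, and all constants are arbitrary.

```python
# Generic reactive tabu search for MaxSAT on a toy instance; illustrative only.
import random

def num_satisfied(clauses, assign):
    # a clause is a list of ints: +v means variable v is true, -v its negation
    return sum(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def tabu_maxsat(clauses, n_vars, iters=2000, seed=0):
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]
    best, best_assign = num_satisfied(clauses, assign), assign[:]
    tenure, tabu, seen = 3, {}, set()
    for it in range(iters):
        # evaluate all single-variable flips; skip tabu moves unless they
        # beat the best solution found so far (aspiration criterion)
        candidates = []
        for v in range(n_vars):
            assign[v] = not assign[v]
            score = num_satisfied(clauses, assign)
            assign[v] = not assign[v]
            if tabu.get(v, -1) < it or score > best:
                candidates.append((score, v))
        v = max(candidates)[1] if candidates else rng.randrange(n_vars)
        assign[v] = not assign[v]
        score = num_satisfied(clauses, assign)
        tabu[v] = it + tenure
        # "reactive" part: lengthen the tenure when a configuration repeats
        key = tuple(assign)
        tenure = min(tenure + 1, 20) if key in seen else max(tenure - 1, 2)
        seen.add(key)
        if score > best:
            best, best_assign = score, assign[:]
    return best, best_assign

clauses = [[1, 2], [-1, 3], [-2, -3], [1, -3]]   # toy instance over 3 variables
best, _ = tabu_maxsat(clauses, n_vars=3)
print(best, "of", len(clauses), "clauses satisfied")
```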

 

Tuesday, 11.12.2018, 12-1 pm - Room: W9-109

Dorian Tsolak
Universität Bielefeld

Authorship verification with sparse autoencoders

Authorship verification is the problem of identifying the author of an unknown document, given known documents of one candidate author. Well-established models in digital forensics for authorship verification are often based on distance functions only, and only recently have attempts been made to innovate the field by harnessing the advancements in neural network research. This talk features the first (to the best of my knowledge) application of an autoencoder to the task of authorship verification. It discusses the current state of authorship verification and which needs of the research field can be tackled by employing neural networks. A sparse autoencoder is presented as one possible pathway. I will discuss how the data and the model have to be adjusted for the problem at hand, and results of this first approach are presented.
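
To make the idea concrete, here is a minimal numpy sketch of a sparse autoencoder (L1 penalty on the hidden code) trained on character-frequency vectors of "known-author" toy texts; using the reconstruction error of a query document as a verification score is my assumption, not necessarily the talk's setup.

```python
# Minimal sparse autoencoder on character-frequency document vectors;
# toy data and a hypothetical verification heuristic, not the talk's model.
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def char_freq(text):
    # normalized character-frequency vector as a crude stylometric feature
    v = np.array([text.lower().count(c) for c in ALPHABET], dtype=float)
    return v / max(v.sum(), 1.0)

class SparseAE:
    """One-hidden-layer autoencoder with an L1 penalty on the hidden code."""
    def __init__(self, d, h, lam=1e-3, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1, self.b1 = rng.normal(0, 0.1, (d, h)), np.zeros(h)
        self.W2, self.b2 = rng.normal(0, 0.1, (h, d)), np.zeros(d)
        self.lam, self.lr = lam, lr

    def forward(self, X):
        pre = X @ self.W1 + self.b1
        H = np.maximum(pre, 0.0)                  # ReLU code
        return pre, H, H @ self.W2 + self.b2      # reconstruction

    def step(self, X):
        n = len(X)
        pre, H, Xh = self.forward(X)
        d_out = 2.0 * (Xh - X) / n                              # MSE gradient
        dH = d_out @ self.W2.T + self.lam * np.sign(H) / n      # + L1 sparsity
        d_pre = dH * (pre > 0)                                  # ReLU gradient
        self.W2 -= self.lr * (H.T @ d_out); self.b2 -= self.lr * d_out.sum(0)
        self.W1 -= self.lr * (X.T @ d_pre); self.b1 -= self.lr * d_pre.sum(0)

def recon_error(ae, text):
    x = char_freq(text)[None]
    return float(np.mean((ae.forward(x)[2] - x) ** 2))

known = ["to be or not to be that is the question",
         "whether tis nobler in the mind to suffer"]   # "known author" toy texts
query = "colorless green ideas sleep furiously"
X = np.stack([char_freq(t) for t in known])
ae = SparseAE(d=X.shape[1], h=8)
for _ in range(2000):
    ae.step(X)
# lower reconstruction error suggests the query matches the training style
print(recon_error(ae, known[0]), recon_error(ae, query))
```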

 

Tuesday, 08.01.2019, 12-1 pm - Room: W9-109

Current research areas at the ZeSt

 

Tuesday, 22.01.2019, 12-1 pm - Room: W9-109

Katrin Madjar, M.Sc.
Technische Universität Dortmund

Borrowing information across multiple cancer cohorts in sparse Cox models

In cancer research, important objectives are the prediction of a patient's risk based on molecular measurements such as gene expression data and the identification of new prognostic biomarkers (e.g. genes). This is often challenging because patient cohorts are typically small and can be heterogeneous with regard to the relationship between predictors and outcome. In this context, we propose a frequentist and a Bayesian approach for gene expression data with survival outcome, which select the important predictors (genes) and provide a separate risk prediction model for each cohort, while at the same time allowing information to be shared between cohorts to increase power. The first approach is a frequentist Cox model with lasso penalty for variable selection and a weighted version of the Cox partial likelihood that includes patients of all cohorts but assigns them individual weights based on their cohort affiliation. Patients who fit well to the cohort of interest receive higher weights in the cohort-specific model. The other approach is a Bayesian Cox model with a Bayesian variable selection prior. We assume a network that links genes within and across different cohorts. Network information is incorporated into variable selection to help identify pathways of functionally related genes and genes that are simultaneously prognostic in different subgroups. We apply both approaches to simulated data and real lung cancer cohorts and compare their performance against a standard subgroup model based only on the data of the cohort of interest, and a standard combined model that simply pools all cohorts.
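
As a rough sketch of the weighted frequentist approach, the snippet below fits a lasso-penalized Cox model with the lifelines library on entirely synthetic data, pooling two cohorts but down-weighting patients outside the target cohort; the fixed 1.0/0.5 weights are placeholders for the individually derived weights described above.

```python
# Hypothetical sketch of a weighted, lasso-penalized Cox model (lifelines);
# synthetic data and placeholder weights, not the talk's actual method.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(3)
n = 200
df = pd.DataFrame(rng.normal(size=(n, 5)), columns=[f"g{i}" for i in range(5)])
df["cohort"] = rng.integers(0, 2, n)
df["T"] = rng.exponential(np.exp(-0.8 * df["g0"]))   # only g0 is truly prognostic
df["E"] = 1                                          # all events observed, for simplicity
df["w"] = np.where(df["cohort"] == 0, 1.0, 0.5)      # target cohort 0; others down-weighted

cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)       # lasso penalty for variable selection
cph.fit(df.drop(columns="cohort"), duration_col="T", event_col="E", weights_col="w")
print(cph.params_)                                   # coefficients shrink except g0
```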

 

Tuesday, 05.02.2019, 12-1 pm - Room: W9-109

Prof. Dr. Carsten Jentsch
Technische Universität Dortmund

Asymptotic Theory and Bootstrap Inference for weak VARs and weak Proxy SVARs

In Brüggemann, Jentsch & Trenkler (2016), we consider a framework for asymptotically valid inference in stable vector autoregressive (VAR) models when the innovations are uncorrelated, but not independent. This setup is referred to as a weak VAR model. We provide asymptotic theory for weak VARs under strong mixing conditions on the innovations and prove a joint central limit theorem for the LS estimators of the VAR coefficients and the variance parameters of the innovations. Our results allow for asymptotically correct inference on statistics that depend on both VAR coefficients and variance parameters of the innovations, such as structural impulse response functions (IRFs).

To identify structural shocks in VARs, proxy structural VARs (proxy SVARs) use external proxy variables that are correlated with the structural shocks of interest, but uncorrelated with other structural shocks. In Jentsch & Lunsford (2016), we extend the results from weak VARs to weak proxy SVARs and provide asymptotic theory for the case where the VAR innovations and proxy variables are jointly strong mixing.

As inference based on the normal approximation is cumbersome due to the complicated limiting variance, bootstrap methods are commonly used. In Brüggemann et al. (2016), we showed that (residual-based) wild and pairwise bootstrap schemes are generally inappropriate for inference on (functions of) the variance parameters of the innovations if the VAR innovations are not independent. As discussed in Jentsch & Lunsford (2018+), this bootstrap inconsistency result translates directly to proxy SVARs. Hence, the wild bootstrap as advocated by Mertens & Ravn (2013) to produce confidence intervals for the IRFs in proxy SVARs is not appropriate, and simulations show that its coverage rates for IRFs can be much too low. In contrast, we propose a residual-based moving block bootstrap (MBB) and prove its consistency for inference on statistics that depend jointly on the VAR coefficients and on the covariances of the VAR innovations and proxy variables. Using the MBB to re-estimate confidence intervals for the IRFs in Mertens & Ravn (2013), we show that inference cannot be made about the effects of tax changes on output, labor, or nonresidential investment.
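
As a minimal illustration of a residual-based moving block bootstrap for a VAR (toy bivariate data, arbitrary block length, and not the papers' implementation), consider the following sketch.

```python
# Illustrative sketch: residual-based moving block bootstrap (MBB) for a VAR(1).
# Toy data and tuning constants; not the implementation from the papers.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(4)
T, ell, reps = 300, 10, 200                  # sample size, block length, bootstrap draws

A = np.array([[0.5, 0.1], [0.0, 0.4]])       # toy bivariate VAR(1)
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = y[t - 1] @ A.T + rng.standard_normal(2)

res = VAR(y).fit(1)
u = res.resid - res.resid.mean(0)            # centred residuals
irf_draws = []
for _ in range(reps):
    # resample overlapping residual blocks of length ell and concatenate
    starts = rng.integers(0, len(u) - ell + 1, size=int(np.ceil(len(u) / ell)))
    u_star = np.concatenate([u[s:s + ell] for s in starts])[:len(u)]
    # rebuild the series recursively from the fitted VAR coefficients
    y_star = np.zeros_like(y)
    y_star[0] = y[0]
    for t in range(1, T):
        y_star[t] = res.intercept + y_star[t - 1] @ res.coefs[0].T + u_star[t - 1]
    irf_draws.append(VAR(y_star).fit(1).irf(10).irfs)

lo, hi = np.percentile(irf_draws, [2.5, 97.5], axis=0)
print("95% band, response of y1 to shock 1 at horizon 1:", lo[1, 0, 0], hi[1, 0, 0])
```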

