
Winter semester 2013/14

Tuesday, 5 November 2013, 12:00-13:00 - Room: W9-109

Dr. Tobias Koch
Freie Universität Berlin

Analyzing the stability and change of traits and different method effects in longitudinal multitrait-multirater designs

The advantages of longitudinal multitrait-multimethod (multirater) measurement designs (MTMM-MO) are well known in psychology. With MTMM-MO measurement designs, researchers are able to investigate the convergent and discriminant validity among multiple methods (e.g., raters) on each measurement occasion as well as across different measurement occasions. Currently, the most common way to analyze MTMM data is via structural equation models (SEMs). MTMM-SEMs offer many advantages, including the possibility (1) to test theoretical assumptions, (2) to separate different variance components, (3) to explicitly model measurement error, and (4) to relate method effects to external variables (Eid, 2000; Eid, 2003). Eid (2008) clarified the conceptual and methodological differences between measurement designs with fixed (i.e., structurally different) methods, random (i.e., interchangeable) methods, and a mix of both types of methods, and proposed three different MTMM-SEMs for these cross-sectional MTMM designs. Specifically, fixed methods (or raters) stem from different method "populations" and thus cannot easily be replaced by one another (e.g., self-reports, parent reports, physiological measures). In contrast, random methods (raters) are randomly sampled from a single method (rater) distribution and can therefore be regarded as interchangeable (e.g., multiple peer reports for a student). In this talk, I will illustrate how different longitudinal multilevel MTMM-SEMs can be defined for measurement designs with fixed, random, and a mix of both types of methods. In total, four different models will be proposed: (1) a latent state version (LS-COM model), (2) a latent change version (LC-COM model), (3) a latent state-trait version (LST-COM model), and (4) a latent growth curve version (LGC-COM model). The statistical performance of the models is scrutinized in extensive simulation studies. Finally, the advantages and limitations of the models are discussed and practical guidelines for modeling complex longitudinal MTMM data are provided.
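
To make the general idea concrete, a minimal cross-sectional MTMM confirmatory factor model with correlated trait factors and a single method factor could be specified in R with the lavaan package roughly as follows. This is only an illustrative sketch with hypothetical data frame and variable names, not the LS-COM/LST-COM models presented in the talk.

    # Sketch of a simple MTMM CFA (self- and peer report on three traits);
    # 'mtmm_data' and the indicator names are hypothetical.
    library(lavaan)

    model <- '
      # trait factors, each indicated by a self- and a peer report
      T_extra =~ extra_self + extra_peer
      T_agree =~ agree_self + agree_peer
      T_consc =~ consc_self + consc_peer

      # common method factor for the peer reports (self-report as reference method)
      M_peer =~ extra_peer + agree_peer + consc_peer

      # method factor assumed uncorrelated with the trait factors
      M_peer ~~ 0*T_extra + 0*T_agree + 0*T_consc
    '

    fit <- cfa(model, data = mtmm_data)
    summary(fit, fit.measures = TRUE, standardized = TRUE)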

 

Tuesday, 12 November 2013, 12:00-13:00 - Room: W9-109

This talk has unfortunately been cancelled.

Dr. Christian Schellhase
Universität Bielefeld

Flexible Pair-Copula Estimation in D-vines using Bivariate Penalized Splines

The talk presents a new method for flexible fitting of D-vines. Pair-copulas are estimated semi-parametrically using penalized Bernstein polynomials or constant and linear B-splines, respectively, as spline bases in each node of the D-vine throughout each level. A penalty induces smoothness of the fit, while the high-dimensional spline basis guarantees flexibility. To ensure uniform univariate margins of each pair-copula, linear constraints are placed on the spline coefficients and quadratic programming is used to fit the model. The amount of penalization for each pair-copula is driven by a penalty parameter, which is selected in a numerically efficient way. Simulations and practical examples accompany the presentation.
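
As background (not part of the abstract): a D-vine decomposes a d-dimensional density into marginal densities and bivariate pair-copulas, one for each pair of variables per level, with conditional distribution functions as arguments:

\[
f(x_1,\dots,x_d) \;=\; \prod_{k=1}^{d} f_k(x_k)\;
\prod_{j=1}^{d-1}\prod_{i=1}^{d-j}
c_{i,\,i+j \mid i+1,\dots,i+j-1}\bigl(F(x_i \mid x_{i+1},\dots,x_{i+j-1}),\; F(x_{i+j} \mid x_{i+1},\dots,x_{i+j-1})\bigr).
\]

Each of these pair-copula densities is the object that the talk fits with penalized splines.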

 

Tuesday, 26 November 2013, 12:00-13:00 - Room: W9-109

Dipl.-Psych. Kristian Kleinke (BSocSc Hons)
Universität Bielefeld

countimp 1.0 - A Multiple Imputation Package for Incomplete Count Data

Special data types like count data require special analysis and imputation techniques. Yet, currently available multiple imputation tools are very limited with regard to count data. The countimp package was developed to provide powerful and easy-to-use multiple imputation (MI) procedures for incomplete count data. Our imputation functions work as an add-on for the popular and powerful mice package in R and can be called directly by mice. The package supports imputation of ordinary count data (using a Poisson model), imputation of incomplete overdispersed count data (using a quasi-Poisson or negative binomial model), imputation of zero-inflated ordinary or overdispersed count data (ZIP and ZINB models), and finally, imputation of various kinds of multilevel count data.
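
As a hedged usage sketch: because countimp plugs into mice, an imputation run might look roughly like the following. The data frame, its columns, and in particular the method keyword "pois" are assumptions for illustration, not taken from the countimp documentation.

    # 'dat' is assumed to be a data frame whose count variable 'y' has missing values
    library(mice)
    library(countimp)

    ini  <- mice(dat, maxit = 0)    # dry run to obtain the default method vector
    meth <- ini$method
    meth["y"] <- "pois"             # assumed keyword for countimp's Poisson imputation
    imp  <- mice(dat, method = meth, m = 5, seed = 1234)

    # analyze each completed data set and pool the results (Rubin's rules)
    fit <- with(imp, glm(y ~ x1 + x2, family = poisson))
    pool(fit)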

 

Tuesday, 10 December 2013, 12:00-13:00 - Room: W9-109

Dr. Christian Schellhase
Universität Bielefeld

Flexible Pair-Copula Estimation in D-vines using Bivariate Penalized Splines

The talk presents a new method for flexible fitting of D-vines. Pair-copulas are estimated semi-parametrically using penalized Bernstein polynomials or constant and linear B-splines, respectively, as spline bases in each node of the D-vine throughout each level. A penalty induces smoothness of the fit, while the high-dimensional spline basis guarantees flexibility. To ensure uniform univariate margins of each pair-copula, linear constraints are placed on the spline coefficients and quadratic programming is used to fit the model. The amount of penalization for each pair-copula is driven by a penalty parameter, which is selected in a numerically efficient way. Simulations and practical examples accompany the presentation.
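
Schematically, and only as a hedged rendering of the construction described above (notation not taken from the paper), each pair-copula density is expanded in a bivariate spline basis and the coefficients are estimated by maximizing a penalized log-likelihood:

\[
c(u,v) \;\approx\; \sum_{k=1}^{K}\sum_{l=1}^{K} v_{kl}\,\phi_k(u)\,\phi_l(v),
\qquad
\ell_p(\mathbf{v}) \;=\; \ell(\mathbf{v}) \;-\; \frac{\lambda}{2}\,\mathbf{v}^{\top}\mathbf{P}\,\mathbf{v},
\]

where the \(\phi_k\) are Bernstein polynomials or B-splines, \(\mathbf{P}\) is a penalty matrix, \(\lambda\) is the penalty parameter, and the coefficients \(v_{kl}\) satisfy the linear constraints that keep the univariate margins of the pair-copula uniform.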

 

Tuesday, 7 January 2014, 12:00-13:00 - Room: W9-109

Dipl.-Vw. Christian Heinze
Universität Bielefeld

Filtering and likelihood evaluation in "partially specified" state space models

The talk presents algorithms for filtering and likelihood evaluation in "partially specified" state space models, so-called descriptor models. State space modeling provides a framework to describe (spatio-)temporal phenomena and is general enough to encompass many common linear time series models. Thus, theory and algorithms for this model class apply to a host of application-relevant cases. As the name suggests, such modeling centers on a sequence of state vectors x_1, ..., x_N whose dynamics obey an autoregressive state equation. The state vectors x_t are not directly observable, but only in aggregated and error-contaminated form y_t. An observation equation provides the link between the states x_t and the observations y_t. Together, these two equations specify a state space model. The prime tool for forecasting and (Gaussian) likelihood evaluation in linear state space models is the Kalman filter. It is available in many forms, each tuned towards the computation of different quantities, speed, and/or accuracy. Algorithms of this sort form the backbone of linear time series analysis in most common statistical software packages. In their traditional form, Kalman filter algorithms require a full specification of first and second moments: a burden when working with non-stationary processes. The talk surveys the most common extensions in unified form and concludes with an application to GDP for German counties.
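
For orientation, here is a minimal R sketch of the standard (fully specified) Kalman recursion and the prediction-error log-likelihood it yields, assuming a linear Gaussian model with known system matrices; the descriptor/partially specified extensions discussed in the talk are not shown.

    # State equation:       x_t = A x_{t-1} + w_t,  w_t ~ N(0, Q)
    # Observation equation: y_t = C x_t + v_t,      v_t ~ N(0, R)
    # y: matrix with one observation vector per column; x0, P0: initial moments.
    kalman_loglik <- function(y, A, C, Q, R, x0, P0) {
      n <- ncol(y); x <- x0; P <- P0; loglik <- 0
      for (t in seq_len(n)) {
        # prediction step
        x_pred <- A %*% x
        P_pred <- A %*% P %*% t(A) + Q
        # update step with the innovation and its covariance
        e <- y[, t, drop = FALSE] - C %*% x_pred
        S <- C %*% P_pred %*% t(C) + R
        K <- P_pred %*% t(C) %*% solve(S)      # Kalman gain
        x <- x_pred + K %*% e
        P <- P_pred - K %*% C %*% P_pred
        # accumulate the Gaussian log-likelihood (prediction-error decomposition)
        loglik <- loglik - 0.5 * (length(e) * log(2 * pi) +
                                  determinant(S, logarithm = TRUE)$modulus +
                                  t(e) %*% solve(S, e))
      }
      list(loglik = as.numeric(loglik), x = x, P = P)
    }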

 

Tuesday, 28 January 2014, 12:00-13:00 - Room: W9-109

Dipl.-Kffr. Silvia Rašković
Universität Bielefeld

Vignette analysis: using the factorial survey to examine the effect of social risk and influence factors on the probability of repurchasing counterfeit branded clothing

The talk gives insight into an experimental study of the influence of social risk and influence factors on the repurchase probability of counterfeit branded products, using the fashion industry as an example. Many studies on the deliberate purchase of counterfeits have identified social risk (i.e., the danger of embarrassment or loss of face) as an important risk dimension (e.g., Schlegelmilch & Stöttinger (1999), Jenner & Artun (2005), Veloutsou & Bian (2008)) and have recognized the importance of the social environment and its attitude towards counterfeit products (e.g., Albers-Miller (1999), Kim & Karpova (2010)). However, the case in which the social risk actually materializes (i.e., the situation in which the wearing of counterfeit branded clothing is exposed) and its resulting effect on subsequent purchases of counterfeits have not yet been studied in detail. Important aspects named in the literature for how such an embarrassment comes about are the consumer's behaviour and handling of the counterfeit product before and after its discovery (cf. Hoe et al. (2003)), the degree of accusation regarding the authenticity of the product (cf. Hoe et al. (2003), Perez et al. (2010)), and the reference person's attitude towards counterfeit branded products (cf. Albers-Miller (1999), de Matos et al. (2007)). To examine the extent to which these four factors (presentation behaviour × degree of accusation × consumer's reaction × reference person's reaction) affect the repurchase intention (purchase probability from 0% to 100%), the factorial survey, also known as vignette analysis, was chosen as the research method (cf. Alexander & Becker (1978), Becker & Opp (2001), Steiner & Atzmüller (2006), Auspurg et al. (2009), Atzmüller & Steiner (2010)). The full 2^4 factorial design (a balanced within-subjects design) is analyzed both with a repeated-measures analysis of variance and with traditional conjoint analysis.
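
A hedged R sketch of how such a 2^4 vignette plan and the repeated-measures analysis could be set up; the factor labels, the data frame 'dat', and its column names are illustrative assumptions, not taken from the study.

    # Full 2^4 factorial vignette plan (16 vignettes, rated by every respondent)
    vignettes <- expand.grid(
      presentation = c("discreet", "ostentatious"),
      accusation   = c("mild", "strong"),
      consumer     = c("admits", "denies"),
      reference    = c("tolerant", "disapproving")
    )
    nrow(vignettes)  # 16

    # 'dat' assumed in long format: one purchase-probability rating (0-100) per
    # respondent x vignette; repeated-measures ANOVA via an Error() stratum
    fit <- aov(probability ~ presentation * accusation * consumer * reference +
                 Error(respondent / (presentation * accusation * consumer * reference)),
               data = dat)
    summary(fit)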

