Potsdam Mind Research Repository (PMR2)

Reproducible Research

The Potsdam Mind Research Repository (PMR2) provides access to peer-reviewed publications along with data and scripts for analyses and figures reported in them. We refer to these units as "paper packages." We hope to achieve the following goals:

• Document data and analyses used in our publications in a public forum.
• Invite readers (a) to reproduce our analyses/figures, (b) to try out and possibly publish alternative analyses, or (c) to adopt our scripts for their own data.
• Receive feedback about our scripts, both corrections of errors and suggestions for more elegant alternative code.

Here are a few proposals for how we plan to manage and grow the site during its start-up period:

• We will add new scripts to a paper package if they correct our analyses or if they provide new results through an alternative analysis. Indeed, alternative analyses may lead to new publications and we will be glad to include these as new paper packages.
• We invite colleagues to submit paper packages if they fit the theme of our research topics. In general, our expectation is that other groups may follow suit, organizing content according to their own research program.
• Eventually, we may provide comment fields for paper packages, at least for some of them, or open a blog to facilitate exchange about the paper packages. For now, we ask that such questions be directed to the authors' email addresses.

We will explore whether this site can serve as a repository for experimental results that were not published because they did not turn out as expected, assuming that there were no technical or other obvious reasons for the failure of the experiment. Making such data available in the context of research that did yield the desired results may inspire others to take a new look. Perhaps this way we can (slightly) reduce the problem associated with the well-known bias for publishing positive results.

With the "R2" in the acronym PMR2 and with the term "paper package" we give credit to The R Project for Statistical Computing (http://www.r-project.org/) and its package archive CRAN. We model this site on the collaborative spirit of CRAN, which has served as our prime source of inspiration for how transparency and progress can be implemented as Open Science. With this site we hope to import this collaborative spirit of openness, sharing, and support into a few of the many areas of mind and brain research.

Moreover, most of the analysis scripts and data frames are based on R. We make use of many of the opportunities afforded by the numerous contributors to this computing environment. As representatives of all the contributors to R, we single out the authors of two packages on which almost all our scripts rely. We acknowledge the contribution of Douglas Bates and Martin Maechler, two members of the R Core Group and the authors of the lme4 package (and the Matrix package, among others). We rely on their work for statistical inferences about our results on a daily basis. For graphics we rely primarily on Hadley Wickham's ggplot2 package, which sets new standards for the ease of displaying experimental results.

Update (October 2011): We recently became aware of several initiatives, some of them long-standing, in the context of Reproducible Research. Momentum appears to be building in favor of this perspective on a new culture in science. Here are some very informative links on this topic:

- NSF (2011). Changing the conduct of science in the information age (http://www.nsf.gov/pubs/2011/oise11003/)

- AAAS (2011). The digitization of science: Reproducibility and interdisciplinary knowledge transfer (http://www.stanford.edu/~vcs/AAAS2011/)

Update (July 2014): We are using two RStudio services for further dissemination of our reproducible research:

- RPubs, based on R Markdown: commented R scripts for publications (e.g., http://rpubs.com/Reinhold/17313) or tutorial R scripts (e.g., http://rpubs.com/Reinhold/22193)

- Shiny: e.g., a tutorial on shrinkage in linear mixed models: https://pmr2.shinyapps.io/Shrinkage/

Our research and the construction of this site have been funded by a European Collaborative Research Project (ESF 05_ECRP-FP006, 2006-2009) and a German Research Foundation Research Group (DFG FOR868, since 2008).

Last Updated on Friday, 25 July 2014 10:57

Matuschek et al. (2017). Balancing Type I Error and Power in Linear Mixed Models

Matuschek, H., Kliegl, R., Vasishth, S., Baayen, H., & Bates, D. (2017). Balancing Type I error and power in linear mixed models. Journal of Memory and Language, 94, 305-315.

Hannes Matuschek, Reinhold Kliegl, Shravan Vasishth, Harald Baayen, Douglas Bates

Abstract: Linear mixed-effects models (LMMs) have increasingly replaced mixed-model analyses of variance for statistical inference in factorial psycholinguistic experiments. The advantages of LMMs over ANOVAs, however, come at a cost: Setting up an LMM is not as straightforward as running an ANOVA. One simple option, when numerically possible, is to fit the full variance-covariance structure of random effects (the maximal model; Barr et al., 2013), presumably to keep Type I error down to the nominal α in the presence of random effects. Although it is true that fitting a model with only random intercepts may lead to higher Type I error, fitting a maximal model also has a cost: it can lead to a significant loss of power. We demonstrate this with simulations and suggest that for typical psychological and psycholinguistic data, models with a random-effects structure that is supported by the data have optimal Type I error and power properties.
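The distinction between a maximal and a reduced random-effects specification can be sketched in lme4 formula syntax. The following is an illustrative sketch using lme4's built-in sleepstudy data, not the paper's materials:

```r
library(lme4)

# Maximal model: correlated random intercepts and slopes for Days by Subject
m_max <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)

# Reduced "zero-correlation-parameter" model: the double-bar syntax drops the
# intercept-slope correlation from the random-effects structure
m_zcp <- lmer(Reaction ~ Days + (Days || Subject), sleepstudy)

# Likelihood-ratio test: is the correlation parameter supported by the data?
anova(m_zcp, m_max)
```

If the likelihood-ratio test does not favor the maximal model, the reduced model is preferred; this is the sense in which a random-effects structure "supported by the data" balances Type I error and power.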

Also available on arXiv.

MatuschekPreprint2015.zip (18307 Kb, 16/02/2017 15:02)
manuscript.pdf (1945 Kb, 28/07/2016 14:56)
Last Updated on Thursday, 20 June 2019 18:21

Schad et al. (2019, JML). How to capitalize on a priori contrasts in linear (mixed) models: A tutorial

Schad, D., Hohenstein, S., Vasishth, S., & Kliegl, R. (2019). How to capitalize on a priori contrasts in linear (mixed) models: A tutorial. Journal of Memory and Language.

Abstract. Factorial experiments in research on memory, language, and in other areas are often analyzed using analysis of variance (ANOVA). However, for effects with more than one numerator degree of freedom, e.g., for experimental factors with more than two levels, the ANOVA omnibus F-test is not informative about the source of a main effect or interaction. Because researchers typically have specific hypotheses about which condition means differ from each other, a priori contrasts (i.e., comparisons planned before the sample means are known) between specific conditions or combinations of conditions are the appropriate way to represent such hypotheses in the statistical model. Many researchers have pointed out that contrasts should be “tested instead of, rather than as a supplement to, the ordinary ‘omnibus’ F test” (Hays, 1973, p. 601). In this tutorial, we explain the mathematics underlying different kinds of contrasts (i.e., treatment, sum, repeated, polynomial, custom, nested, interaction contrasts), discuss their properties, and demonstrate how they are applied in the R System for Statistical Computing (R Core Team, 2018). In this context, we explain the generalized inverse, which is needed to compute the coefficients for contrasts that test hypotheses not covered by the default set of contrasts. A detailed understanding of contrast coding is crucial for successful and correct specification in linear models (including linear mixed models). Contrasts defined a priori yield far more useful confirmatory tests of experimental hypotheses than the standard omnibus F-test.
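As a minimal sketch of the generalized-inverse step described in the abstract (variable names are illustrative, not taken from the paper's scripts): each row of a hypothesis matrix states one a priori comparison among condition means, and the contrast matrix assigned to the factor is the generalized inverse of that hypothesis matrix.

```r
set.seed(1)
f <- factor(rep(c("c1", "c2", "c3"), each = 20))
y <- rnorm(60, mean = rep(c(10, 12, 15), each = 20))

# Hypothesis matrix: each row encodes one a priori comparison of condition means
Hm <- rbind(c2_vs_c1 = c(-1, 1, 0),
            c3_vs_c1 = c(-1, 0, 1))

# Contrast matrix = generalized inverse of the hypothesis matrix
Cm <- MASS::ginv(Hm)
rownames(Cm) <- levels(f)
colnames(Cm) <- rownames(Hm)
contrasts(f) <- Cm

# With balanced data, the slope estimates recover exactly the two mean differences
m <- lm(y ~ f)
coef(m)
```

The same contrast matrix can be assigned before fitting a linear mixed model; the point is that the coefficients then test the comparisons specified a priori rather than the default treatment contrasts.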

This is the author version of a paper to appear in the Journal of Memory and Language.

Schad_etal.Contrasts.JML.2019.pdf (3142 Kb, 21/07/2019 09:01)
mixedDesign.v0.6.3.R (17 Kb, 14/07/2019 06:36)
SchadEtAlJML2019.R (24 Kb, 21/07/2019 08:58)
Last Updated on Sunday, 21 July 2019 09:15

Laubrock & Kliegl (2015). The eye-voice span during reading aloud

Laubrock, J., & Kliegl, R. (2015). The eye-voice span during reading aloud. Frontiers in Psychology.

Although eye movements during reading are modulated by cognitive processing demands, they also reflect visual sampling of the input, and possibly preparation of output for speech or the inner voice. By simultaneously recording eye movements and the voice during reading aloud, we obtained an output measure that constrains the length of time spent on cognitive processing. Here we investigate the dynamics of the eye-voice span (EVS), the distance between eye and voice. We show that the EVS is regulated immediately during fixation of a word by either increasing fixation duration or programming a regressive eye movement against the reading direction. EVS size at the beginning of a fixation was positively correlated with the likelihood of regressions and refixations. Regression probability was further increased if the EVS was still large at the end of a fixation: if adjustment of fixation duration did not sufficiently reduce the EVS during a fixation, then a regression rather than a refixation followed with high probability. We further show that the EVS can help explain cognitive influences on fixation duration during reading: in mixed-model analyses, the EVS was a stronger predictor of fixation durations than either word frequency or word length. The EVS modulated the influence of several other predictors on single fixation durations (SFDs). For example, word N frequency effects were larger with a large EVS, especially when word N−1 frequency was low. Finally, a comparison of SFDs during oral and silent reading showed that reading is governed by similar principles in both reading modes, although EVS maintenance and articulatory processing also cause some differences. In summary, the EVS is regulated by adjusting fixation duration and/or by programming a regressive eye movement when the EVS gets too large. Overall, the EVS appears to be directly related to updating of the working memory buffer during reading.

Laubrock_Kliegl.fpsyg.2015.pdf (article, 5988 Kb, 19/06/2019 23:46)
Last Updated on Wednesday, 19 June 2019 23:46