This 3-day course teaches skills for informative and transparent evaluation of theory-based hypotheses using p-values, information criteria, and Bayes factors. Participants will learn to apply open-science principles and cutting-edge statistical techniques that ensure maximally informative analyses. Contemporary issues - such as publication bias, questionable research practices, the statistical evaluation of (non-null) hypotheses, and methods for evaluating the same research question with multiple (replication) studies - are also addressed. The course will be non-technical in nature and is targeted at PhD students and researchers who want to apply the presented approaches to their own data.
The evaluation of hypotheses is a core feature of research in the behavioral, social, and biomedical sciences. Over the last decade, the replication crisis drew a lot of attention to high rates of false-positive results of hypothesis tests in the literature, caused by problems such as publication bias (selective publication of positive results), ‘questionable research practices’, and statistical misconceptions. This course will give participants a non-technical introduction to the theoretical basis of hypothesis evaluation from different perspectives (frequentist, information-theoretic, and Bayesian) and teach how to apply hypothesis evaluation appropriately to avoid fooling oneself and others.
The course is targeted at students and researchers who want to improve their understanding of how to evaluate theory-based hypotheses.
The first day of the course will cover classic null-hypothesis significance testing (NHST), common problems with NHST (misconceptions, questionable research practices, publication bias), open-science practices to avoid these problems, and how to draw more informative inferences with equivalence testing.
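To give a flavor of equivalence testing (this is an illustration, not course material), the sketch below implements the two one-sided tests (TOST) procedure for a single mean, using a normal approximation for simplicity; the equivalence bounds and simulated data are made up for the example.

```python
import math
import random

def tost_one_sample(x, low, high):
    """TOST for a single mean: rejecting both 'mu <= low' and
    'mu >= high' supports the conclusion that the mean lies inside
    the equivalence bounds (low, high). Uses a normal approximation,
    which is reasonable for large samples."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / (n - 1)
    se = math.sqrt(var / n)

    def p_greater(z):  # P(Z > z) under the standard normal
        return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

    p_lower = p_greater((mean - low) / se)         # H0a: mu <= low
    p_upper = 1.0 - p_greater((mean - high) / se)  # H0b: mu >= high
    return max(p_lower, p_upper)                   # the TOST p-value

# Simulated data whose true mean (0) lies well inside the bounds:
random.seed(1)
x = [random.gauss(0.0, 1.0) for _ in range(200)]
p = tost_one_sample(x, low=-0.3, high=0.3)  # a small p supports equivalence
```

A small TOST p-value lets one conclude that the effect is practically negligible, which is more informative than a non-significant NHST result.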
The second day will focus on hypothesis evaluation using model selection. Model selection provides an alternative to the dichotomous decisions that are the default in NHST and allows more nuanced inferences. Two types will be discussed: information-theoretic model selection, that is, model selection using information criteria, and Bayesian model selection (BMS). For both types, the focus will be on informative, theory-based hypotheses (as opposed to null hypothesis testing). The methods that will be covered include the AIC-type criteria called the GORIC and GORICA, GORIC(A) weights, Bayes factors, and posterior model probabilities.
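To give a flavor of how model-selection weights work (an illustration with made-up values, not course material), the sketch below converts hypothetical AIC values into Akaike weights; GORIC(A) weights apply the same transformation to GORIC(A) values.

```python
import math

def akaike_weights(aics):
    """Convert a list of AIC values into model weights that sum to 1;
    each weight can be read as the relative support for that model
    within the candidate set."""
    best = min(aics)
    deltas = [a - best for a in aics]          # differences from the best model
    rel = [math.exp(-d / 2.0) for d in deltas]  # relative likelihoods
    total = sum(rel)
    return [r / total for r in rel]

# Three hypothetical models; lower AIC means better fit-complexity trade-off:
weights = akaike_weights([100.0, 102.0, 110.0])
```

Instead of a binary reject/retain decision, the weights quantify how strongly the data support each model relative to the others.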
The third day of the course will address informative hypothesis evaluation for multiple (replication) studies, including both direct and conceptual replications. Attention will be paid to (among others) hypothesis updating and combining evidence from multiple studies addressing the same research question (using both GORICA and BMS).
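As a hypothetical illustration of combining evidence (the course covers the GORICA- and BMS-based approaches in detail), Bayes factors from independent studies comparing the same pair of hypotheses can be multiplied, which amounts to sequential Bayesian updating; with equal prior model probabilities, the combined Bayes factor converts directly into a posterior model probability.

```python
def combine_bayes_factors(bfs):
    """Multiply Bayes factors from independent studies that compare the
    same pair of hypotheses (sequential Bayesian updating)."""
    total = 1.0
    for bf in bfs:
        total *= bf
    return total

def posterior_model_probability(bf, prior_odds=1.0):
    """Posterior probability of H1 versus H2, given the Bayes factor
    BF_12 and the prior odds P(H1)/P(H2)."""
    posterior_odds = bf * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# Three hypothetical studies, each mildly favoring H1 over H2:
bf_total = combine_bayes_factors([2.0, 1.5, 3.0])
pmp = posterior_model_probability(bf_total)
```

Note that three individually modest Bayes factors can jointly provide strong evidence, which is one motivation for evaluating hypotheses across multiple studies.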
Each day consists of lectures, including small hands-on sessions, and ends with a lab meeting, where there is also room to work on your own data.
A nice prequel to our course is the 4-day summer school course ‘Open and Reproducible Science’.
This summer school course provides intensive training on open-science methods such as preregistration, reproducible analyses, and FAIR data sharing for researchers in the social and behavioural sciences. This course takes place in the week prior to our course.
Among our Methodology and Statistics postgraduate courses, there is a course that also addresses Bayesian methods: ‘A gentle introduction to Bayesian Estimation’.
This course focuses on Bayesian statistics in terms of prediction (focusing on SEM) but does not cover hypothesis testing or Bayes factors (which our course does).
This course also takes place in the week prior to our course and may be a nice prequel if your focus is on Bayesian methods in general.
Participants are requested to bring their own laptop computer. Software will be available online.
Dr. Rebecca Kuiper and Anne Scheel
The course will be non-technical in nature and is targeted at students and researchers who want to use the approaches presented for the evaluation of their own data. The participants can come from a variety of fields, such as sociology, psychology, education, human development, marketing, business, biology, medicine, political science, and communication sciences.
Aim of the course
After attending the course,
- you understand the epistemological basis of hypothesis testing, as well as the frequentist, Bayesian, and information-theoretic approaches to evaluating hypotheses statistically,
- you have obtained the practical skills needed to evaluate hypotheses with these approaches,
- you can use open-science practices to conduct and report your research transparently and to reduce the risk of error and bias,
- you have learned how to evaluate replication studies,
- you have learned how to combine the evidence from multiple diverse studies.
3 full days:
Each day consists of lectures including small hands-on exercises and ends with a lab meeting (where there is also room to work on your own data).
You will receive a certificate upon course completion. Please be aware that this course does not include graded activities, and therefore we cannot provide a transcript of grades.
The tuition fee for PhD students from the Faculty of Social and Behavioural Sciences at Utrecht University will be funded by the Graduate School of Social and Behavioural Sciences.
There are no scholarships available for this course.
We also offer tailor-made M&S courses and in-house M&S training. If you would like to explore the possibilities, please contact us at email@example.com
The housing fee covers accommodation plus a Utrecht Summer School sleeping bag, which is yours to keep. The sleeping bag comes with an inflatable pillow and a mattress cover. If you wish to bring your own bedding, please contact us so we can give you a EUR 50 discount on the housing fee. Please note that individual bedding items cannot be purchased.
Team M&S Summer School | E: firstname.lastname@example.org