Pre-recorded
This two-day course covers the important and general topics of statistical model building, model evaluation, model selection, model comparison, model simplification, and model averaging. These topics are vitally important to almost every type of statistical analysis, yet they are often poorly or incompletely understood. We begin by considering the fundamental issue of how to measure model fit and a model's predictive performance, and discuss a wide range of other major model fit measurement concepts such as likelihood, log likelihood, deviance, and residual sums of squares. We then turn to nested model comparison, particularly in general and generalized linear models, and their mixed effects counterparts. We then consider the key concept of out-of-sample predictive performance, and discuss over-fitting, whereby excellent fits to the observed data can lead to very poor generalization performance. As part of this discussion of out-of-sample generalization, we introduce leave-one-out cross-validation and the Akaike Information Criterion (AIC). We then cover general concepts and methods related to variable selection, including stepwise regression, ridge regression, the Lasso, and elastic nets. Following this, we turn to model averaging, which is arguably always a preferable alternative to model selection. Finally, we cover Bayesian methods of model comparison. Here, we describe how Bayesian methods allow us to easily compare completely distinct statistical models using a common metric. We also describe how Bayesian methods allow us to fit all the candidate models of potential interest, including cases where traditional methods fail.
This course is aimed at anyone who is interested in using R for data science or statistics. R is widely used in all areas of academic scientific research, and also widely throughout the public and private sectors.
Last Updated – 23/11/2021
Duration – Approx. 15 hours
ECTS – Equal to 1 ECTS credit
Language – English
This course is aimed at anyone who is interested in advanced statistical modelling as it is practiced widely throughout academic scientific research, as well as widely throughout the public and private sectors.
Although not strictly required, using a large monitor or preferably even a second monitor will make the learning experience better, as you will be able to see my RStudio and your own RStudio simultaneously.
All the sessions will be video recorded, and made available immediately on a private video hosting website. Any materials, such as slides, data sets, etc., will be shared via GitHub.
We will assume only a minimal amount of familiarity with some general statistical and mathematical concepts. These concepts will arise when we discuss statistics and data analysis. Anyone who has taken any undergraduate (Bachelor’s) level course on (applied) statistics can be assumed to have sufficient familiarity with these concepts.
Attendees should already have experience with R and be able to read csv files, create simple plots, and manipulate data frames.
A laptop computer with a working version of R or RStudio is required. R and RStudio are both available as free and open source software for PCs, Macs, and Linux computers. R may be downloaded by following the links here https://www.r-project.org/. RStudio may be downloaded by following the links here: https://www.rstudio.com/.
All the R packages that we will use in this course can be downloaded and installed during the workshop itself as and when they are needed, and a full list of required packages will be made available to all attendees prior to the course.
A working webcam is desirable for enhanced interactivity during the live sessions, and we encourage attendees to keep their cameras on during live Zoom sessions.
PLEASE READ – CANCELLATION POLICY
Cancellations/refunds are accepted as long as the course materials have not been accessed.
There is a 20% cancellation fee to cover administration and possible bank fees.
If you need to discuss cancelling please contact oliverhooker@prstatistics.com.
If you are unsure about course suitability, please get in touch by email to find out more oliverhooker@prstatistics.com
Day 1 – Classes from 10:00 to 18:00
Topic 1: Measuring model fit. In order to introduce the general topic of model evaluation, selection, comparison, etc., it is necessary to understand the fundamental issue of how we measure model fit. Here, the concept of the conditional probability of the observed data, or of future data, is of vital importance. This is intimately related to, though distinct from, the concept of likelihood and the likelihood function, which is in turn related to the concept of the log likelihood or deviance of a model. Here, we also show how these concepts are related to residual sums of squares, root mean square error (RMSE), and deviance residuals.
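For illustration, the following minimal R sketch, using the built-in mtcars data and a purely illustrative model formula, shows how these fit measures can be computed with base R functions:

    m <- lm(mpg ~ wt + hp, data = mtcars)    # an illustrative linear model
    logLik(m)                                # log likelihood of the fitted model
    deviance(m)                              # deviance; for a linear model this equals the RSS
    rss <- sum(residuals(m)^2)               # residual sum of squares, computed directly
    rmse <- sqrt(mean(residuals(m)^2))       # root mean square error
    c(rss = rss, rmse = rmse)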
Topic 2: Nested model comparison. In this section, we cover how to do nested model comparison in general linear models, generalized linear models, and their mixed effects (multilevel) counterparts. First, we precisely define what is meant by a nested model. Then we show how nested model comparison can be accomplished in general linear models with F tests, which we will also discuss in relation to R^2 and adjusted R^2. In generalized linear models, and in mixed effects models, we can accomplish nested model comparison using deviance-based chi-square tests via Wilks's theorem.
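As a brief sketch of how such comparisons are typically carried out in R (the models below are illustrative, not the course's own examples):

    m0 <- lm(mpg ~ wt, data = mtcars)            # simpler, nested model
    m1 <- lm(mpg ~ wt + hp, data = mtcars)       # fuller model
    anova(m0, m1)                                # F test comparing the nested linear models

    g0 <- glm(am ~ wt, data = mtcars, family = binomial)
    g1 <- glm(am ~ wt + hp, data = mtcars, family = binomial)
    anova(g0, g1, test = "Chisq")                # deviance-based chi-square (likelihood ratio) test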
Topic 3: Out of sample predictive performance: cross validation and information criteria. In the previous sections, the focus was largely on how well a model fits or predicts the observed data. For reasons that will be discussed in this section, related to the concept of overfitting, this can be a misleading and possibly even meaningless means of model evaluation. Here, we describe how to measure out of sample predictive performance, which measures how well a model can generalize to new data. This is arguably the gold standard for evaluating any statistical model. A practical means to measure out of sample predictive performance is cross-validation, especially leave-one-out cross-validation. Leave-one-out cross-validation can, in relatively simple models, be approximated by the Akaike Information Criterion (AIC), which can be exceptionally simple to calculate. We will discuss how to interpret AIC values, and describe other related information criteria, some of which will be used in more detail in later sections.
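The sketch below illustrates, with a hand-rolled loop over the illustrative mtcars data, how leave-one-out cross-validation and AIC can be computed in base R:

    loo_errors <- sapply(seq_len(nrow(mtcars)), function(i) {
      fit <- lm(mpg ~ wt + hp, data = mtcars[-i, ])              # refit without observation i
      (mtcars$mpg[i] - predict(fit, newdata = mtcars[i, ]))^2    # squared out-of-sample error
    })
    sqrt(mean(loo_errors))                                       # leave-one-out RMSE

    AIC(lm(mpg ~ wt, data = mtcars),
        lm(mpg ~ wt + hp, data = mtcars))   # lower AIC indicates better expected out-of-sample fit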
Day 2 – Classes from 10:00 to 18:00
Topic 4: Variable selection. Variable selection is a type of nested model comparison. It is also one of the most widely used model selection methods, and variable selection of some kind is done routinely in almost all data analyses. Although we will have already discussed variable selection as part of Topic 2 above, we discuss the topic in more detail here. In particular, we cover stepwise regression (and its limitations), all subsets methods, ridge regression, the Lasso, and elastic nets.
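As an illustrative sketch, assuming the glmnet package for the penalized methods (the course's own choice of packages and data may differ):

    full <- lm(mpg ~ ., data = mtcars)
    step(full, direction = "both")                    # stepwise regression guided by AIC

    library(glmnet)
    x <- model.matrix(mpg ~ ., data = mtcars)[, -1]   # predictor matrix (drop the intercept column)
    y <- mtcars$mpg
    fit_ridge <- cv.glmnet(x, y, alpha = 0)           # ridge regression
    fit_lasso <- cv.glmnet(x, y, alpha = 1)           # Lasso
    fit_enet  <- cv.glmnet(x, y, alpha = 0.5)         # elastic net
    coef(fit_lasso, s = "lambda.min")                 # coefficients at the cross-validated lambda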
Topic 5: Model averaging. Rather than selecting one model from a set of candidates, it is arguably always better to perform model averaging, using all the candidate models weighted by their predictive performance. We show how to perform model averaging using information criteria.
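A minimal sketch of information-criterion based model averaging, computing Akaike weights by hand over a set of illustrative candidate models:

    models <- list(lm(mpg ~ wt, data = mtcars),
                   lm(mpg ~ wt + hp, data = mtcars),
                   lm(mpg ~ wt + hp + disp, data = mtcars))
    aics <- sapply(models, AIC)
    delta <- aics - min(aics)
    weights <- exp(-delta / 2) / sum(exp(-delta / 2))    # Akaike weights

    new_obs <- data.frame(wt = 3, hp = 120, disp = 200)  # a hypothetical new observation
    preds <- sapply(models, predict, newdata = new_obs)
    sum(weights * preds)                                 # model-averaged prediction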
Topic 6: Bayesian model comparison methods. Bayesian methods afford much greater flexibility and extensibility for model building than traditional methods. They also allow us to directly and easily compare completely unrelated statistical models of the same data using information criteria such as WAIC and LOOIC. Here, we will also discuss how Bayesian methods allow us to fit all models of potential interest to us, including cases where model fitting is computationally intractable using traditional methods (e.g., where optimization convergence fails). This therefore allows us to consider all models of potential interest, rather than just focusing on a limited subset where the traditional fitting algorithms succeed.
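As an illustrative sketch, assuming the brms and loo packages (the course may use other tools for Bayesian model fitting and comparison):

    library(brms)
    b1 <- brm(mpg ~ wt, data = mtcars)        # Bayesian regression model fitted via Stan
    b2 <- brm(mpg ~ wt + hp, data = mtcars)
    loo_compare(loo(b1), loo(b2))             # compare expected out-of-sample fit (LOOIC / elpd)
    waic(b1)                                  # WAIC for each model
    waic(b2)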
Dr. Mark Andrews
Google Scholar
Mark Andrews is a Senior Lecturer in the Psychology Department at Nottingham Trent University in Nottingham, England. Mark is a graduate of the National University of Ireland and obtained an MA and PhD from Cornell University in New York. Mark’s research focuses on developing and testing Bayesian models of human cognition, with particular focus on human language processing and human memory. Mark’s research also focuses on general Bayesian data analysis, particularly as applied to data from the social and behavioural sciences. Since 2015, he and his colleague Professor Thom Baguley have been funded by the UK’s ESRC funding body to provide intensive workshops on Bayesian data analysis for researchers in the social sciences.