05 Estimating Risk and Test Error

Stat 406

Geoff Pleiss, Trevor Campbell

Last modified – 23 September 2024

\[ \DeclareMathOperator*{\argmin}{argmin} \DeclareMathOperator*{\argmax}{argmax} \DeclareMathOperator*{\minimize}{minimize} \DeclareMathOperator*{\maximize}{maximize} \DeclareMathOperator*{\find}{find} \DeclareMathOperator{\st}{subject\,\,to} \newcommand{\E}{E} \newcommand{\Expect}[1]{\E\left[ #1 \right]} \newcommand{\Var}[1]{\mathrm{Var}\left[ #1 \right]} \newcommand{\Cov}[2]{\mathrm{Cov}\left[#1,\ #2\right]} \newcommand{\given}{\ \vert\ } \newcommand{\X}{\mathbf{X}} \newcommand{\x}{\mathbf{x}} \newcommand{\y}{\mathbf{y}} \newcommand{\P}{\mathcal{P}} \newcommand{\R}{\mathbb{R}} \newcommand{\norm}[1]{\left\lVert #1 \right\rVert} \newcommand{\snorm}[1]{\lVert #1 \rVert} \newcommand{\tr}[1]{\mbox{tr}(#1)} \newcommand{\brt}{\widehat{\beta}^R_{s}} \newcommand{\brl}{\widehat{\beta}^R_{\lambda}} \newcommand{\bls}{\widehat{\beta}_{ols}} \newcommand{\blt}{\widehat{\beta}^L_{s}} \newcommand{\bll}{\widehat{\beta}^L_{\lambda}} \newcommand{\U}{\mathbf{U}} \newcommand{\D}{\mathbf{D}} \newcommand{\V}{\mathbf{V}} \]

Last time

  1. What is a model (formal definition)?
  2. Evaluating models (risk/loss functions)
  3. Decomposing risk (bias, variance, irreducible error)

What is a model?

A model is a set of distributions that explain data \(\{ Z = (X, Y) \}\), i.e.

\[\mathcal{P} = \{ P: \quad Y \mid X \sim \mathcal N( f(X), \sigma^2) \quad \text{for some ``smooth'' }f \}\]

(Why do we have to specify that \(f\) is smooth? Why can’t it be any function?)

Goal of learning

Choose the \(P \in \mathcal P\) that makes the “best” predictions on new \(X, Y\) pairs.

(Next slide: how do we formalize “best”?)

How do we evaluate models?

\[\mathcal{P} = \{ P: \quad Y \mid X \sim \mathcal N( f(X), \sigma^2) \quad \text{for some ``smooth'' f} \}\]

  1. Specify how a \(P \in \mathcal P\) makes predictions \(\hat Y\) on new inputs \(X\).
    (E.g.: \(\hat Y = f(X)\) for \(P = \mathcal N(f(X), \sigma^2)\).)

  2. Introduce a loss function \(\ell(Y, \hat{Y})\) (a datapoint-level function).
    (E.g.: \(\ell(Y, \hat Y) = (Y - \hat Y)^2\))

  3. Define the test error of \(P \in \mathcal P\) as the expected loss (a population-level function):
    \(T_n(P) = E[\ell(Y, \hat Y)] = E[(Y - f(X))^2]\)

  4. The best model is the one that minimizes the test error
    (\(P^* = \argmin_{P \in \mathcal P} T_n(P)\))

Last time: when \(\ell(Y, \hat Y) = (Y - \hat Y)^2\), we showed that the regression function is the best model:

\[ \text{Regression function } \triangleq E[Y \mid X] = \argmin_{P \in \mathcal P} T_n(P) = \argmin_{P \in \mathcal P} E[\ell(Y, \hat Y)] \]
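
For reference, a sketch of the standard argument behind this claim (the derivation from last lecture, in brief): for any prediction rule \(\hat Y = g(X)\), add and subtract \(E[Y \mid X]\) inside the square; the cross term has zero expectation, so

\[ E\left[(Y - g(X))^2\right] = E\left[(Y - E[Y\mid X])^2\right] + E\left[(E[Y\mid X] - g(X))^2\right] \geq E\left[(Y - E[Y\mid X])^2\right], \]

with equality exactly when \(g(X) = E[Y \mid X]\).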

Are we done? Have we solved learning?

No! We don’t know what \(E[Y \mid X]\) is! We have to estimate it from data!

\[ \hat f(X) \approx E[Y \mid X] \]

(We’ll discuss various methods for producing \(\hat f(X)\) estimators throughout this course.)

Risk (Expected Test Error) and its Decomposition

Our estimator \(\hat f\) is a random variable (it depends on training sample).
So let’s consider the risk (the expected test error):

\[ R_n = E_{\hat f} \left[ T_n(\hat f) \right] = E_{\hat f, X, Y} \left[ \ell(Y, \hat f(X)) \right] \]

Note

Test error is a metric for a fixed \(\hat f\). It averages over all possible test points, but assumes a fixed training set.

Risk averages over everything that is random: (1) the test data point sampled from our population, and (2) the training data that produces \(\hat f\)

Risk (Expected Test Error) and its Decomposition

When \(\ell(Y, \hat Y) = (Y - \hat Y)^2\), the prediction risk of \(\hat f(X)\) decomposes into two factors:

\[ R_n \quad = \quad \underbrace{E_{\hat f, X, Y} \left[ \: \left( E[Y\mid X] - \hat f(X) \right)^2 \right]}_{(1)} \quad + \quad \underbrace{E_{X, Y} \left[ \: \left( Y - E[Y\mid X] \right)^2 \right]}_{(2)} \]

  1. Estimation error (or “reducible error”)
  2. Irreducible error (or “noise”)

The estimation error term further reduces into two components:

\[ \begin{aligned} \underbrace{E_{\hat f, X, Y} \left[ \: \left( E[Y\mid X] -\hat f(X) \right)^2 \right]}_{\text{Estimation error}} \quad &= \quad \underbrace{ E_{X, Y} \left[ \left( E[Y\mid X] - E \left[\hat f(X) \mid X\right] \right)^2 \right]}_{(A)} \quad \\ &+ \quad \underbrace{E_{\hat f, X} \left[ \: \left( E \left[\hat f(X) \mid X\right] -\hat f(X) \right)^2 \right]}_{(B)} \end{aligned} \]

  (A) Bias^2
  (B) Variance
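
Where does this second decomposition come from? A sketch of the standard argument (not spelled out on the slide): condition on \(X\), add and subtract \(E[\hat f(X) \mid X]\), and note that the cross term has zero conditional expectation:

\[ E_{\hat f}\left[ \left( E[Y\mid X] - \hat f(X) \right)^2 \given X \right] = \left( E[Y\mid X] - E\left[\hat f(X)\mid X\right] \right)^2 + E_{\hat f}\left[ \left( \hat f(X) - E\left[\hat f(X)\mid X\right] \right)^2 \given X \right] \]

Averaging over \(X\) then gives terms (A) and (B).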

Tip

Analogous decompositions hold for other loss/risk functions.

Code
# blue, red, green, orange are colour strings assumed to be defined elsewhere
# in the course slide setup (any four colours will do)
cols <- c(blue, red, green, orange)
par(mfrow = c(2, 2), bty = "n", ann = FALSE, xaxt = "n", yaxt = "n",
    family = "serif", mar = c(0, 0, 0, 0), oma = c(0, 2, 2, 0), pty = "s")
library(mvtnorm)
mv <- matrix(c(0, 0, 0, 0, -.5, -.5, -.5, -.5), 4, byrow = TRUE)
va <- matrix(c(.02, .02, .1, .1, .02, .02, .1, .1), 4, byrow = TRUE)

for (i in 1:4) {
  plot(0, 0, ylim = c(-2, 2), xlim = c(-2, 2), pch = 19, cex = 42,
       col = blue, ann = FALSE)
  points(0, 0, pch = 19, cex = 30, col = "white")
  points(0, 0, pch = 19, cex = 18, col = green)
  points(0, 0, pch = 19, cex = 6, col = orange)
  points(rmvnorm(20, mean = mv[i, ], sigma = diag(va[i, ])), cex = 1, pch = 19)
  switch(i,
    "1" = {
      mtext("low variance", 3, cex = 2)
      mtext("low bias", 2, cex = 2)
    },
    "2" = mtext("high variance", 3, cex = 2),
    "3" = mtext("high bias", 2, cex = 2)
  )
}

Sources of bias and variance

What conditions give rise to a high bias estimator?

  • Not enough covariates (small \(p\))
  • Model is too simple
  • Model is misspecified (doesn’t accurately represent the data generating process)
  • Bad training algorithm

What conditions give rise to a high variance estimator?

  • Not enough training samples (small \(n\))
  • Model is too complicated
  • Lots of irreducible noise in training data (if my model has power to fit noise, it will)
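
To make these failure modes concrete, here is a minimal simulation sketch (not from the slides; names such as f0 and simulate_fits are made up for illustration). The true regression function is taken to be a sine curve; a degree-1 polynomial underfits (high bias, low variance) while a degree-20 polynomial overfits (low bias, high variance).

set.seed(406)
f0 <- function(x) sin(2 * pi * x)              # assumed "true" regression function
simulate_fits <- function(degree, n_train = 30, n_reps = 200, x_test = 0.25) {
  preds <- replicate(n_reps, {
    x <- runif(n_train)
    y <- f0(x) + rnorm(n_train, sd = 0.3)      # irreducible noise
    fit <- lm(y ~ poly(x, degree))
    predict(fit, newdata = data.frame(x = x_test))
  })
  c(bias2 = (mean(preds) - f0(x_test))^2, variance = var(preds))
}
rbind(
  simple  = simulate_fits(degree = 1),         # too simple: high bias, low variance
  complex = simulate_fits(degree = 20)         # too flexible: low bias, high variance
)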

How do we estimate \(R_n\)?


\(R_n\) is a theoretical construct.
We can never know the true \(R_n\) for a given \(\hat f\). We also have to estimate it from data.

Don’t use training error

The training error in regression is

\[\widehat{R}_n(\widehat{f}) = \frac{1}{n} \sum_{i=1}^n (y_i - \hat{f}(x_i))^2\]

Here, the \(n\) is doubly used (annoying, but simple): \(n\) observations to create \(\widehat{f}\) and \(n\) terms in the sum.

Tip

We also call \(\hat R_n(\hat f)\) the empirical risk.

\(\hat R_n(\hat f)\) is a bad estimator for \(R_n\).
So we should never use it.

Why is \(\hat R_n\) a bad estimator of \(R_n\)?

  1. It doesn’t say anything about predictions on new data.
    (It’s a measure of how well the model fits a fixed set of training data.)

  2. It can be made arbitrarily small by making your model more complex.

1. It doesn’t say anything about predictions on new data.

These all have the same \(R^2\) and Training Error

Code
library(tidyverse) # pivot_longer(), ggplot(), and dplyr verbs used below
ans <- anscombe |>
  pivot_longer(everything(), names_to = c(".value", "set"), 
               names_pattern = "(.)(.)")
ggplot(ans, aes(x, y)) + 
  geom_point(colour = orange, size = 3) + 
  geom_smooth(method = "lm", se = FALSE, color = blue, linewidth = 2) +
  facet_wrap(~set, labeller = label_both)

ans |> 
  group_by(set) |> 
  summarise(
    R2 = summary(lm(y ~ x))$r.sq, 
    train_error = mean((y - predict(lm(y ~ x)))^2)
  ) |>
  kableExtra::kable(digits = 2)
set R2 train_error
1 0.67 1.25
2 0.67 1.25
3 0.67 1.25
4 0.67 1.25

2. It can be made arbitrarily small by making your model more complex.

Adding “junk” predictors increases \(R^2\) and decreases Training Error

n <- 100
p <- 10
q <- 0:30
x <- matrix(rnorm(n * (p + max(q))), nrow = n)
y <- x[, 1:p] %*% c(5:1, 1:5) + rnorm(n, 0, 10)

regress_on_junk <- function(q) {
  x <- x[, 1:(p + q)]
  mod <- lm(y ~ x)
  tibble(R2 = summary(mod)$r.sq,  train_error = mean((y - predict(mod))^2))
}
Code
map(q, regress_on_junk) |> 
  list_rbind() |>
  mutate(q = q) |>
  pivot_longer(-q) |>
  ggplot(aes(q, value, colour = name)) +
  geom_line(linewidth = 2) + xlab("q (number of junk covariates)") +
  scale_colour_manual(values = c(blue, orange), guide = "none") +
  facet_wrap(~ name, scales = "free_y")

Other things you can’t use

You should not use anova or the \(p\)-values from the lm output for this purpose.

These quantities test whether parameters are different from zero, under hypotheticals like "if you were to repeat the experiment many times" and "if the model were true".

In other words, they are useful for inference problems.

This is not the same as being useful for prediction problems (i.e. how to get small \(R_n\)).

Don’t use training error: the formal argument

Our training error \(\hat R_n(\hat f)\) is an estimator of \(R_n\).

So we can ask “is \(\widehat{R}_n(\hat{f})\) a good estimator for \(R_n\)?”

The error of our risk estimator

Let’s measure the error of our empirical risk estimator:

\[E[(R_n - \hat R_n(\hat f))^2]\] (What is the expectation with respect to?)

The error of our risk estimator

\[E[(R_n - \hat R_n(\hat f))^2]\]

  • \(R_n\) is deterministic (we have already averaged over both the test data and the training data)
  • \(\hat R_n(\hat f)\) depends only on the training data
  • So the expectation is with respect to our training dataset

As before, we can decompose the error of our risk estimator into bias and variance

\[ E[(R_n - \hat R_n(\hat f))^2] = \underbrace{( R_n - E[\hat R_n(\hat f)])^2}_{\text{bias}} + \underbrace{E[( \hat R_n(\hat f) - E[\hat R_n(\hat f)])^2]}_{\text{variance}} \]
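
(This is the usual mean-squared-error identity, a quick sketch of why: add and subtract \(E[\hat R_n(\hat f)]\); the cross term vanishes because \(R_n - E[\hat R_n(\hat f)]\) is a constant.)

\[ E[(R_n - \hat R_n(\hat f))^2] = E\left[ \left( (R_n - E[\hat R_n(\hat f)]) + (E[\hat R_n(\hat f)] - \hat R_n(\hat f)) \right)^2 \right] = ( R_n - E[\hat R_n(\hat f)])^2 + E[( \hat R_n(\hat f) - E[\hat R_n(\hat f)])^2] \]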

Is the bias of \(\hat R_n(\hat f)\) small or large? Why?

Is the bias of \(\hat R_n(\hat f)\) small or large? Why?

  • Assume we have a very complex model capable of (nearly) fitting our training data
    • I.e. \(\hat R_n(\hat f) \approx 0\)
  • \(\text{Bias}^2 = ( R_n - E[\hat R_n(\hat f)])^2 \approx ( R_n - 0 )^2 = R_n^2\)
  • (That’s the worst bias we could get! 😔)

Formalizing why \(\hat R_n(\hat f)\) is a bad estimator of \(R_n\)

Consider an alternative estimator built from \(m\) observations \(\{ (X_j, Y_j) \}_{j=1}^m\) that were not part of the training set: \[\tilde R_m(\hat f) = {\textstyle \frac{1}{m} \sum_{j=1}^m} \ell(Y_j, \hat f(X_j)). \] The error of this estimator can also be decomposed into bias and variance: \[ E[(R_n - \tilde R_m(\hat f))^2] = \underbrace{( R_n - E_{\hat f,X_j,Y_j}[\tilde R_m(\hat f)])^2}_{\text{bias}} + \underbrace{E_{\hat f,X_j,Y_j}[( \tilde R_m(\hat f) - E_{\hat f,X_j,Y_j}[\tilde R_m(\hat f)])^2]}_{\text{variance}} \]

Is the bias of \(\tilde R_m(\hat f)\) small or large? Why?

Is the bias of \(\tilde R_m(\hat f)\) small or large? Why?

\(\tilde R_m(\hat f)\) has zero bias!

\[ \begin{aligned} E_{\hat f,X_j,Y_j} \left[ \tilde R_m(\hat f) \right] &= E_{\hat f,X_j,Y_j} \left[ \frac{1}{m} \sum_{j=1}^m \ell(Y_j, \hat f(X_j)) \right] \\ &= \frac{1}{m} \sum_{j=1}^m E_{\hat f,X_j,Y_j} \left[ \ell(Y_j, \hat f(X_j)) \right] = R_n \end{aligned} \]

How to properly estimate \(R_n\)

Holdout sets

One option is to have a separate “holdout” or “validation” dataset.

Tip

This option follows the logic on the previous slide.
If we randomly “hold out” \(\{ (X_j, Y_j) \}_{j=1}^m\) from the training set, we can use this data to get an (nearly) unbiased estimator of \(R_n\). \[ R_n \approx \tilde R_m(\hat f) \triangleq {\textstyle{\frac 1 m \sum_{j=1}^m \ell ( Y_j - \hat Y_j(X_j))}} \]

👍 Estimates the test error

👍 Fast computationally

🤮 Estimate is random

🤮 Estimate has high variance (depends on 1 choice of split)

🤮 Estimate has a little bias (because we aren’t estimating \(\hat f\) from all of the training data)
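
To see these tradeoffs in action, here is a small simulation sketch (not from the slides; the data-generating model and names like gen_data and one_rep are made up for illustration). We compare the training error and a holdout estimate against the risk, approximated with a very large test set.

set.seed(406)
n <- 100; m <- 25; p_junk <- 20
gen_data <- function(n) {
  x <- matrix(rnorm(n * (1 + p_junk)), n)      # 1 useful covariate + 20 junk ones
  data.frame(y = 2 * x[, 1] + rnorm(n), x)
}
one_rep <- function() {
  dat <- gen_data(n)
  hold <- sample(n, m)
  fit_all  <- lm(y ~ ., data = dat)              # fit on all n observations
  fit_part <- lm(y ~ ., data = dat[-hold, ])     # fit on the n - m kept observations
  big_test <- gen_data(1e4)                      # stands in for the population
  c(train_err   = mean(residuals(fit_all)^2),
    holdout_err = mean((dat$y[hold] - predict(fit_part, dat[hold, ]))^2),
    approx_risk = mean((big_test$y - predict(fit_all, big_test))^2))
}
rowMeans(replicate(200, one_rep()))
# train_err is optimistically biased; holdout_err is unbiased for the risk of a
# model trained on n - m points, which is close to (but a bit above) R_n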

Aside

In my experience, CS has particular definitions of “training”, “validation”, and “test” data.

I think these are not quite the same as in Statistics.

  • Test data - Hypothetical data you don’t get to see, ever. Infinite amounts drawn from the population.
    • Expected test error or Risk is an expected value over this distribution. It’s not a sum over some data kept aside.
  • Sometimes I’ll give you “test data”. You pretend that it’s a good stand-in for that expectation and use it to evaluate how well the model you fit on the training data performs.
  • Training data - This is the data that you get to touch.
  • Validation set - Often, we need to choose models. One way to do this is to split off some of your training data and pretend that it’s like a “Test Set”.

When and how you split your training data can be very important.

Announcements Sept 24

  • Lab Section 03: Monday next week is a holiday, so your lab will be due Friday instead.
  • Everyone else’s lab: same time as usual.

Review of Risk (Estimation)

We fixed a bunch of subtle issues in the Sept 19 Risk Estimation slides.

And I’ve noticed some related confusion in my office hours, so it’s review time!

Tip

After this lecture, make sure to review the whole Risk Estimation slide deck to see all the fixed terminology/definitions/examples.

Risk vs. Test Error

  • Risk (\(R_n\)): expected error when the training data have not yet been observed (random!)
    • depends on true data distribution, predictor
  • Test Error (\(T_n\)): expected error when the training data have been observed (fixed!)
    • depends on true data distribution, predictor, training data
  • \(R_n = \E[T_n]\)
  • we mostly care about risk when designing a new predictor / estimator
    • we want to know how well it will work on future training and test data
  • we mostly care about test error when we’ve fit a predictor / estimator
    • we want to know how well it will work on future test data

An important clarification

Important

In previous lectures, we’ve used \(R_n(\hat f)\) to denote risk.

Confusing: the \(\hat f\) argument looks like a trained predictor, so \(R_n(\hat f)\) looks like a function of the training data. Not true!

We will just avoid this confusion from now on and use \(R_n\) for risk.

Risk vs. Test Error: Warmup in 1D

  • Model: \(\mathcal{P} = \{ P: \quad Y \sim \mathcal N(\mu, 1), \quad \mu\in\R\}\), loss \(\ell(y,\hat y) = (y-\hat y)^2\)
  • Training Data: \(Y_i\in\R\) from some unknown “true” \(P_0 \in \mathcal{P}\) (i.e., “true” \(\mu_0\))
  • Predictor: function of training data \(\hat\mu : \R^n \to \R\)
    • e.g., scaled empirical average \(\hat\mu(Y_{1:n}) = \alpha\frac{1}{n}\sum_{i=1}^n Y_i\) for \(\alpha > 0\)
  • Risk \(R_n\): expected loss over both training and test data \(R_n = E[\ell(Y,\hat\mu(Y_{1:n}))]\)
    • function of true dist \(P_0\) and predictor function \(\hat\mu(\cdot)\)
    • averages over randomness in training data
  • Test Error \(T_n\): expected loss over only test data \(T_n(\hat\mu) = E[\ell(Y,\hat\mu(Y_{1:n})) | Y_{1:n}]\)
    • function of true dist \(P_0\), predictor function \(\hat\mu(\cdot)\), and training data
    • training data (and trained predictor) are known/fixed

Risk vs. Test Error: Warmup in 1D

  • Model: \(\mathcal{P} = \{ P: \quad Y \sim \mathcal N(\mu, 1), \quad \mu\in\R\}\), loss \(\ell(y,\hat y) = (y-\hat y)^2\)
  • Training Data: \(Y_i\in\R\) from some unknown “true” \(P_0 \in \mathcal{P}\) (i.e., “true” \(\mu_0\))
  • Predictor: scaled empirical average \(\hat\mu(Y_{1:n}) = \alpha \frac{1}{n}\sum_{i=1}^n Y_i\) for a fixed \(\alpha > 0\)

(What can the risk and test error depend on?)

\[ R_n = E[(Y - \hat\mu(Y_{1:n}))^2] = \underbrace{1}_{\text{irr.}} + \underbrace{\mu^2_0(\alpha-1)^2}_{\text{bias}^2}+\underbrace{\alpha^2/n}_{\text{variance}}\]

\[ T_n = E[(Y - \hat\mu(Y_{1:n}))^2 |Y_{1:n}] = 1 + \left(\mu_0 - \alpha \frac{1}{n}\sum_{i=1}^n Y_i\right)^2\]

  • \(R_n\) is a function of the true distribution (\(\mu_0\)) and predictor (\(\alpha\)); not training data!
  • \(T_n\) is a function of \(\mu_0\), \(\alpha\) and training data
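
A quick Monte Carlo sketch (not from the slides) that checks the risk formula above: fix \(\mu_0\) and \(\alpha\), average \(T_n\) over many simulated training sets, and compare with \(1 + \mu_0^2(\alpha-1)^2 + \alpha^2/n\).

set.seed(406)
mu0 <- 2; alpha <- 0.9; n <- 20
# T_n for one training set: 1 + (mu0 - muhat)^2, with muhat = alpha * mean(Y)
test_errors <- replicate(1e5, {
  muhat <- alpha * mean(rnorm(n, mean = mu0))
  1 + (mu0 - muhat)^2
})
c(monte_carlo_risk = mean(test_errors),
  formula_risk     = 1 + mu0^2 * (alpha - 1)^2 + alpha^2 / n)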

Risk vs. Test Error: Regression

  • Model: \(\mathcal{P} = \{ P: \quad Y \mid X \sim \mathcal N( f(X), 1), \quad f(x)=ax, \, \text{for } a\in\R \}\)
  • Training Data: \(X_i,Y_i\in\R\) pairs from some unknown “true” \(P_0 \in \mathcal{P}\) (“true” \(a_0\))
  • Predictor: a function of the training data and a new test input \(\hat f : \R^n\times \R^{n} \times \R \to \R\)
    • e.g., the function \(\hat f(X_{1:n},Y_{1:n},x) = \beta \frac{Y_2 - Y_1}{X_2 - X_1} x\) for \(\beta > 0\)
  • Risk \(R_n\): avg loss over both training and test data \(R_n = E[\ell(Y, \hat f(X_{1:n}, Y_{1:n}, X))]\)
  • Test Error \(T_n\): avg loss over only test data \(T_n = E[\ell(Y, \hat f(X_{1:n}, Y_{1:n}, X)) \mid Y_{1:n},X_{1:n}]\)
    • training data fixed/known

Risk vs. Test Error: Regression

  • Model: \(\mathcal{P} = \{ P: \quad Y \mid X \sim \mathcal N( f(X), 1), \quad f(x)=ax, \, \text{for } a\in\R \}\) (“true” value \(a_0\))
  • Predictor: the function \(\hat f(X_{1:n}, Y_{1:n}, x) = \beta \frac{Y_2 - Y_1}{X_2 - X_1} x\)

(What can the risk and test error depend on?)

\[ \begin{aligned} R_n &= E[(Y - \hat f(X))^2] \\ &= \text{annoying algebra...}\\ &= \underbrace{1}_{\text{irr.}} + \underbrace{a_0^2(1-\beta)^2E[X^2]}_{\text{bias}^2} + \underbrace{2\beta^2E\left[(X_2-X_1)^{-2}\right]E[X^2]}_{\text{variance}}. \end{aligned} \]

\[ T_n = E[(Y - \hat f(X))^2|X_{1:n},Y_{1:n}] = 1 + E[X^2]\left(a_0 - \beta\frac{Y_2 - Y_1}{X_2-X_1}\right)^2 \]

  • Risk \(R_n\) depends only on \(a_0, \beta\) and \(X\)-marginal distribution
  • Test error \(T_n\) depends on training data too!

Estimating risk

We can’t compute \(R_n\) in practice (we don’t know the true data distribution)

Can we estimate risk (\(\hat R_n\)) with data? How do we measure the quality of our estimate?

\[E[(R_n - \hat R_n)^2]\] (What is the expectation with respect to?)

  • \(R_n\) is just a fixed value (we already averaged over test data and training data)
  • \(\hat R_n\) depends only on training data
  • So the expectation is with respect to our training dataset

As before, we can decompose the error of our risk estimator into bias and variance

\[ E[(R_n - \hat R_n)^2] = \underbrace{( R_n - E[\hat R_n])^2}_{\text{bias}} + \underbrace{E[( \hat R_n - E[\hat R_n])^2]}_{\text{variance}} \]

Estimating risk: empirical risk

\[ \text{Recall: }E[(R_n - \hat R_n)^2] = \underbrace{( R_n - E[\hat R_n])^2}_{\text{bias}} + \underbrace{E[( \hat R_n - E[\hat R_n])^2]}_{\text{variance}} \]

  • Empirical risk: just use the training loss \(\hat R_n = \frac{1}{n}\sum_{i=1}^n \ell(Y_i, \hat f(X_i))\)

(What is the bias and variance with a really flexible model?)

  • If our model is so flexible that it can nearly fit all data always, then \(\hat R_n \approx 0\)
  • \(\text{Variance} = E[( \hat R_n - E[\hat R_n])^2] \approx E[(0 - 0)^2] = 0\)
  • \(\text{Bias}^2 = ( R_n - E[\hat R_n])^2 \approx ( R_n - 0 )^2 = R_n^2\)

(Is that good or bad?)

Estimating risk: holdout sets

\[ \text{Recall: }E[(R_n - \hat R_n)^2] = \underbrace{( R_n - E[\hat R_n])^2}_{\text{bias}} + \underbrace{E[( \hat R_n - E[\hat R_n])^2]}_{\text{variance}} \] Holdout: train \(\hat f\) with \(n-m\) data, estimate with \(m\) data: \(\hat R_n = \frac{1}{m}\sum_{j=1}^{m} \ell(Y_j, \hat f(X_j))\)

  • bias (increasing, decreasing, or either with \(m\)?) \[ \text{Bias}^2 = (R_n - R_{n-m})^2 \]
  • variance (increasing, decreasing, or either with \(m\)?)

\[ \text{Variance} = \underbrace{\frac{1}{m}E[(\tilde \ell - E[\tilde\ell| \hat f])^2]}_{\text{estimation variance}} + \underbrace{E[(E[\tilde \ell| \hat f] - R_{n-m})^2]}_{\text{training variance}} \quad \text{where }\tilde \ell = \ell(Y_1, \hat f(X_1)) \]
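
(The variance expression above is the law of total variance applied to \(\hat R_n\), conditioning on the trained predictor \(\hat f\): given \(\hat f\), the \(m\) holdout losses are i.i.d. with mean \(E[\tilde\ell \given \hat f]\), and \(E\big[E[\tilde\ell \given \hat f]\big] = R_{n-m}\).)

\[ \Var{\hat R_n} = E\left[ \Var{\hat R_n \given \hat f} \right] + \Var{ E[\hat R_n \given \hat f] } = \frac{1}{m} E\left[ (\tilde\ell - E[\tilde\ell \given \hat f])^2 \right] + E\left[ (E[\tilde\ell \given \hat f] - R_{n-m})^2 \right] \]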

Estimating risk: leave-one-out cross-val. (LOO-CV)

  • LOO-CV estimate: \(\hat R_n = \frac{1}{n} \sum_{i=1}^n \tilde R_i\), where:
    • Remove the first observation and train (\(\hat f_1\)), estimate \(\tilde R_1 = \ell(Y_1, \hat f_1(X_1))\)
    • Remove the second observation and train (\(\hat f_2\)), estimate \(\tilde R_2 = \ell(Y_2, \hat f_2(X_2))\)
    • Wash, rinse, repeat…

(Bias? Hint: if we train with \(n-m\), we get \((R_n - R_{n-m})^2\)…)

  • Bias is \((R_n - R_{n-1})^2\) – quite small!
  • Variance is small too! Roughly: \(E[( \hat R_n - E[\hat R_n])^2]\approx \frac{1}{n} E[(\tilde R_1 - R_{n-1})^2]\)
    • assumption: ignoring \(\tilde R_j\), \(\tilde R_k\) covariance for \(j\neq k\)
    • reasonable if \(n\) large enough that one data point doesn’t influence \(\hat f\) too much
  • major downside: have to train \(n\) models (super expensive!)
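
A minimal LOO-CV sketch in R (not the course's implementation; it assumes a data frame with a response column named y and uses lm() as the fitting procedure):

loo_cv <- function(data) {
  n <- nrow(data)
  errs <- double(n)
  for (i in seq_len(n)) {
    fit <- lm(y ~ ., data = data[-i, ])          # train without observation i
    pred <- predict(fit, newdata = data[i, ])    # predict the held-out point
    errs[i] <- (data$y[i] - pred)^2
  }
  mean(errs)  # LOO-CV estimate of the risk
}

loo_cv(data.frame(x = rnorm(50), y = rnorm(50)))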

Picking up from last time…

K-fold CV

\(K\)-fold cross validation finds a decent tradeoff between these options.

The idea of \(K\)-fold CV is

  1. Divide the data into \(K\) groups.
  2. Leave one group out and train with the remaining \(K-1\) groups.
  3. Test on the held-out group. Calculate an average risk over these \(\sim n/K\) data.
  4. Repeat for all \(K\) groups.
  5. Average the average risks.

K-fold CV: illustration

Code
par(mar = c(0, 0, 0, 0))
plot(NA, NA, ylim = c(0, 5), xlim = c(0, 10), bty = "n", yaxt = "n", xaxt = "n")
rect(0, .1 + c(0, 2, 3, 4), 10, .9 + c(0, 2, 3, 4), col = blue, density = 10)
rect(c(0, 1, 2, 9), rev(.1 + c(0, 2, 3, 4)), c(1, 2, 3, 10), 
     rev(.9 + c(0, 2, 3, 4)), col = red, density = 10)
points(c(5, 5, 5), 1 + 1:3 / 4, pch = 19)
text(.5 + c(0, 1, 2, 9), .5 + c(4, 3, 2, 0), c("1", "2", "3", "K"), cex = 3, 
     col = red)
text(6, 4.5, "Training data", cex = 3, col = blue)
text(2, 1.5, "Validation data", cex = 3, col = red)

K-fold CV vs. LOO CV

  • 🎉 Less costly than LOO CV (i.e., actually possible; only need to train \(K\) times)
  • 💩 K-fold CV has higher sq. bias \((R_n - R_{n(1-1/K)})^2\) than LOO CV \((R_n - R_{n-1})^2\)
  • a bit painful to compare variance…
    • I hereby invoke the sacred incantation: “this exercise is left to the reader”

(Both bias terms are nonzero because the risk \(R_n\) depends on \(n\): training on fewer observations targets the risk at a smaller sample size.)

Tip

In practice, most people just default to using 5-fold or 10-fold. This is probably fine in most cases.

K-fold CV: Code

#' @param data The full data set
#' @param estimator Function. Has 1 argument (some data) and fits a model. 
#' @param predictor Function. Has 2 args (the fitted model, the_newdata) and produces predictions
#' @param error_fun Function. Has one arg: the test data, with fits added.
#' @param kfolds Integer. The number of folds.
kfold_cv <- function(data, estimator, predictor, error_fun, kfolds = 5) {
  n <- nrow(data)
  fold_labels <- sample(rep(1:kfolds, length.out = n))
  errors <- double(kfolds)
  for (fold in seq_len(kfolds)) {
    test_rows <- fold_labels == fold
    train <- data[!test_rows, ]
    test <- data[test_rows, ]
    current_model <- estimator(train)
    test$.preds <- predictor(current_model, test)
    errors[fold] <- error_fun(test)
  }
  mean(errors)
}
library(dplyr) # error_fun below uses mutate() and pull()
somedata <- data.frame(z = rnorm(100), x1 = rnorm(100), x2 = rnorm(100))
est <- function(dataset) lm(z ~ ., data = dataset)
pred <- function(mod, dataset) predict(mod, newdata = dataset)
error_fun <- function(testdata) mutate(testdata, errs = (z - .preds)^2) |> pull(errs) |> mean()
kfold_cv(somedata, est, pred, error_fun, 5)
[1] 0.8928686
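
A possible follow-up (hypothetical example, reusing the functions defined above): the same kfold_cv() can compare candidate models by their estimated risk, and setting kfolds = nrow(somedata) gives the LOO-CV estimate.

est_small <- function(dataset) lm(z ~ x1, data = dataset)
c(
  full  = kfold_cv(somedata, est, pred, error_fun, 5),
  small = kfold_cv(somedata, est_small, pred, error_fun, 5),
  loo   = kfold_cv(somedata, est, pred, error_fun, kfolds = nrow(somedata))
)

Note that the folds are re-randomized on each call, which is fine for a sketch; for a fair comparison in practice you would reuse the same fold assignment across models.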

Next time…

LOO tricks and “hatvalues”