11.5. The Error in Regression
To assess the accuracy of the regression estimate, we must quantify the amount of error in the estimate. The error in the regression estimate is called the residual and is defined as

\[
D ~ = ~ Y - \hat{Y}
\]
where \(\hat{Y} = \hat{a}(X-\mu_X) + \mu_Y\) is the regression estimate of \(Y\) based on \(X\).
Calculations become much easier if we express the residual \(D\) in terms of the deviations \(D_X = X - \mu_X\) and \(D_Y = Y - \mu_Y\):

\[
D ~ = ~ Y - \hat{Y} ~ = ~ (Y - \mu_Y) - \hat{a}(X - \mu_X) ~ = ~ D_Y - \hat{a}D_X
\]
Since \(E(D_X) = 0 = E(D_Y)\), we have \(E(D) = 0\).
This is consistent with what you learned in Data 8: No matter what the shape of the scatter diagram, the average of the residuals is \(0\).
In our probability world, “no matter what the shape of the scatter diagram” translates to “no matter what the joint distribution of \(X\) and \(Y\)”. Remember that we have made no assumptions about that joint distribution.
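As a sanity check, here is a minimal simulation sketch, assuming NumPy; the particular nonlinear joint distribution below is an arbitrary illustration, not one from the text. It builds the regression estimate from the empirical means, SDs, and correlation, and confirms that the residuals average out to \(0\).

```python
# Empirical check that residuals average to 0, no matter the joint
# distribution of X and Y. The distribution below is an arbitrary,
# deliberately non-normal choice for illustration.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.exponential(scale=2, size=n)
y = np.sqrt(x) + rng.uniform(-1, 1, size=n)   # nonlinear relation plus noise

mu_x, mu_y = x.mean(), y.mean()
r = np.corrcoef(x, y)[0, 1]
a_hat = r * y.std() / x.std()                 # slope: r * sigma_Y / sigma_X

y_hat = a_hat * (x - mu_x) + mu_y             # regression estimate of Y
residuals = y - y_hat

print(residuals.mean())                       # essentially 0 (up to floating point)
```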
11.5.1. Mean Squared Error of Regression
The mean squared error of regression is \(E\left( (Y - \hat{Y})^2 \right)\). That is just \(E(D^2)\), the expected squared residual.
Since \(E(D) = 0\), \(E(D^2) = Var(D)\). So the mean squared error of regression is the variance of the residual.
Let \(r(X,Y) = r\) for short. To calculate the mean squared error of regression, recall that \(\hat{a} = r\frac{\sigma_Y}{\sigma_X}\) and \(E(D_XD_Y) = r\sigma_X\sigma_Y\).
The mean squared error of regression is

\[
\begin{align*}
Var(D) ~ &= ~ Var(D_Y - \hat{a}D_X) \\
&= ~ Var(D_Y) + \hat{a}^2 Var(D_X) - 2\hat{a}E(D_XD_Y) \\
&= ~ \sigma_Y^2 + r^2\frac{\sigma_Y^2}{\sigma_X^2} \cdot \sigma_X^2 - 2r\frac{\sigma_Y}{\sigma_X} \cdot r\sigma_X\sigma_Y \\
&= ~ \sigma_Y^2 + r^2\sigma_Y^2 - 2r^2\sigma_Y^2 \\
&= ~ (1 - r^2)\sigma_Y^2
\end{align*}
\]
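The identity can be checked numerically. The sketch below, assuming NumPy (the joint distribution is again an arbitrary choice), compares the empirical mean squared residual with \((1-r^2)\sigma_Y^2\) computed from the sample moments.

```python
# Empirical check that the mean squared error of regression is (1 - r^2) * sigma_Y^2.

import numpy as np

rng = np.random.default_rng(1)
n = 100_000

x = rng.normal(size=n)
y = 2 * x + rng.gamma(shape=2, scale=1, size=n)   # correlated, with non-normal noise

mu_x, mu_y = x.mean(), y.mean()
sigma_x, sigma_y = x.std(), y.std()
r = np.corrcoef(x, y)[0, 1]

a_hat = r * sigma_y / sigma_x
residuals = y - (a_hat * (x - mu_x) + mu_y)

print(np.mean(residuals ** 2))       # mean squared error of regression
print((1 - r ** 2) * sigma_y ** 2)   # the formula; the two printed values agree
```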
11.5.2. SD of the Residual
The SD of the residual is therefore

\[
SD(D) ~ = ~ \sqrt{1 - r^2}\sigma_Y
\]

which is consistent with the Data 8 formula. For example, if \(r = 0.8\) then the SD of the residual is \(0.6\sigma_Y\), only \(60\%\) of the SD of \(Y\).
11.5.3. \(r\) As a Measure of Linear Association
The expectation of the residual is always \(0\). So if \(SD(D) \approx 0\) then \(D\) is pretty close to \(0\) with high probability, that is, \(Y\) is pretty close to \(\hat{Y}\). In other words, if the SD of the residual is small, then \(Y\) is pretty close to being a linear function of \(X\).
The SD of the residual is small if \(r\) is close to \(1\) or \(-1\). The closer \(r\) is to those extremes, the closer \(Y\) is to being a linear function of \(X\). If \(r = \pm 1\) then \(Y\) is a perfectly linear function of \(X\).
A way to visualize this is that if \(r\) is close to \(1\) or \(-1\), and you repeatedly simulate points \((X, Y)\), the points will lie very close to a straight line. In that sense \(r\) is a measure of how closely the scatter diagram is clustered around a straight line.
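Here is a minimal sketch of that visualization, assuming NumPy and Matplotlib; bivariate normal pairs are used purely as a convenient way to generate points with a specified correlation.

```python
# Scatter diagrams at increasing values of r: the clustering around a line tightens.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)

fig, axes = plt.subplots(1, 3, figsize=(12, 4), sharex=True, sharey=True)
for ax, r in zip(axes, [0.3, 0.9, 0.99]):
    cov = [[1, r], [r, 1]]        # unit SDs, correlation r
    pts = rng.multivariate_normal([0, 0], cov, size=500)
    ax.scatter(pts[:, 0], pts[:, 1], s=5)
    ax.set_title(f'r = {r}')
plt.show()
```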
The case \(r=0\) is worth examining. In that case we say that \(X\) and \(Y\) are “uncorrelated”. Because \(\hat{a} = 0\), the equation of the regression line is simply \(\hat{Y} = \mu_Y\). That’s the horizontal line at \(\mu_Y\); your prediction for \(Y\) is \(\mu_Y\) no matter what the value of \(X\) is. The mean squared error is therefore \(E\big( (Y-\mu_Y)^2 \big) = \sigma_Y^2\), which is exactly what you get by plugging \(r=0\) into the expression \((1 - r^2)\sigma_Y^2\).
This shows that when \(X\) and \(Y\) are uncorrelated there is no benefit in using linear regression to estimate \(Y\) based on \(X\). In this sense too, \(r\) quantifies the amount of linear association between \(X\) and \(Y\).
In exercises you will see that it is possible for \(X\) and \(Y\) to be uncorrelated and have a very strong non-linear association. So it is important to keep in mind that \(r\) measures only linear association.
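One standard instance of this phenomenon (possibly different from the one in the exercises): let \(X\) be uniform on \((-1, 1)\) and \(Y = X^2\). Then \(Y\) is completely determined by \(X\), yet \(Cov(X, Y) = E(X^3) - E(X)E(X^2) = 0\) by symmetry, so \(r = 0\). A quick check, assuming NumPy:

```python
# Zero correlation with perfect nonlinear dependence: X uniform on (-1, 1), Y = X^2.

import numpy as np

rng = np.random.default_rng(3)

x = rng.uniform(-1, 1, size=1_000_000)
y = x ** 2                          # Y is a deterministic function of X

print(np.corrcoef(x, y)[0, 1])      # approximately 0: no linear association
```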