In statistics, Cook's distance or Cook's D is a commonly used estimate of the influence of a data point when performing a least-squares regression analysis. [1] In a practical ordinary least squares analysis, Cook's distance can be used in several ways: to indicate influential data points that are particularly worth checking for validity, or to indicate regions of the design space where it would be good to be able to obtain more data points.
Although the raw values resulting from the equations are different, Cook's distance and DFFITS are conceptually identical, and there is a closed-form formula to convert one value to the other. [3]
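As a minimal sketch of how Cook's distance flags an influential observation, the following computes it directly from the hat-matrix leverages and residuals of an ordinary least-squares fit using the standard formula \(D_i = \frac{e_i^2}{p\,s^2}\cdot\frac{h_{ii}}{(1-h_{ii})^2}\). The data, variable names, and injected outlier are illustrative, not from the source:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=n)
y[0] += 10.0  # inject one influential outlier at index 0

X = np.column_stack([np.ones(n), x])          # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares coefficients
resid = y - X @ beta
p = X.shape[1]                                # number of parameters
s2 = resid @ resid / (n - p)                  # residual variance estimate
H = X @ np.linalg.inv(X.T @ X) @ X.T          # hat matrix
h = np.diag(H)                                # leverages h_ii

# Cook's distance: squared residual scaled by leverage
cooks_d = resid**2 / (p * s2) * h / (1 - h)**2
print(np.argmax(cooks_d))                     # the injected outlier stands out
```

A common rule of thumb treats points with \(D_i\) well above the rest (or above 4/n) as worth checking for validity.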
Specifically, for an \(n \times p\) matrix \(\mathbf{X}\), the squared Mahalanobis distance of \(\mathbf{x}_i\) (where \(\mathbf{x}_i\) is the \(i\)-th row of \(\mathbf{X}\)) from the vector of means \(\hat{\boldsymbol{\mu}} = \frac{1}{n}\sum_{j=1}^{n}\mathbf{x}_j\) of length \(p\) is \(d_i^2 = (\mathbf{x}_i - \hat{\boldsymbol{\mu}})^\mathsf{T}\,\mathbf{S}^{-1}\,(\mathbf{x}_i - \hat{\boldsymbol{\mu}})\), where \(\mathbf{S}\) is the estimated covariance matrix of the \(\mathbf{x}_i\)'s. This is related to the leverage \(h_{ii}\) of the hat matrix of \(\mathbf{X}\) after appending a column vector of 1's to it.
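The relationship can be checked numerically. Assuming the covariance is estimated with denominator \(n-1\) (as NumPy's `np.cov` does by default), the leverages of the intercept-augmented design satisfy the identity \(h_{ii} = \frac{1}{n} + \frac{d_i^2}{n-1}\); the data below is synthetic and only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.normal(size=(n, p))                  # raw predictors, no intercept yet

# Squared Mahalanobis distance of each row from the column means
mu = X.mean(axis=0)
S = np.cov(X, rowvar=False)                  # covariance with denominator n-1
Sinv = np.linalg.inv(S)
d2 = np.einsum('ij,jk,ik->i', X - mu, Sinv, X - mu)

# Leverages from the hat matrix after appending a column of 1's
Z = np.column_stack([np.ones(n), X])
h = np.diag(Z @ np.linalg.inv(Z.T @ Z) @ Z.T)

# Identity relating leverage and Mahalanobis distance
print(np.allclose(h, 1/n + d2/(n - 1)))
```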
Various methods have been proposed for measuring influence. [3] [4] Assume an estimated regression \(\mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{e}\), where \(\mathbf{y}\) is an \(n \times 1\) column vector for the response variable, \(\mathbf{X}\) is the \(n \times k\) design matrix of explanatory variables (including a constant), \(\mathbf{e}\) is the \(n \times 1\) residual vector, and \(\mathbf{b}\) is a \(k \times 1\) vector of estimates of some population parameter \(\boldsymbol{\beta}\).
In regression problems, an alternative approach may be to only exclude points which exhibit a large degree of influence on the estimated coefficients, using a measure such as Cook's distance. [30] If a data point (or points) is excluded from the data analysis, this should be clearly stated on any subsequent report.
Outliers would greatly change the estimate of location if the arithmetic average were used as a summary statistic of location. The problem is that the arithmetic mean is very sensitive to the inclusion of any outliers; in statistical terminology, the arithmetic mean is not robust.
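A tiny illustration of this non-robustness, with made-up numbers: a single gross outlier moves the mean dramatically while the median barely shifts.

```python
import numpy as np

data = np.array([2.1, 2.3, 1.9, 2.0, 2.2])
with_outlier = np.append(data, 100.0)   # one gross outlier

print(np.mean(data), np.mean(with_outlier))      # 2.1 jumps to about 18.42
print(np.median(data), np.median(with_outlier))  # 2.1 only moves to 2.15
```

The median is the classic robust alternative here; its breakdown point is 50%, versus 0% for the mean.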
A simple example is fitting a line in two dimensions to a set of observations. Assuming that this set contains both inliers (points which approximately fit a line) and outliers (points which do not), a simple least-squares line fit will generally produce a line that fits the combined data, inliers and outliers alike, badly.
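A sketch of that failure mode on synthetic data (all values illustrative): a least-squares fit with a cluster of outliers included is dragged far from the line recovered from the inliers alone.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 40)
y = 1.0 + 2.0 * x + rng.normal(scale=0.2, size=40)  # inliers near y = 1 + 2x
y[:8] = rng.uniform(30, 40, size=8)                 # a cluster of outliers at low x

A = np.column_stack([np.ones_like(x), x])
coef_all, *_ = np.linalg.lstsq(A, y, rcond=None)        # fit to everything
coef_in, *_ = np.linalg.lstsq(A[8:], y[8:], rcond=None) # fit to inliers only

print(coef_all)  # intercept and slope badly distorted by the outliers
print(coef_in)   # close to the true (1.0, 2.0)
```

Robust estimators such as RANSAC address exactly this: they repeatedly fit to random minimal subsets and keep the model with the largest consensus set of inliers.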
The usual estimate of \(\sigma^2\), used to form the internally studentized residual, is \(\hat{\sigma}^2 = \frac{1}{n-m}\sum_{j=1}^{n}\hat{\varepsilon}_j^{\,2},\) where \(m\) is the number of parameters in the model (2 in our example). But if the \(i\)-th case is suspected of being improbably large, then it would also not be normally distributed.
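Putting the pieces together, a minimal sketch (synthetic data, names assumed for illustration) of internally studentized residuals \(t_i = \hat{\varepsilon}_i / (\hat{\sigma}\sqrt{1-h_{ii}})\), built from that estimate of \(\sigma^2\) and the leverages:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 25
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x])
m = X.shape[1]                                   # number of parameters (2 here)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta                             # raw residuals
sigma2 = resid @ resid / (n - m)                 # usual estimate of sigma^2
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)    # leverages h_ii

# Internally studentized residuals: each residual scaled by its own
# estimated standard deviation, which shrinks with leverage
t_internal = resid / np.sqrt(sigma2 * (1 - h))
```

The "externally" studentized variant instead re-estimates \(\sigma^2\) with the \(i\)-th observation excluded, which is the prudent choice when that case is suspected of being improbably large.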