From AMS Glossary
Any procedure that involves minimizing the sum of squared differences.
For example, the mean of a set of values deviates less from those values, in the squared sense, than any other single constant does. This procedure is most widely used to obtain the constants of a representation of a known variable Y in terms of others Xi. Let Y(s) be represented by

Y(s) ≈ a1 f1[Xi(s)] + a2 f2[Xi(s)] + … + aN fN[Xi(s)].
The an's are the constants to be determined, the fn's are arbitrary functions, and s is a parameter common to Y and the Xi. N is usually far smaller than the number of known values of Y and Xi. The system of equations being overdetermined, the constants an must be "fitted." The least squares determination of this "fit" proceeds by summing (or integrating, when Y and Xi are known continuously) the squared errors

Σs { Y(s) − (a1 f1[Xi(s)] + … + aN fN[Xi(s)]) }²
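A minimal numerical sketch of this overdetermined fit (the data, basis functions f1, f2, and variable names below are illustrative, not part of the glossary entry): each basis function evaluated at the known values of s forms one column of a design matrix, and the constants an are found by solving the resulting system in the least squares sense.

```python
import numpy as np

# Illustrative setup: one predictor X(s) sampled at 50 values of s,
# with basis functions f1(x) = 1 and f2(x) = x (a straight-line fit).
rng = np.random.default_rng(0)
s = np.linspace(0.0, 1.0, 50)      # parameter common to Y and X
X = s                              # known predictor values
Y = 2.0 + 3.0 * X + rng.normal(0.0, 0.1, s.size)  # known Y values

# Design matrix: one column per basis function fn[X(s)].
A = np.column_stack([np.ones_like(X), X])

# Minimize sum_s ( Y(s) - sum_n an * fn[X(s)] )**2 over the an's.
a, residuals, rank, sv = np.linalg.lstsq(A, Y, rcond=None)
```

Because there are 50 equations but only 2 unknowns, no exact solution exists; `lstsq` returns the a1, a2 that minimize the summed squared error.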
and minimizing the sum with respect to the an's. For example, if fn[Xi(s)] ≡ Xi(s), then the regression function is being determined; and when fn[Xi(s)] ≡ cos nXi(s) or sin nXi(s), then Y is being represented by a multidimensional Fourier series. This direct solution is feasible only when the unknown constants an enter linearly. The method of least squares was described independently by Legendre in 1805, Gauss in 1809, and Laplace in 1812.
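The trigonometric special case above can be sketched the same way (the signal and the choice of two harmonics are illustrative assumptions): with fn taken as cosines and sines of the known variable, the coefficients still enter linearly, so the same least squares machinery applies.

```python
import numpy as np

# Illustrative signal: two Fourier components plus noise.
rng = np.random.default_rng(1)
s = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
X = s  # here the known variable is the parameter itself
Y = 1.5 * np.cos(X) - 0.5 * np.sin(2.0 * X) + rng.normal(0.0, 0.05, s.size)

# Columns: cos(n X) and sin(n X) for n = 1, 2 -- a truncated Fourier basis.
A = np.column_stack([np.cos(X), np.sin(X), np.cos(2 * X), np.sin(2 * X)])

# The Fourier coefficients are the least squares solution.
coeffs, *_ = np.linalg.lstsq(A, Y, rcond=None)
```

The basis is nonlinear in X, but the unknowns multiply it linearly, which is exactly the condition the glossary entry states for the direct solution.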