From AMS Glossary
The inherent imprecision of a given process of measurement; the unpredictable component of repeated independent measurements on the same object under sensibly uniform conditions.
It is found experimentally that, given sufficient refinement of reading, a series of independent measurements x1, x2, ..., xn will vary one from another even when conditions are most stringently controlled. Hence, any such measurement xi may be regarded as composed of two terms:
xi = μ + vi,

where μ (ordinarily the true value) is a numerical constant common to all members of the series and vi, the random error, is an unpredictable deviation from μ. The principal conclusion of classical investigations of errors of measurement (by Gauss and Laplace) was that, as a consequence of the central limit theorem, repeated measurements under controlled conditions usually follow the normal distribution, and the corresponding distribution of the random error is known as the error distribution.
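The decomposition xi = μ + vi can be illustrated with a minimal simulation. The sketch below (all values assumed for illustration: a true value mu of 20.0 and an error spread sigma of 0.5) draws repeated measurements whose random errors follow a normal error distribution, then checks that the sample mean and sample standard deviation recover the assumed constants:

```python
import random
import statistics

# Assumed constants for illustration only:
mu = 20.0      # the true value, common to all measurements
sigma = 0.5    # spread of the random error v_i
random.seed(42)

# Each measurement x_i = mu + v_i, with v_i drawn from a
# normal (Gaussian) error distribution of mean 0.
measurements = [mu + random.gauss(0.0, sigma) for _ in range(1000)]

# The sample mean estimates mu; the sample standard
# deviation estimates the spread sigma of the random error.
print(statistics.mean(measurements))
print(statistics.stdev(measurements))
```

With 1000 repetitions the sample mean lands close to mu, since the unpredictable deviations vi average toward zero, which is the practical content of calling vi a random rather than systematic error.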