Uncertainty in Measurement – Accuracy | Precision | Mean | Standard Deviation | Errors

Any measurement, no matter how carefully it is made, carries some amount of error. The true value of a quantity cannot be measured with infinite precision, because small, uncontrollable variations always affect the result. The difference between a measured quantity and what is considered to be the true value is known as the uncertainty in the measurement. Two concepts that describe the quality of measurements are accuracy and precision.


Accuracy and Precision

The accuracy of a measurement refers to how close the measured value is to the true or accepted value. Precision refers to the agreement between two or more measurements that have been carried out in exactly the same way.

It must be noted that precision has nothing to do with the true or accepted value of a measurement. Precision is determined with replicate measurements, which are obtained when a number of samples are analyzed in exactly the same way. It is therefore quite possible to be very precise and totally inaccurate.





Accuracy can be expressed in terms of either absolute or relative error. The absolute error (E) is found by subtracting the true or accepted value (Xt) from the measured value (Xm).

E = Xm – Xt

The value of the absolute error may be positive or negative. The relative error (Er) is a measure of the absolute error relative to the true or accepted value.

Er = (Xm – Xt) / Xt
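As a minimal sketch of the two formulas, suppose a volume of 21.75 cm3 is measured against a hypothetical accepted value of 22.00 cm3 (both values chosen purely for illustration):

```python
# Absolute and relative error for a single measurement.
# The accepted and measured values below are hypothetical examples.
x_true = 22.00   # accepted value (cm^3)
x_meas = 21.75   # measured value (cm^3)

E = x_meas - x_true             # absolute error; may be positive or negative
Er = (x_meas - x_true) / x_true # relative error, often quoted as a percentage

print(f"E  = {E:.2f} cm^3")
print(f"Er = {Er * 100:.2f} %")
```

Here the absolute error is –0.25 cm3, and the negative sign shows the measurement falls below the accepted value.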

Random or Indeterminate Errors & Systematic or Determinate Errors

Random errors arise from inherent fluctuations in any measuring apparatus and from the experimenter’s inability to take the same measurement in exactly the same way each time. Even the process itself may introduce variables that cause measurements to fluctuate.


There are many sources of random error associated with calibration. These are small and uncontrollable variables such as:

–          visual judgment with respect to reading the markings on the glassware and the thermometer

–          temperature fluctuations, which affect the volume of the glassware, the viscosity of the liquid and the performance of the balance

–          air currents that cause variations in the balance readings

Random errors affect the precision of a measurement. Precision is usually measured in terms of the deviation of a set of results from the mean value of the set. This is measured using the standard deviation (s).


Average or mean value

The mean is calculated by dividing the sum of the replicate measurements by the number of measurements in the set.

Example 1


Calculate the mean value from the data in table 1.

Table 1
Measurement:   1       2       3       4       5       6
Volume (cm3):  22.10   23.09   20.01   24.00   21.11   20.20

Mean = (22.10 + 23.09 + 20.01 + 24.00 + 21.11 + 20.20) / 6

= 21.75 cm3
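As a quick check, the mean above can be reproduced with Python’s standard statistics module, using the six replicate measurements from table 1:

```python
import statistics

# Replicate measurements from table 1 (cm^3)
data = [22.10, 23.09, 20.01, 24.00, 21.11, 20.20]

# The mean is the sum of the replicates divided by the number of measurements
mean = statistics.mean(data)

print(f"Mean = {mean:.2f} cm^3")  # prints Mean = 21.75 cm^3
```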

Standard Deviation

The standard deviation (s) is a measure of the variation of a set of measurements about its mean value. It is typically called the uncertainty in a measurement, and it tells how closely the values bunch together around the mean. For n replicate measurements x1, x2, …, xn with mean x̄, the sample standard deviation is

s = √[ Σ(xi – x̄)2 / (n – 1) ]

Calculate the standard deviation for the data in table 1.
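A sketch of the calculation in Python, again using the six values from table 1 (note that statistics.stdev applies the sample formula with n – 1 in the denominator):

```python
import math
import statistics

# Replicate measurements from table 1 (cm^3)
data = [22.10, 23.09, 20.01, 24.00, 21.11, 20.20]

mean = statistics.mean(data)

# Sample standard deviation: s = sqrt(sum((x - mean)^2) / (n - 1))
s_manual = math.sqrt(sum((x - mean) ** 2 for x in data) / (len(data) - 1))

# statistics.stdev uses the same n - 1 (sample) formula
s = statistics.stdev(data)

print(f"s = {s:.2f} cm^3")
```

For this data set the standard deviation works out to about 1.60 cm3, which reflects the fairly wide scatter of the six replicates about the mean.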

Systematic Errors

A systematic error is a consistent difference between a measurement and its true value that is not due to random chance. It affects all measurements in the data set in the same way each time a measurement is made. There are three sources of systematic errors:

  1. Instrument errors – these are caused by faults such as incorrect calibration, instruments being used under conditions different from those in which they were calibrated, or an unstable power supply. These errors can be eliminated by calibration or by checking the instrument against a standard.
  2. Method errors – these arise from the behaviour of reagents and reactions, such as incompleteness of a reaction or the occurrence of side reactions that interfere with the measuring process. These errors are difficult to detect and correct.
  3. Personal errors – these result from personal judgement, such as deciding the end point in a titration or estimating a measurement between two scale markings. These errors can be minimized by care and self-discipline.

These errors affect the accuracy of a measurement. When the magnitude of the error is independent of the size of the sample being measured, the error is referred to as a constant error. This means that whether a small or large sample is used for analysis, the magnitude of the error is the same. Constant errors are minimized by using as large a sample as possible. When errors vary with the size of the sample, they are referred to as proportional errors. This means that the magnitude of the error increases or decreases as the size of the sample increases or decreases.
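A hypothetical illustration of why constant errors are minimized by using a larger sample: if a balance always reads 0.5 mg high (values chosen purely for illustration), the absolute error is fixed, but the relative error shrinks as the sample mass grows.

```python
# Hypothetical constant error: the balance always reads 0.5 mg high.
constant_error = 0.5  # mg, same magnitude regardless of sample size

for sample_mass in (10.0, 100.0, 1000.0):  # mg
    # Relative error as a percentage of the sample mass
    relative = constant_error / sample_mass * 100
    print(f"{sample_mass:7.1f} mg sample -> relative error {relative:.2f} %")
```

The relative error drops from 5 % for a 10 mg sample to 0.05 % for a 1000 mg sample, even though the absolute error never changes.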
