The uncertainty of measurements

Last Thursday, I got some interesting insights from Lars Kutzbach’s lecture. First of all: although the two terms are often used in similar contexts, error and uncertainty are entirely different things.

Error can only be defined if something like “the truth” exists. Error is then the difference between an observed/measured value and this (possibly unknown) truth.

Uncertainty arises when the truth is in fact not known: all we can say is that the truth lies somewhere within an uncertainty region.

From here, Lars derived several further terms. Trueness is the absence of systematic error. In other words, if an instrument measures some observable, it may have a bias. To reduce the error, one can subtract this systematic bias from the observed/measured values and obtain values that are closer to the truth; trueness thereby improves.

Precision, on the other hand, describes how closely the observed/measured values cluster around the truth (or, if trueness is poor, around a biased value). If both precision and trueness are high, then accuracy is high, since the measured/observed values lie very close to the truth.
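These relationships can be made concrete with a small simulation. The sketch below uses made-up numbers (the “truth”, the bias, and the noise level are all hypothetical) to show how the mean of repeated measurements reveals the systematic bias (trueness), how the scatter reflects precision, and how subtracting a known bias improves accuracy:

```python
# Sketch: trueness, precision, and accuracy with simulated measurements.
# TRUTH, BIAS, and NOISE are hypothetical values chosen for illustration.
import random
import statistics

random.seed(42)

TRUTH = 20.0   # hypothetical true value of the observable
BIAS = 1.5     # systematic error of the instrument
NOISE = 0.3    # random scatter (controls precision)

# Simulate repeated measurements: truth + systematic bias + random noise
measurements = [TRUTH + BIAS + random.gauss(0.0, NOISE) for _ in range(1000)]

mean = statistics.mean(measurements)
spread = statistics.stdev(measurements)

# Trueness: distance of the mean from the truth (dominated by the bias)
print(f"systematic offset: {mean - TRUTH:.2f}")  # close to BIAS

# Precision: scatter of the values around their mean (dominated by NOISE)
print(f"scatter: {spread:.2f}")                  # close to NOISE

# Correcting for the (here known) bias improves trueness, hence accuracy
corrected_mean = mean - BIAS
print(f"offset after bias correction: {corrected_mean - TRUTH:.2f}")
```

In a real experiment the bias is of course not known exactly; it has to be estimated, e.g. by measuring a reference standard.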

Since we usually do not know the truth, uncertainty remains. We know that the mean value is close to the truth (when trueness is high), and we know the (usually Gaussian) distribution of observed values around it. Given only the mean and the distribution, we can therefore state that the truth lies, with high probability, within an uncertainty region around the mean.
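As a rough sketch of how such an uncertainty region is constructed in practice: assuming Gaussian scatter, an interval of about two standard errors around the mean covers the truth in roughly 95% of repeated experiments (the true value and noise level below are again made up):

```python
# Sketch: deriving an uncertainty region from repeated measurements,
# assuming the scatter is Gaussian. TRUTH is hypothetical and, in a
# real experiment, unknown.
import random
import statistics

random.seed(7)

TRUTH = 12.0
samples = [TRUTH + random.gauss(0.0, 0.5) for _ in range(400)]

mean = statistics.mean(samples)
# Standard error of the mean: scatter shrinks with sqrt(sample size)
stderr = statistics.stdev(samples) / len(samples) ** 0.5

lower, upper = mean - 2 * stderr, mean + 2 * stderr
print(f"value: {mean:.2f} +/- {2 * stderr:.2f}")
print(f"uncertainty region: [{lower:.2f}, {upper:.2f}]")
```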

Therefore, Lars concluded with the (quite bold) statement:

measurement values without mentioning uncertainty are useless!

One question remained open: how does uncertainty propagate? If we observe something indirectly (e.g. by analyzing a spectrum), a measurement model underlies the observation, so the original uncertainty is propagated through that model.
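One common answer is Monte Carlo propagation: sample the input distribution and push every sample through the measurement model. The sketch below uses a made-up quadratic model standing in for something like spectrum analysis, and compares the result with the classical first-order (linear) error-propagation estimate:

```python
# Sketch: propagating measurement uncertainty through a model via
# Monte Carlo sampling. The model f is a hypothetical calibration
# curve, not any specific instrument's.
import random
import statistics

random.seed(1)

def model(x):
    """Hypothetical measurement model mapping a raw signal to the observable."""
    return 3.0 * x + 0.5 * x ** 2

# Raw reading with its (assumed Gaussian) uncertainty
raw_mean, raw_std = 2.0, 0.1

# Sample the input distribution and push every sample through the model
outputs = [model(random.gauss(raw_mean, raw_std)) for _ in range(10000)]

out_mean = statistics.mean(outputs)
out_std = statistics.stdev(outputs)
print(f"derived value: {out_mean:.2f} +/- {out_std:.2f}")

# First-order propagation for comparison: sigma_out ~ |f'(x)| * sigma_in,
# with f'(x) = 3 + x evaluated at the mean input
deriv = 3.0 + raw_mean
print(f"linear estimate of sigma_out: {deriv * raw_std:.2f}")
```

For a nearly linear model and small input uncertainty the two estimates agree closely; for strongly nonlinear models the Monte Carlo result is the more trustworthy one.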
