## What is Measurement?

A measurement tells us about a property of an object: how heavy it is, how hot, or how long. A measurement assigns a credible value to that property. Measurements are always carried out using an instrument suited to the task; rulers, stopwatches, weighing scales and thermometers are all examples of measuring instruments.

Some processes might seem to be measurements but are not. For example, comparing two pieces of string to see which is longer is not really a measurement.

Counting goods is not normally viewed as measurement either. Nor is a test: a test is carried out to obtain a positive or negative result, or to confirm or reject a parameter.

## Uncertainty of Measurement

Uncertainty of measurement is the doubt that exists regarding the value recorded during any measurement. You might think that well-made rulers, clocks and thermometers should be trustworthy and give the right answers. But for every measurement, even the most carefully carried out, there is always some element of doubt about the result.

**Uncertainty of measurement** acknowledges that no measurement can be perfect and is defined as ‘*the parameter, associated with the result of a measurement, that characterises the dispersion of the values that could reasonably be attributed to the thing being measured*’. The uncertainty of measurement is typically expressed as a range of values within which the value is estimated to lie, at a given statistical confidence. It does not attempt to define or rely on one unique true value.

As there is always a margin of doubt in any measurement, we need to ask ‘How big is the margin?’ and ‘How confident are we in it?’ Both numbers are needed to quantify an uncertainty: one is the width of the margin, or interval; the other is a confidence level stating how sure we are that the ‘true value’ lies within that margin.
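As a minimal sketch of how these two numbers combine in practice: in the common convention, a standard uncertainty u is multiplied by a coverage factor k (k = 2 corresponds to roughly 95% confidence for a normal distribution) to give the expanded uncertainty. The function names and values below are illustrative, not from the text.

```python
def expanded_uncertainty(u: float, k: float = 2.0) -> float:
    """Expanded uncertainty U = k * u."""
    return k * u

def report(value: float, u: float, k: float = 2.0) -> str:
    """Format a result as 'value ± U (k = ...)'."""
    return f"{value} ± {expanded_uncertainty(u, k)} (k = {k})"

# A 100 Nm result with standard uncertainty 0.25 Nm, reported at k = 2:
print(report(100.0, 0.25))  # "100.0 ± 0.5 (k = 2.0)"
```

The interval (width) and the coverage factor (confidence level) together make the statement of uncertainty complete.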

There are two ways of stating measurement error and uncertainty over the entire range of a measuring instrument:

1. Percentage of full-scale deflection (FSD)

2. Percentage of reading (indicated value)

The difference between the two concepts becomes obvious when an instrument is operating at the bottom of its range. The example below illustrates full-scale deflection and how it differs from percentage of reading.

Assume you have a torque tester with a maximum capacity of 100 Nm, and that the stated uncertainty is:

**Case 1:** At ±0.5% FSD, the uncertainty is 0.5 Nm for the entire range. At 100 Nm this represents the “best case” uncertainty of the measurement. However, when a lower part of the range is used, this fixed 0.5 Nm becomes more significant.

**Case 2:** At ±0.5% of reading, the uncertainty at 100 Nm is likewise 0.5 Nm, but at 10 Nm it is only 0.05 Nm.

Therefore, what appears to be a good uncertainty at full scale translates into a considerably higher percentage of uncertainty at the lower end of the tester’s range, as in Case 1.

As the two cases show, uncertainty stated relative to the full-scale value increases significantly, as a proportion of the reading, as you go down the range, whereas uncertainty stated as a percentage of the indicated value stays constant throughout the useful range of the unit under test (UUT).
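The two cases above can be sketched numerically. This is a hedged illustration using the 100 Nm tester and ±0.5% specification from the text; the function names are my own.

```python
FULL_SCALE = 100.0  # Nm, maximum of the torque tester
SPEC = 0.5          # the ±0.5% specification

def u_fsd(reading: float) -> float:
    """±0.5% FSD: a fixed absolute band, the same anywhere in the range."""
    return SPEC / 100.0 * FULL_SCALE

def u_reading(reading: float) -> float:
    """±0.5% of reading: shrinks in proportion to the reading."""
    return SPEC / 100.0 * reading

# Compare the two conventions as the reading moves down the range:
for r in (100.0, 50.0, 10.0):
    pct_of_reading = u_fsd(r) / r * 100.0
    print(f"{r:6.1f} Nm  FSD: ±{u_fsd(r):.2f} Nm ({pct_of_reading:.1f}% of reading)"
          f" | reading: ±{u_reading(r):.2f} Nm")
```

At 10 Nm the fixed FSD band is ±0.5 Nm, a full 5% of the reading, while the percent-of-reading band has shrunk to ±0.05 Nm.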

The same reasoning holds for the error of the equipment, whether expressed in percent of reading or percent of FSD.

Therefore, care should be taken in selecting the right equipment used for calibration in the laboratory and also while issuing calibration report for the calibrated equipment.

## Range of equipment

**a. Equipment with percentage of FSD:**

If an instrument’s accuracy is specified as percent FSD, the error has a fixed absolute value no matter where the reading falls within the full range.

Generally, when a manufacturer advertises % FSD, the range being depicted is from zero to full scale, as the examples below show:

**Example 1:** Suppose accuracy is stated as ±0.5% FSD for the range 0 to 100 Nm. Interestingly, even at 0 Nm the system is only accurate to within ±0.5 Nm, so expressed as a percentage of reading the error tends to infinity as the reading approaches zero.

**Example 2:** Suppose a weighing scale has a range of 0-100 kg and a full-scale accuracy of 99.5% FS (this is what “accuracy Full Scale” means). The percent full-scale error is then 100 - 99.5 = 0.5% FS, which can be construed as: any reading can be off by (0.5/100) × (100 - 0) = ±0.5 kg.
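The full-scale error formula in Example 2 can be written as a small helper. This is a sketch under the stated assumptions (0-100 kg span, 99.5% FS accuracy); the function name is illustrative.

```python
def fs_error_band(accuracy_pct_fs: float, range_max: float,
                  range_min: float = 0.0) -> float:
    """Absolute ± error band: (100 - accuracy)% of the span."""
    error_pct = 100.0 - accuracy_pct_fs        # e.g. 100 - 99.5 = 0.5 %FS
    return error_pct / 100.0 * (range_max - range_min)

print(fs_error_band(99.5, 100.0))  # ±0.5 kg at any reading on a 0-100 kg scale
```

Because the band is computed from the span, not the reading, the same ±0.5 kg applies whether the scale shows 5 kg or 95 kg.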

**b. Equipment with percentage of reading:** If, however, an instrument’s accuracy is specified as percent of reading (RD), the error is always the same percentage of the actual reading.

Generally, when a manufacturer advertises percent of reading, the useful range is given from a stated minimum to full scale, as the examples below show:

**Example 1:** Suppose accuracy is stated as ±0.5% of reading from 10 Nm to 100 Nm, i.e. from 10% to 100% of range. The lower limit exists because, for various practical reasons, a constant percentage error or uncertainty cannot be achieved below about 10% of full-scale deflection.

For this reason, systems whose accuracy relates to the indicated value (% of reading) should state the useful range, for example 10% to 100%, or 20% to 100%, as the case may be.

**Example 2:** Suppose a pressure gauge has a percent-of-reading accuracy of 99%, i.e. a % RD error of 1%. If the gauge reads 500 bar, the value could be off by ±(1/100) × 500 = ±5 bar.
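The percent-of-reading calculation in Example 2 can be sketched the same way; unlike the full-scale band, it is computed from the indicated value. Names are illustrative.

```python
def rd_error_band(accuracy_pct_rd: float, reading: float) -> float:
    """Absolute ± error band: (100 - accuracy)% of the indicated value."""
    return (100.0 - accuracy_pct_rd) / 100.0 * reading

print(rd_error_band(99.0, 500.0))  # ±5 bar when the gauge reads 500 bar
```

Halve the reading and the absolute band halves too, which is why the percentage of uncertainty stays constant across the useful range.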

Therefore, a tester with a 100 Nm maximum range should not be used below 10 Nm if the stated accuracy is to be achieved.

From the user’s point of view, it is therefore advisable to state whether the accuracy or uncertainty is in % of reading and which useful range it applies to. This saves the user from having to calculate the real error or uncertainty at any given point in the range.

## Error vs Uncertainty

It is important not to confuse the terms ‘error’ and ‘uncertainty’. Error is the difference between the measured value and the true value of the quantity being measured. Uncertainty is a quantification of the doubt about the measurement result. Whenever possible, we correct for known errors, for example by applying corrections from calibration certificates. But any error whose value is not known is a source of uncertainty.
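The distinction can be sketched in code: a known error from a calibration certificate is subtracted out, while the certificate's uncertainty remains attached to the corrected result. The numbers below are illustrative assumptions, not from the text.

```python
def apply_correction(indicated: float, known_error: float) -> float:
    """Corrected result = indicated value - known error."""
    return indicated - known_error

indicated = 100.25       # what the instrument reads (illustrative)
known_error = 0.25       # error reported on the calibration certificate
cert_uncertainty = 0.05  # doubt that remains even after correcting

corrected = apply_correction(indicated, known_error)
print(f"{corrected} ± {cert_uncertainty}")  # "100.0 ± 0.05"
```

The correction removes the known error; the uncertainty cannot be removed, only stated.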

**Importance of Uncertainty of Measurement:**

Why is uncertainty of measurement important? You may need to understand it simply because you want to make good-quality measurements and to interpret the results properly. There are, however, more compelling reasons for considering measurement uncertainty.

You may be making the measurements as part of a:

- **Calibration** – where the uncertainty of measurement must be reported on the certificate.

- **Test** – where the uncertainty of measurement is needed to determine a pass or fail, or to show that a measurement standard is met.

- **Tolerance** – where you need to know the uncertainty before you can decide whether the tolerance is met.

- **Interpretation** – where you may need to read and understand a calibration certificate or a written specification for a test or measurement.

**Important Note:** The calibration laboratory should check on the calibration certificate of its reference equipment whether the error and uncertainty are given in % FSD or in % of reading. When they are in % FSD, care should be taken to convert them into % of reading at the particular calibration point before calibrating the UUT.
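The conversion the note describes can be sketched as follows: a reference instrument's ±%FSD figure is re-expressed as % of reading at a chosen calibration point. The function name and the 20 Nm point are illustrative assumptions.

```python
def fsd_to_pct_of_reading(spec_pct_fsd: float, full_scale: float,
                          point: float) -> float:
    """Re-express a ±%FSD specification as % of reading at `point`."""
    absolute_band = spec_pct_fsd / 100.0 * full_scale  # fixed band, e.g. in Nm
    return absolute_band / point * 100.0               # as % of this reading

# A ±0.5% FSD, 100 Nm reference used at a 20 Nm calibration point:
print(fsd_to_pct_of_reading(0.5, 100.0, 20.0))  # 2.5 (% of reading)
```

At 20 Nm the nominal ±0.5% FSD reference is really a ±2.5% of-reading device, which is exactly why the check matters before issuing a calibration report.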

## Conclusion

Effective measurement techniques rest on these vital concepts:

- Distinguishing between error and uncertainty

- Recognizing that all measurements have an element of uncertainty

- Identifying types of error, sources of error, and how to detect and minimize error

- Estimating, describing, and expressing uncertainty in measurements and calculations

- Using uncertainty to describe the results of lab work

- Comparing measured values and determining whether they agree within the stated uncertainty

We have decades of experience and expertise in force, torque and pressure measuring and testing systems. When it comes to testing, measurement, and calibration of torque, force, and pressure parameters for any application, you can trust us to find a solution for the challenges you face.