Lesson II: Measurement





Preliminary Discussion

Measurement is an integral component of scientific observation. Measurements are performed to explore the characteristics of unknown phenomena and to corroborate theoretical conjecture. To perform measurements, scientists employ "instruments." Measuring devices are called instruments because, much like musical instruments, they must be calibrated against known standards to ensure acceptable performance. This is analogous to using a tuning fork to verify that the notes of a piano are at the correct pitch.

It is next important to distinguish between accuracy and precision in making measurements. Contrary to common usage, these words are not interchangeable, nor are they mutually exclusive. In a scientific context, accuracy describes the agreement of a particular measurement with the correct value: an accurate value is a true value. Precision describes the reproducibility of a measurement: the more precise a measurement, the more closely the values agree on repeated readings.

Measurements are much more likely to be completely precise than completely accurate. Given the limitations of most measuring devices, particularly those we use in introductory chemistry, measurements are typically only certain to a few significant figures. Within the limitations of these devices, repeated measurements will very often be reproducible. On the other hand, for these same reasons, no measurement is totally accurate. We simply cannot "see" all possible decimal places using any measurement tool. When scientists study new phenomena, there are no benchmarks by which to tell whether measurements are accurate. In this instance, a scientist has no way of telling whether a value is accurate, other than that it is precise upon repeated measurement by that scientist or by others.
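The distinction can be illustrated numerically. In the Python sketch below, five hypothetical repeated readings (the numbers are invented for illustration) cluster tightly, so the measurement is precise, yet their average sits well away from the assumed true value, so it is not accurate:

```python
import statistics

# Hypothetical repeated readings of a volume whose true value is 25.00 mL.
true_value = 25.00
readings = [24.31, 24.29, 24.30, 24.32, 24.30]  # tightly clustered, but offset

precision = statistics.stdev(readings)                     # small spread = precise
accuracy_offset = statistics.mean(readings) - true_value   # large offset = inaccurate

print(f"spread (precision): {precision:.3f} mL")
print(f"offset from true value (accuracy): {accuracy_offset:.2f} mL")
```

The spread of roughly a hundredth of a milliliter shows good precision, while the offset of about 0.7 mL shows poor accuracy.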

So there is always a degree of error in the accuracy of a measurement. This error is called a random error, because the measurement is equally likely to be high or low. It is sometimes called an indeterminate error, because it arises solely from the limitations of the measurement tool. The last significant figure of any measured quantity carries this random error.

A measurement may also have a systematic error. This is the result of either a faulty measurement technique or an inappropriate measuring device. A systematic error is simply wrong. It is sometimes called a determinate error, because it can be corrected with a properly functioning measuring device or the correct choice of apparatus.

The dartboards to the left provide classic examples of both the differences in precision and accuracy, and the differences in random and systematic errors. The top board shows large systematic and large random errors. The values are simply all over the place. The middle board displays small random errors, but large systematic errors. The measurement is reproducible but not true. The bottom dartboard represents the case of both small random and small systematic errors.

  • A measurement is PRECISE if it is reproducible after multiple attempts at reading it.
  • A measurement is ACCURATE if it is close to the true value.
  • A RANDOM ERROR may be equally high or low. This error can be minimized but is unavoidable.
  • A SYSTEMATIC ERROR is consistently wrong because of limitations of experimental design or apparatus.
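A short simulation (a Python sketch with invented numbers) shows how the two kinds of error behave differently on repeated measurement: random error scatters symmetrically and averages out, while systematic error leaves a constant offset that no amount of repetition removes:

```python
import random

random.seed(42)
true_value = 100.0

# Random error only: readings scatter symmetrically about the true value.
random_only = [true_value + random.uniform(-1, 1) for _ in range(1000)]

# Systematic error: a miscalibrated device adds a constant bias on top.
bias = 5.0
with_bias = [true_value + bias + random.uniform(-1, 1) for _ in range(1000)]

mean_random = sum(random_only) / len(random_only)
mean_biased = sum(with_bias) / len(with_bias)
print(round(mean_random, 1))  # averages toward the true value
print(round(mean_biased, 1))  # remains offset by the bias
```

Averaging many readings reduces the random error, but the systematic error of 5 units survives intact, which is why it must be fixed at the apparatus, not by repetition.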



Reporting Measurements

As we discussed above, there is ultimately a degree of error in every measurement. A trained scientist understands that reported values inherently contain this error and, because it is due to limitations of the apparatus, that it is a random error. When reporting measurements, the lost accuracy is called the uncertainty of the measurement. We will assume that this uncertainty is no more than ± one unit in the last reported place. For example, a reported value of 1.34 mL is assumed to actually lie somewhere between 1.33 and 1.35 mL.
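The ± one unit convention can be expressed as a small helper. The function below is a hypothetical sketch: it reads the last reported decimal place of a value and returns the implied range:

```python
from decimal import Decimal

def uncertainty_range(reported: str):
    """Range implied by +/- one unit in the last reported place."""
    value = Decimal(reported)
    step = Decimal(1).scaleb(value.as_tuple().exponent)  # one unit in the last place
    return value - step, value + step

low, high = uncertainty_range("1.34")
print(low, high)  # 1.33 1.35
```

Note that the value must be passed as a string, because the written form carries the information about which place is uncertain; writing 340 mL in scientific notation as "3.4E2" correctly yields the range 330 to 350 mL.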

When you are the measurer, you must ensure your values are reported to correctly reflect uncertainty. We will consider several examples. First, when using electronic devices, you simply report all places displayed on the electronic readout. Assuming no systematic errors, there is still a random error of ± one unit in the last digit of the readout.



When reading normal laboratory apparatus, such as glassware or meter sticks, it is important to first align your eye level with the mark at which you are making the measurement. Otherwise, you introduce a parallax error by attempting to read the mark from an angle. Looking "down" at a mark shifts the reading toward lower values, in proportion to the angle your eye makes with the mark. Looking "up" at a mark similarly shifts the reading toward higher values.

When reading liquids in volumetric glassware, it is standard practice to make measurements from the bottom of the meniscus. The meniscus is the interface between the liquid surface and the air. For aqueous solutions, the meniscus appears concave, to a degree dependent on the distance between the sides of the glassware. This is because the polar water molecules adhere to the glass surface and draw others up the sides of the container. To avoid parallax error, you then align your eye with the bottom of the meniscus to read the graduations, or volume markings, on the glassware.


EXAMPLE I. The beaker to the left is ruled with graduations of 100 mL. We note that the meniscus lies between the 300 and 400 mL graduations, and assume these to be accurate, but note that the device supplies no information about the tens of milliliters. This is where our uncertainty is. We estimate that the meniscus lies 40 mL above the 300 mark, so that the reported value is 340 mL. Actually, using the proper rules of significant figures, the measurement should be reported in scientific notation as 3.4 × 10² mL. This value includes an inherent random error, with the true value most likely in the range between 330 mL and 350 mL. Often, the value is presented in the form: 340 ± 10 mL.


EXAMPLE II. The beaker to the left is ruled with graduations of 10 mL. The uncertainty in this measuring device is then one place past this, or in the ones position. We note that the meniscus lies between the 10 mL and 20 mL graduations. Estimating one place past what is known for certain, the volume is 1 mL above the lower mark. The measurement then should be reported as 11 mL, and it is understood that there is uncertainty in the last position of ± 1 mL. The true value is between 10 and 12 mL, and is often written 11 ± 1 mL.


EXAMPLE III. As a final example, let us consider the meter stick to the left. It is ruled with graduations of 0.1 centimeter (cm) (or 1 millimeter (mm)). The uncertainty is then one place beyond this, or in the hundredths of centimeters. Following our rules, we then report our measurement by estimating to the best of our ability in this position. Suppose we are to make a measurement at the arrow. We note that it lies at what appears to be exactly the 2.6 cm mark. This value is then correctly reported as 2.60 cm, which reflects the fact that the measuring device is uncertain in the hundredths position. The value including uncertainty is 2.60 ± 0.01 cm, meaning the true value most likely lies between 2.59 and 2.61 cm.

In conclusion, to properly make and report a measurement:
  • Use the proper line of sight on the apparatus.
  • Determine the scale to which the apparatus is graduated.
  • The uncertainty in measurement is one place beyond graduations.
  • Report the measurement one place past the graduation scale - to the place which is uncertain.
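These steps can be sketched as a small helper function. The name report and the rule that the uncertainty is one tenth of the graduation spacing are illustrative assumptions, chosen to be consistent with the three examples above:

```python
import math

def report(reading: float, graduation: float) -> str:
    """Sketch: format a reading one place past the graduation scale,
    with an implied uncertainty of one unit in that place."""
    uncertainty = graduation / 10  # one place past the graduations
    # Decimal places needed to show the uncertain position.
    decimals = max(0, -int(math.floor(math.log10(uncertainty))))
    return f"{reading:.{decimals}f} ± {uncertainty:.{decimals}f}"

print(report(340, 100))   # Example I   -> 340 ± 10
print(report(11, 10))     # Example II  -> 11 ± 1
print(report(2.60, 0.1))  # Example III -> 2.60 ± 0.01
```

In each case the reading is written to exactly one place past the graduation scale, so the last reported figure is the estimated, uncertain one.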






Copyright © 2007 Southeastern Louisiana University
ALL RIGHTS RESERVED.
Unofficial and external sites are not endorsed by Southeastern Louisiana University.