Significance arithmetic

Significance arithmetic is a set of rules (sometimes called significant figure rules) for approximating the propagation of uncertainty in scientific or statistical calculations. These rules can be used to find the appropriate number of significant figures to use to represent the result of a calculation. If a calculation is done without analysis of the uncertainty involved, a result written with too many significant figures can be taken to imply a higher precision than is known, while a result written with too few significant figures entails an avoidable loss of precision. Applying these rules requires a clear grasp of which figures in a number are significant and which are not.

The rules of significance arithmetic are an approximation based on statistical rules for dealing with probability distributions. See the article on propagation of uncertainty for these more advanced and precise rules. Significance arithmetic rules rely on the assumption that the number of significant figures in the operands gives accurate information about the uncertainty of the operands and hence the uncertainty of the result. For alternatives, see interval arithmetic and floating-point error mitigation.

An important caveat is that significant figures apply only to measured values. Values known to be exact should be ignored for determining the number of significant figures that belong in the result. Examples of such values include:

  • integer counts (e.g., the number of oranges in a bag)
  • definitions of one unit in terms of another (e.g. a minute is 60 seconds)
  • actual prices asked or offered, and quantities given in requirement specifications
  • legally defined conversions, such as international currency exchange
  • scalar operations, such as "tripling" or "halving"
  • mathematical constants, such as π and e

Physical constants such as the gravitational constant, however, have a limited number of significant digits, because these constants are known to us only by measurement. On the other hand, c (speed of light) is exactly 299,792,458 m/s by definition.

Multiplication and division using significance arithmetic

When multiplying or dividing numbers, the result is rounded to the number of significant figures in the factor with the least significant figures. Here, the quantity of significant figures in each of the factors is important—not the position of the significant figures. For instance, using significance arithmetic rules:

  • 8 × 8 ≈ 6 × 10¹
  • 8 × 8.0 ≈ 6 × 10¹
  • 8.0 × 8.0 ≈ 64
  • 8.02 × 8.02 ≈ 64.3
  • 8 / 2.0 ≈ 4
  • 8.6 / 2.0012 ≈ 4.3
  • 2 × 0.8 ≈ 2

If, in the above, the numbers are assumed to be measurements (and therefore probably inexact) then "8" above represents an inexact measurement with only one significant digit. Therefore, the result of "8 × 8" is rounded to a result with only one significant digit, i.e., "6 × 10¹" instead of the unrounded "64" that one might expect. In many cases, the rounded result is less accurate than the non-rounded result; a measurement of "8" has an actual underlying quantity between 7.5 and 8.5. The true square would be in the range between 56.25 and 72.25. So 6 × 10¹ is the best one can give, as other possible answers give a false sense of accuracy. Further, the 6 × 10¹ is itself confusing (as it might be considered to imply 60 ± 5, which is over-optimistic; more accurate would be 64 ± 8).
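
As an illustrative sketch only (the helper names and the convention of passing each factor's significant-figure count explicitly are choices made for this example, not part of the standard rules), the multiplication and division rule can be mechanized as follows:

```python
# Minimal sketch of the multiplication/division rule: round the result to the
# significant-figure count of the least precise factor.
from math import floor, log10

def round_to_sig_figs(value, sig_figs):
    """Round value so that only sig_figs significant digits remain."""
    if value == 0:
        return 0.0
    exponent = floor(log10(abs(value)))           # order of magnitude of value
    return round(value, sig_figs - exponent - 1)  # position of the last kept digit

def multiply(a, sig_a, b, sig_b):
    """Multiply two measurements, keeping the smaller significant-figure count."""
    return round_to_sig_figs(a * b, min(sig_a, sig_b))

def divide(a, sig_a, b, sig_b):
    """Divide two measurements, keeping the smaller significant-figure count."""
    return round_to_sig_figs(a / b, min(sig_a, sig_b))

print(multiply(8, 1, 8, 1))         # 60   -> reported as 6 × 10¹
print(multiply(8.0, 2, 8.0, 2))     # 64.0
print(multiply(8.02, 3, 8.02, 3))   # 64.3
print(divide(8.6, 2, 2.0012, 5))    # 4.3
```

The significant-figure counts are supplied by hand because they cannot be recovered from a bare floating-point value (trailing zeros are ambiguous).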

Addition and subtraction using significance arithmetic

When adding or subtracting using significant figures rules, results are rounded to the position of the least significant digit in the most uncertain of the numbers being summed (or subtracted). That is, the result is rounded to the last digit that is significant in each of the numbers being summed. Here the position of the significant figures is important, but the quantity of significant figures is irrelevant. Some examples using these rules:

    1 + 1.1 ≈ 2
  • 1 is significant to the ones place, while 1.1 is significant to the tenths place. Of the two, the least precise is the ones place, so the answer cannot have any significant figures past the ones place.
    1.0 + 1.1 = 2.1
  • 1.0 and 1.1 are both significant to the tenths place, so the answer also has a digit in the tenths place.
    100 + 110 ≈ 200
  • The answer is 200, because the 100 is significant only to the hundreds place. The answer keeps a single significant digit, in the hundreds place, just like the first term in the sum.
    100. + 110. = 210.
  • 100. and 110. are both significant to the ones place (as indicated by the trailing decimal point), so the answer is also significant to the ones place.
    1×10² + 1.1×10² ≈ 2×10²
  • 1×10² is significant only to the hundreds place, while 1.1×10² is significant to the tens place. Of the two, the less precise is the hundreds place, so the answer should not have significant digits past the hundreds place.
    1.0×10² + 111 ≈ 2.1×10²
  • 1.0×10² is significant to the tens place, while 111 is significant to the ones place. The answer will have no significant figures past the tens place.
    123.25 + 46.0 + 86.26 ≈ 255.5
  • 123.25 and 86.26 are significant to the hundredths place, while 46.0 is significant only to the tenths place. The answer is therefore significant only to the tenths place.
    100 − 1 ≈ 100
  • The answer is 100, because the 100 is significant only to the hundreds place. It may seem counter-intuitive, but given that significant digits dictate the precision of the result, this follows from the standard rules.
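
As with multiplication, the addition and subtraction rule can be sketched in code. In the minimal example below (an assumed representation for illustration, not part of the rules themselves), each measurement is given as a value together with the decimal position of its last significant digit, using Python's rounding convention that negative positions denote tens, hundreds, and so on:

```python
# Minimal sketch of the addition/subtraction rule: round the sum to the least
# precise (coarsest) decimal position among the terms.
def add(measurements):
    """measurements: list of (value, position_of_last_significant_digit) pairs.
    Position 1 = tenths place, 0 = ones place, -2 = hundreds place, etc."""
    total = sum(value for value, _ in measurements)
    coarsest = min(position for _, position in measurements)  # least precise wins
    return round(total, coarsest)

print(add([(1.0, 0), (1.1, 1)]))                   # 2.0   (1 + 1.1 ≈ 2)
print(add([(1.0, 1), (1.1, 1)]))                   # 2.1
print(add([(100.0, 0), (110.0, 0)]))               # 210.0 (100. + 110. = 210.)
print(add([(123.25, 2), (46.0, 1), (86.26, 2)]))   # 255.5
print(add([(100.0, -2), (-1.0, 0)]))               # 100.0 (100 − 1 ≈ 100)
```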

Transcendental functions

Determining the number of significant figures in the result of a transcendental function, such as the logarithm, the exponential function, or the trigonometric functions, is more involved. The significance of the result depends on the condition number: in general, the number of significant figures for the result equals the number of significant figures for the input minus the order of magnitude of the condition number.

The condition number of a differentiable function f at a point x is |x·f′(x) / f(x)|; see Condition number: One variable for details. Note that if a function has a zero at a point, its condition number at that point is infinite, as infinitesimal changes in the input can change the output from zero to non-zero, yielding a ratio with zero in the denominator and hence an infinite relative change. The condition numbers of the most commonly used functions are as follows;[1] these can be used to compute significant figures for all elementary functions:

  • Exponential function exp(x): |x|
  • Natural logarithm function ln(x): |1 / ln(x)|
  • Sine function sin(x): |x cot(x)|
  • Cosine function cos(x): |x tan(x)|
  • Tangent function tan(x): |x (tan(x) + cot(x))|
  • Inverse sine function arcsin(x): |x / (√(1 − x²) arcsin(x))|
  • Inverse cosine function arccos(x): |x / (√(1 − x²) arccos(x))|
  • Inverse tangent function arctan(x): |x / ((1 + x²) arctan(x))|
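
These entries can be checked numerically from the definition of the condition number. The sketch below is an illustration only (the helper names and the finite-difference step are choices made for this example): it approximates |x·f′(x) / f(x)| with a central difference and converts the result into an estimate of the significant figures lost.

```python
# Minimal sketch: estimate the condition number |x * f'(x) / f(x)| numerically
# and the corresponding number of significant figures lost, log10(C).
import math

def condition_number(f, x, h=1e-6):
    """Estimate |x * f'(x) / f(x)| using a central finite difference for f'(x)."""
    derivative = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * derivative / f(x))

def digits_lost(f, x):
    """Approximate number of significant figures lost when evaluating f at x."""
    return math.log10(condition_number(f, x))

# exp has condition number |x|: near x = 100 about two digits are lost.
print(condition_number(math.exp, 100.0))   # ~100
print(digits_lost(math.exp, 100.0))        # ~2
# sin is ill-conditioned near x = 3.14 because sin(x) is close to zero there.
print(condition_number(math.sin, 3.14))    # ~1972
```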

The fact that the number of significant figures for the result is equal to the number of significant figures for the input minus the logarithm of the condition number can be derived from first principles. Let x and y = f(x) be the true values, and let x + Δx and y + Δy be approximate values with errors Δx and Δy respectively. Then

    Δy = f(x + Δx) − f(x) ≈ f′(x)·Δx,

and dividing by y = f(x) gives

    Δy / y ≈ (x·f′(x) / f(x)) · (Δx / x) = C · (Δx / x),

where C is the condition number of f at x.

The number of significant figures of a number is related to its relative error by: significant figures ≈ −log₁₀ |Δx / x|. Substituting this into the relation above gives

    −log₁₀ |Δy / y| ≈ −log₁₀ |Δx / x| − log₁₀ C,

that is, the result carries roughly as many significant figures as the input, minus log₁₀ of the condition number.
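
As a worked illustration of this relation (the numbers are chosen for this example, not taken from a source): suppose x = 1.02 is known to 3 significant figures. The condition number of ln(x) at that point is |1 / ln(1.02)| ≈ 51, and log₁₀(51) ≈ 1.7, so ln(1.02) ≈ 0.0198 carries only about 3 − 1.7 ≈ 1 significant figure. Checking directly: x could lie anywhere between 1.015 and 1.025, so ln(x) could lie anywhere between roughly 0.0149 and 0.0247, and only the leading digit is meaningful.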

Rounding rules

Because significance arithmetic involves rounding, it is useful to understand a specific rounding rule that is often used in scientific calculations: the round-to-even rule (also called banker's rounding), which is especially helpful when dealing with large data sets.

This rule helps to eliminate the upward skewing of data when using traditional rounding rules. Whereas traditional rounding always rounds up when the digit to be dropped is a 5 with nothing after it, round-to-even sends such ties to the nearest even digit, so roughly half of them round down, eliminating the upward bias.

See the article on rounding for more information on rounding rules and a detailed explanation of the round-to-even rule.
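
As a small illustration, Python's decimal module offers a ROUND_HALF_EVEN mode that implements this rule (the helper name below is made up for this example):

```python
# Minimal illustration of round-half-to-even (banker's rounding) using the
# decimal module's ROUND_HALF_EVEN rounding mode.
from decimal import Decimal, ROUND_HALF_EVEN

def round_half_even(value, quantum="1"):
    """Round a decimal string to the given quantum, ties going to the even digit."""
    return Decimal(value).quantize(Decimal(quantum), rounding=ROUND_HALF_EVEN)

print(round_half_even("0.5"))          # 0   (tie rounds to the even neighbour)
print(round_half_even("1.5"))          # 2
print(round_half_even("2.5"))          # 2   (traditional rounding would give 3)
print(round_half_even("3.5"))          # 4
print(round_half_even("2.45", "0.1"))  # 2.4
```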

Disagreements about importance

Significant figures are used extensively in high school and undergraduate courses as a shorthand for the precision with which a measurement is known. However, significant figures are not a perfect representation of uncertainty, and are not meant to be. Instead, they are a useful tool for avoiding expressing more information than the experimenter actually knows, and for avoiding rounding numbers in such a way as to lose precision.

For example, here are some important differences between significant figure rules and uncertainty:

  • Uncertainty is not the same as a mistake. If the outcome of a particular experiment is reported as 1.234±0.056, it does not mean the observer made a mistake; it may be that the outcome is inherently statistical, and is best described by an expression showing only the significant digits, i.e., the known digits plus one uncertain digit, in this case 1.23±0.06. To describe that outcome as 1.234 would be incorrect under these circumstances, even though it appears to express less uncertainty.
  • Uncertainty is not the same as insignificance, and vice versa. An uncertain number may be highly significant (example: signal averaging). Conversely, a completely certain number may be insignificant.
  • Significance is not the same as significant digits. Digit-counting is not as rigorous a way to represent significance as specifying the uncertainty separately and explicitly (such as 1.234±0.056).
  • Manual, algebraic propagation of uncertainty—the nominal topic of this article—is possible, but challenging. Alternative methods include the crank three times method and the Monte Carlo method. Another option is interval arithmetic, which can provide a strict upper bound on the uncertainty, but generally not a tight one (i.e. it does not provide a best estimate of the uncertainty). For most purposes, Monte Carlo is more useful than interval arithmetic; a minimal sketch follows this list. Kahan considers significance arithmetic to be unreliable as a form of automated error analysis.[2]
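
The following is a minimal Monte Carlo sketch of the idea (the Gaussian input model, sample size, and example measurement values are assumptions made for this illustration, not prescriptions from the text): sample the inputs from their assumed uncertainty distributions, push each sample through the calculation, and summarize the spread of the outputs.

```python
# Minimal Monte Carlo uncertainty propagation: inputs are modeled as
# independent Gaussians (mean, standard deviation).
import random
import statistics

def monte_carlo(func, inputs, n=50_000):
    """inputs: list of (mean, standard_deviation) pairs, assumed Gaussian."""
    results = [func(*[random.gauss(mu, sigma) for mu, sigma in inputs])
               for _ in range(n)]
    return statistics.mean(results), statistics.stdev(results)

# Example: the area of a rectangle measured as 8.0 ± 0.05 by 8.0 ± 0.05.
mean, spread = monte_carlo(lambda a, b: a * b, [(8.0, 0.05), (8.0, 0.05)])
print(f"{mean:.1f} +/- {spread:.1f}")   # roughly 64.0 +/- 0.6
```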

In order to explicitly express the uncertainty in any uncertain result, the uncertainty should be given separately, as an uncertainty interval together with a confidence level. The expression 1.23 U95 = 0.06 implies that the true (unknowable) value of the variable is expected to lie in the interval from 1.17 to 1.29 with at least 95% confidence. If the confidence level is not specified, it has traditionally been assumed to be 95%, corresponding to two standard deviations from the mean. Confidence levels of one standard deviation (68%) and three standard deviations (99.7%) are also commonly used.

References

  1. Harrison, John (June 2009). "Decimal Transcendentals via Binary" (PDF). IEEE. Retrieved 2019-12-01.
  2. Kahan, William (1 March 1998). "How Java's Floating-Point Hurts Everyone Everywhere" (PDF). pp. 37–39.

Further reading

  • Delury, D. B. (1958). "Computations with approximate numbers". The Mathematics Teacher. 51 (7): 521–30. JSTOR 27955748.
  • Bond, E. A. (1931). "Significant Digits in Computation with Approximate Numbers". The Mathematics Teacher. 24 (4): 208–12. JSTOR 27951340.
  • ASTM E29-06b, Standard Practice for Using Significant Digits in Test Data to Determine Conformance with Specifications