Catastrophic cancellation

In numerical analysis, catastrophic cancellation[1][2] is the phenomenon that subtracting good approximations to two nearby numbers may yield a very bad approximation to the difference of the original numbers.

For example, suppose you have two wood studs, one 254.5 cm long and the other 253.5 cm long. If you measure them with a ruler that is good only to the centimeter, you might get the approximations 255 cm and 253 cm. Depending on your needs, these may be good approximations, in relative error, to the true lengths: the approximations are in error by less than 2% of the true lengths, since 0.5 cm is only about 0.2% of 254.5 cm.

However, if you subtract the approximate lengths, you will get 255 cm − 253 cm = 2 cm, even though the true difference between the lengths is 254.5 cm − 253.5 cm = 1 cm. The difference of the approximations, 2 cm, is in error by 100% of the magnitude of the difference of the true values, 1 cm.

Catastrophic cancellation may happen even if the difference is computed exactly, as in the example above: it is not a property of any particular kind of arithmetic like floating-point arithmetic; rather, it is inherent to subtraction when the inputs are approximations themselves. Indeed, in floating-point arithmetic, when the inputs are close enough, the floating-point difference is computed exactly, by the Sterbenz lemma, so no rounding error is introduced by the floating-point subtraction operation itself.
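For reference, the Sterbenz lemma (a standard result, stated here for convenience) says that if x and y are floating-point numbers in the same format with y/2 ≤ x ≤ 2y, then x − y is itself representable in that format, so fl(x − y) = x − y with no rounding error.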

Formal analysis

Formally, catastrophic cancellation happens because subtraction is ill-conditioned at nearby inputs: even if the approximations x̃ = x(1 + δ_x) and ỹ = y(1 + δ_y) have small relative errors δ_x and δ_y from the true values x and y, respectively, the relative error of the approximate difference x̃ − ỹ from the true difference x − y is inversely proportional to the true difference:

x̃ − ỹ = x(1 + δ_x) − y(1 + δ_y) = (x − y) + x δ_x − y δ_y = (x − y) (1 + (x δ_x − y δ_y)/(x − y)).

Thus, the relative error of the exact difference x̃ − ỹ of the approximations from the difference x − y of the true numbers is

|(x δ_x − y δ_y)/(x − y)|,

which can be arbitrarily large if the true inputs x and y are close.
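As a quick check, the wood-stud example above fits this formula: with x = 254.5 cm, y = 253.5 cm, δ_x = +0.5/254.5 ≈ +0.2%, and δ_y = −0.5/253.5 ≈ −0.2%,

|(x δ_x − y δ_y)/(x − y)| = (0.5 cm + 0.5 cm)/(1 cm) = 1,

a relative error of 100% in the difference, exactly as observed.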

In numerical algorithms

Subtracting nearby numbers in floating-point arithmetic does not always cause catastrophic cancellation, or even any error: by the Sterbenz lemma, if the numbers are close enough the floating-point difference is exact. But cancellation may amplify errors in the inputs that arose from rounding in other floating-point arithmetic.

Example: Difference of squares

Given numbers x and y, the naive attempt to compute the mathematical function x² − y² by the floating-point arithmetic fl(fl(x·x) − fl(y·y)) is subject to catastrophic cancellation when x and y are close in magnitude, because the subtraction will amplify the rounding errors in the squarings. The alternative factoring (x + y)(x − y), evaluated by the floating-point arithmetic fl(fl(x + y)·fl(x − y)), avoids catastrophic cancellation because it avoids introducing rounding error leading into the subtraction.[2]

For example, if x = 1 + 2⁻²⁹ and y = 1 + 2⁻³⁰, then the true value of the difference x² − y² is 2⁻²⁹ + 3·2⁻⁶⁰ ≈ 1.8626451518330422×10⁻⁹. In IEEE 754 binary64 arithmetic, evaluating the alternative factoring (x + y)(x − y) gives this result exactly (with no rounding), but evaluating the naive expression x² − y² gives the floating-point number 2⁻²⁹ ≈ 1.8626451492309570×10⁻⁹, of which only about the first half of the digits are correct and the rest are garbage.
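The effect is easy to reproduce. The following C program is a minimal sketch (the hexadecimal literals 0x1p-29 and 0x1p-30 denote the exact values 2⁻²⁹ and 2⁻³⁰):

#include <stdio.h>

int main(void) {
    double x = 1 + 0x1p-29;                 /* exactly representable in binary64 */
    double y = 1 + 0x1p-30;
    double naive    = x*x - y*y;            /* squarings are rounded; the subtraction amplifies that */
    double factored = (x + y) * (x - y);    /* every operation here happens to be exact */
    printf("naive:    %.17g\n", naive);     /* 1.86264514923...e-09: half the digits garbage */
    printf("factored: %.17g\n", factored);  /* 1.86264515183...e-09: the exact value */
    return 0;
}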

Example: Complex arcsine

When computing the complex arcsine function, one may be tempted to use the logarithmic formula directly:

arcsin(z) = −i log(iz + √(1 − z²)).

However, suppose z = iy for some large real y > 0. Then iz = −y and √(1 − z²) = √(1 + y²), two numbers of nearly equal magnitude and opposite sign; call their sum

ε = √(1 + y²) − y,

a very small quantity, nearly zero. If √(1 − z²) is evaluated in floating-point arithmetic giving

fl(√(1 − z²)) = √(1 + y²)(1 + δ)

with any error δ ≠ 0, where fl(⋅) denotes floating-point rounding, then computing the difference

√(1 + y²)(1 + δ) − y = ε + √(1 + y²)·δ

of two nearby numbers, both very close to y, may amplify the error δ in one input by a factor of √(1 + y²)/ε ≈ 2y², a very large factor because ε was nearly zero. For instance, for a large input such as y ≈ 10⁶, the naive logarithmic formula evaluated in IEEE 754 binary64 arithmetic may give a result with only about five out of sixteen digits correct, the rest being garbage.

In the case of z = iy for y > 0, using the identity arcsin(z) = −arcsin(−z) avoids the cancellation: for the reflected argument −z = −iy, the two terms inside the logarithm are i(−z) = y > 0 and √(1 − (−z)²) = √(1 + y²) > 0, so the subtraction is effectively an addition of numbers with the same sign, which does not cancel.
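The following C sketch compares the naive formula with the standard library's casin (assuming a C99-style <complex.h> environment; the test value y = 10⁶ is illustrative, and the library routine is used only as a reference):

#include <stdio.h>
#include <complex.h>

int main(void) {
    double y = 1.0e6;                       /* illustrative large imaginary part */
    double complex z = y * I;
    /* Naive logarithmic formula: the sum inside clog suffers cancellation. */
    double complex naive = -I * clog(I*z + csqrt(1.0 - z*z));
    /* Library arcsine, typically implemented to avoid this cancellation. */
    double complex lib = casin(z);
    printf("naive formula: %.17g\n", cimag(naive));
    printf("casin:         %.17g\n", cimag(lib));
    /* On a typical binary64 system the two imaginary parts agree only in
       roughly their first five digits. */
    return 0;
}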

Example: Radix conversion

Numerical constants in software programs are often written in decimal, such as in the C fragment double x = 1.000000000000001; to declare and initialize an IEEE 754 binary64 variable named x. However, 1.000000000000001 is not a binary64 floating-point number; the nearest one, which x will be initialized to in this fragment, is 1 + 5·2⁻⁵² ≈ 1.0000000000000011102. Although the radix conversion from decimal floating-point to binary floating-point only incurs a small relative error, catastrophic cancellation may amplify it into a much larger one:

double x = 1.000000000000001;  // rounded to 1 + 5*2^{-52}
double y = 1.000000000000002;  // rounded to 1 + 9*2^{-52}
double z = y - x;              // difference is exactly 4*2^{-52}

The difference z is 4·2⁻⁵² = 2⁻⁵⁰ ≈ 8.88×10⁻¹⁶. The relative errors of x from 1.000000000000001 and of y from 1.000000000000002 are both below 2⁻⁵² ≈ 2.22×10⁻¹⁶, and the floating-point subtraction y - x is computed exactly by the Sterbenz lemma.

But even though the inputs are good approximations, and even though the subtraction is computed exactly, the difference of the approximations, 4·2⁻⁵² ≈ 8.88×10⁻¹⁶, has a relative error of over 11% from the difference 10⁻¹⁵ of the original values as written in decimal: catastrophic cancellation amplified a tiny error in radix conversion into a large error in the output.
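A complete, runnable version of the fragment above, printing the relative error against the decimal values as written (a minimal sketch; the variable intended is introduced here only for the comparison):

#include <stdio.h>

int main(void) {
    double x = 1.000000000000001;  /* rounded to 1 + 5*2^{-52} */
    double y = 1.000000000000002;  /* rounded to 1 + 9*2^{-52} */
    double z = y - x;              /* exactly 4*2^{-52}, by the Sterbenz lemma */
    double intended = 1e-15;       /* difference of the constants as written in decimal */
    printf("z              = %.17g\n", z);
    printf("intended       = %.17g\n", intended);
    printf("relative error = %g\n", (intended - z) / intended);  /* about 0.11 */
    return 0;
}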

Benign cancellation

Cancellation is sometimes useful and desirable in numerical algorithms. For example, the 2Sum and Fast2Sum algorithms both rely on such cancellation, after a rounding error has occurred, in order to compute exactly, as a floating-point number, the error of a floating-point addition operation.
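For reference, here is a minimal C sketch of Fast2Sum (Dekker's algorithm); the function name fast_two_sum is illustrative, and the code assumes round-to-nearest binary64 arithmetic with |a| ≥ |b|:

#include <stdio.h>

/* Fast2Sum: assuming |a| >= |b| and round-to-nearest arithmetic, returns
   s = fl(a + b) and stores in *err the exact rounding error a + b - s. */
static double fast_two_sum(double a, double b, double *err) {
    double s = a + b;   /* rounded sum */
    double z = s - a;   /* the part of b that actually made it into s; computed exactly */
    *err = b - z;       /* benign cancellation recovers the rounding error exactly */
    return s;
}

int main(void) {
    double err;
    double s = fast_two_sum(1.0, 1e-20, &err);
    printf("s = %g, err = %g\n", s, err);  /* s = 1, err = 1e-20 */
    return 0;
}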

The function f(x) = log(1 + x), if evaluated naively at points x near 0 as fl(log(fl(1 + x))), will lose most of the digits of x in the rounding of 1 + x. However, the function itself is well-conditioned at inputs near 0. Rewriting it as

log(1 + x) = x · log(1 + x)/((1 + x) − 1),

where both occurrences of 1 + x are evaluated with the same rounded value fl(1 + x), exploits cancellation in fl(1 + x) − 1 to undo the error.[2] This works when x is near zero (though not exactly zero) because fl(1 + x) is near 1, so log(fl(1 + x)) ≈ fl(1 + x) − 1, and thus the numerator and denominator of the quotient cancel, leaving only the factor x as desired.
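A minimal C sketch of this rewriting (the function name log1p_rewritten is illustrative; the standard library's log1p is printed only for comparison):

#include <math.h>
#include <stdio.h>

/* log(1 + x) for x near zero, via the rewriting described above. */
static double log1p_rewritten(double x) {
    double u = 1.0 + x;                /* rounded; low-order digits of x are lost here */
    if (u == 1.0)
        return x;                      /* 1 + x rounded to 1: log(1 + x) ~= x */
    return x * (log(u) / (u - 1.0));   /* u - 1.0 cancels benignly, undoing the error */
}

int main(void) {
    double x = 1e-10;
    printf("naive:     %.17g\n", log(1.0 + x));      /* several trailing digits wrong */
    printf("rewritten: %.17g\n", log1p_rewritten(x));
    printf("log1p:     %.17g\n", log1p(x));          /* library reference */
    return 0;
}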

References

  1. Muller, Jean-Michel; Brunie, Nicolas; de Dinechin, Florent; Jeannerod, Claude-Pierre; Joldes, Mioara; Lefèvre, Vincent; Melquiond, Guillaume; Revol, Nathalie; Torres, Serge (2018). Handbook of Floating-Point Arithmetic (2nd ed.). Cham, Switzerland: Birkhäuser. p. 102. doi:10.1007/978-3-319-76526-6. ISBN 978-3-319-76525-9.
  2. Goldberg, David (March 1991). "What every computer scientist should know about floating-point arithmetic". ACM Computing Surveys. New York, NY, United States: Association for Computing Machinery. 23 (1): 5–48. doi:10.1145/103162.103163. ISSN 0360-0300. Retrieved 2020-09-17.