Simple rational approximation

Simple rational approximation (SRA) is a subset of interpolating methods using rational functions. In particular, SRA interpolates a given function by a rational function whose poles and zeros are simple, that is, with no multiplicity in the poles and zeros. Sometimes the term implies only simple poles.

The main application of SRA lies in finding the zeros of secular functions. A divide-and-conquer algorithm for finding the eigenvalues and eigenvectors of various kinds of matrices is well known in numerical analysis. In a strict sense, SRA means the specific interpolation by simple rational functions used as a part of such a divide-and-conquer algorithm. Since secular functions consist of a series of rational terms with simple poles, SRA is the best candidate for interpolating the zeros of the secular function. Moreover, based on previous research, a simple zero that lies between two adjacent poles can be interpolated considerably well by using a two-dominant-pole rational function as the approximating function.
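
To make the setting concrete, the sketch below builds a secular function of the standard form f(x) = 1 + sum_j c_j/(d_j - x) that arises in divide-and-conquer eigensolvers and brackets one of its simple zeros between two adjacent poles. The poles d_j and weights c_j are made-up illustrative values, not data from any of the cited references.

    import numpy as np

    # Illustrative secular function f(x) = 1 + sum_j c_j / (d_j - x),
    # with simple poles at the d_j.  Poles and weights are hypothetical.
    d = np.array([1.0, 2.0, 3.0, 4.0])   # pole locations
    c = np.array([0.3, 0.5, 0.2, 0.4])   # positive weights

    def f(x):
        return 1.0 + np.sum(c / (d - x))

    # f increases from -inf to +inf on each interval between adjacent poles,
    # so exactly one simple zero lies in (d[0], d[1]); bisection brackets it.
    lo, hi = d[0] + 1e-9, d[1] - 1e-9
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    print("zero between d[0] and d[1]:", 0.5 * (lo + hi))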

One-point third-order iterative method: Halley's formula

The origin of interpolation with rational functions can be found in the earlier work of Edmond Halley. Halley's formula is known as a one-point third-order iterative method for solving f(x) = 0 by means of approximating f by a rational function defined by

    h(z) = a/(z + b) + c.

We can determine a, b, and c so that

    h(x_n) = f(x_n),    h'(x_n) = f'(x_n),    h''(x_n) = f''(x_n),

where x_n is the current iterate.

Then solving h(z) = 0 yields the iteration

    x_{n+1} = x_n - (f(x_n)/f'(x_n)) / (1 - f(x_n) f''(x_n) / (2 f'(x_n)^2)).

This is referred to as Halley's formula. This geometrical interpretation was derived by Gander (1978), where the equivalent iteration was also derived by applying Newton's method to

    g(x) = f(x) / sqrt(f'(x)) = 0.

We call this the algebraic interpretation of Halley's formula.
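
As a concrete check of the formula above, the following sketch applies Halley's iteration to a sample smooth function with a simple root; the test function x^3 - 2x - 5 and the starting guess are arbitrary choices for illustration.

    # Halley's formula on an illustrative test function with a simple root.
    def f(x):   return x**3 - 2.0*x - 5.0
    def df(x):  return 3.0*x**2 - 2.0
    def ddf(x): return 6.0*x

    x = 2.0
    for _ in range(5):
        fx, dfx, ddfx = f(x), df(x), ddf(x)
        # Newton step divided by Halley's correction term
        x = x - (fx / dfx) / (1.0 - fx * ddfx / (2.0 * dfx**2))
        print(x)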

One-point second-order iterative method: Simple rational approximation

Similarly, we can derive a variation of Halley's formula as a one-point second-order iterative method for solving f(x) = 0, in which f is approximated by a simple rational function whose two free parameters are determined from the interpolation conditions below.

To fix these parameters we only need to evaluate f(x_n) and f'(x_n), imposing

    h(x_n) = f(x_n),    h'(x_n) = f'(x_n).

Solving h(z) = 0 for the fitted interpolant then yields the next iterate x_{n+1}.

As with Halley's formula, the algebraic interpretation of this iteration is obtained by applying Newton's method to an equivalent transformed equation g(x) = 0.

This one-point second-order method is known to show locally quadratic convergence if the root of the equation is simple. In the strict sense, SRA refers to this one-point second-order interpolation by a simple rational function.
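
The text above does not pin down the particular simple rational interpolant. One common concrete choice for secular equations, shown below purely as an assumption rather than as the exact formula intended above, pins the interpolant's pole at a pole d1 of f that brackets the sought zero, h(z) = c + s/(d1 - z), determines c and s from f(x_n) and f'(x_n), and takes the zero of h as the next iterate.

    import numpy as np

    # One-point second-order SRA sketch for a secular equation, under the
    # assumption that the interpolant is h(z) = c + s/(d1 - z) with its pole
    # pinned at a pole d1 of f adjacent to the sought zero (an illustrative
    # choice, not necessarily the form meant in the text above).
    d = np.array([1.0, 2.0, 3.0, 4.0])   # poles of f (hypothetical values)
    w = np.array([0.3, 0.5, 0.2, 0.4])   # weights of f (hypothetical values)

    def f(x):  return 1.0 + np.sum(w / (d - x))
    def df(x): return np.sum(w / (d - x)**2)

    d1 = d[0]                    # left pole bracketing the targeted zero
    x = 1.5                      # starting guess inside (d[0], d[1])
    for _ in range(8):
        fx, dfx = f(x), df(x)
        s = dfx * (d1 - x)**2    # from h'(x) = s/(d1 - x)^2 = f'(x)
        c = fx - s / (d1 - x)    # from h(x)  = c + s/(d1 - x) = f(x)
        x = d1 + s / c           # zero of h: c + s/(d1 - z) = 0
        print(x)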

Note that even the third-order method is a variation of Newton's method: in each case the Newton step is multiplied by a correction factor. These factors are called the convergence factors of the variations and are useful for analyzing the rate of convergence; see Gander (1978).
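
To make the convergence-factor view concrete, the short sketch below prints the factor that multiplies the Newton step in Halley's formula along an iteration on the same illustrative test function as above; the factor tends to 1 as the iterates approach the simple root.

    # Newton step and Halley's convergence factor along one run, on the same
    # illustrative test function x**3 - 2x - 5 used earlier.
    def f(x):   return x**3 - 2.0*x - 5.0
    def df(x):  return 3.0*x**2 - 2.0
    def ddf(x): return 6.0*x

    x = 2.0
    for _ in range(4):
        newton_step = f(x) / df(x)
        factor = 1.0 / (1.0 - f(x) * ddf(x) / (2.0 * df(x)**2))
        x = x - newton_step * factor      # Halley step = Newton step * factor
        print(f"x = {x:.12f}   factor = {factor:.6f}")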

References

  • Demmel, James W. (1997), Applied Numerical Linear Algebra, Philadelphia, PA: Society for Industrial and Applied Mathematics, ISBN 0-89871-389-7, MR 1463942.
  • Elhay, S.; Golub, G. H.; Ram, Y. M. (2003), "The spectrum of a modified linear pencil", Computers & Mathematics with Applications, 46 (8–9): 1413–1426, doi:10.1016/S0898-1221(03)90229-X, MR 2020255.
  • Gu, Ming; Eisenstat, Stanley C. (1995), "A divide-and-conquer algorithm for the symmetric tridiagonal eigenproblem" (PDF), SIAM Journal on Matrix Analysis and Applications, 16 (1): 172–191, doi:10.1137/S0895479892241287, MR 1311425.
  • Gander, Walter (1978), On the linear least squares problem with a quadratic constraint, Stanford University, School of Humanities and Sciences, Computer Science Dept.