Integration by substitution

In calculus, integration by substitution, also known as u-substitution or change of variables,[1] is a method for evaluating integrals and antiderivatives. It is the counterpart to the chain rule for differentiation; in fact, it can loosely be thought of as using the chain rule "backwards".

Substitution for a single variable

Introduction

Before stating the result rigorously, consider a simple case using indefinite integrals.

Compute $\int (2x^{3}+1)^{7}\,x^{2}\,dx.$[2]

Set u = 2x³ + 1. This means du/dx = 6x², or, in differential form, du = 6x² dx. Now

$$\int (2x^{3}+1)^{7}\,x^{2}\,dx = \frac{1}{6}\int (2x^{3}+1)^{7}\,(6x^{2})\,dx = \frac{1}{6}\int u^{7}\,du = \frac{1}{48}u^{8} + C = \frac{1}{48}(2x^{3}+1)^{8} + C,$$

where C is an arbitrary constant of integration.

This procedure is frequently used, but not all integrals are of a form that permits its use. In any event, the result should be verified by differentiating and comparing to the original integrand.
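That verification can also be carried out with a computer algebra system. The following minimal sketch, assuming the sympy library is available, differentiates the candidate antiderivative from the example above and checks that the original integrand is recovered:

```python
import sympy as sp

x = sp.symbols('x')

# Integrand and the antiderivative obtained via the substitution u = 2x**3 + 1.
integrand = (2*x**3 + 1)**7 * x**2
antiderivative = (2*x**3 + 1)**8 / 48

# The difference between d/dx of the antiderivative and the integrand
# should simplify to zero.
print(sp.simplify(sp.diff(antiderivative, x) - integrand))  # 0
```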

For definite integrals, the limits of integration must also be adjusted, but the procedure is mostly the same.

Definite integrals

Let φ : [a,b] → I be a differentiable function with a continuous derivative, where I ⊆ R is an interval. Suppose that f : I → R is a continuous function. Then[3]

$$\int_{a}^{b} f(\varphi(x))\,\varphi'(x)\,dx = \int_{\varphi(a)}^{\varphi(b)} f(u)\,du.$$

In Leibniz notation, the substitution u = φ(x) yields

$$\frac{du}{dx} = \varphi'(x).$$

Working heuristically with infinitesimals yields the equation

$$du = \varphi'(x)\,dx,$$

which suggests the substitution formula above. (This equation may be put on a rigorous foundation by interpreting it as a statement about differential forms.) One may view the method of integration by substitution as a partial justification of Leibniz's notation for integrals and derivatives.

The formula is used to transform one integral into another integral that is easier to compute. Thus, the formula can be read from left to right or from right to left in order to simplify a given integral. When used in the former manner, it is sometimes known as u-substitution or w-substitution, in which a new variable is defined to be the inner function of a composite function whose derivative also appears as a factor of the integrand. The latter manner is commonly used in trigonometric substitution, replacing the original variable with a trigonometric function of a new variable and the original differential with the differential of the trigonometric function.
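As an illustration of the formula read from left to right, the following sketch (assuming scipy is available; the choice f(u) = exp(u) and φ(x) = sin x is purely illustrative) evaluates both sides numerically and compares them with the exact value e − 1:

```python
import numpy as np
from scipy.integrate import quad

# Left-hand side: integral of f(phi(x)) * phi'(x) over [a, b],
# with f(u) = exp(u), phi(x) = sin(x), a = 0, b = pi/2.
lhs, _ = quad(lambda x: np.exp(np.sin(x)) * np.cos(x), 0, np.pi / 2)

# Right-hand side: integral of f(u) over [phi(a), phi(b)] = [0, 1].
rhs, _ = quad(lambda u: np.exp(u), 0, 1)

print(lhs, rhs, np.e - 1)  # all approximately 1.7182818
```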

Proof

Integration by substitution can be derived from the fundamental theorem of calculus as follows. Let f and φ be two functions satisfying the above hypothesis that f is continuous on I and φ′ is integrable on the closed interval [a,b]. Then the function f(φ(x))φ′(x) is also integrable on [a,b]. Hence the integrals

$$\int_{a}^{b} f(\varphi(x))\,\varphi'(x)\,dx$$

and

$$\int_{\varphi(a)}^{\varphi(b)} f(u)\,du$$

in fact exist, and it remains to show that they are equal.

Since f is continuous, it has an antiderivative F. The composite function F ∘ φ is then defined. Since φ is differentiable, combining the chain rule and the definition of an antiderivative gives

$$(F \circ \varphi)'(x) = F'(\varphi(x))\,\varphi'(x) = f(\varphi(x))\,\varphi'(x).$$

Applying the fundamental theorem of calculus twice gives

$$\int_{a}^{b} f(\varphi(x))\,\varphi'(x)\,dx = \int_{a}^{b} (F \circ \varphi)'(x)\,dx = (F \circ \varphi)(b) - (F \circ \varphi)(a) = F(\varphi(b)) - F(\varphi(a)) = \int_{\varphi(a)}^{\varphi(b)} f(u)\,du,$$

which is the substitution rule.
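The two steps of this argument can be spot-checked symbolically for a concrete choice of functions; in the sketch below (assuming sympy, with f = cos and φ(x) = x³ chosen only for illustration), the chain-rule identity and the resulting equality of definite integrals on [0, 1] are both verified:

```python
import sympy as sp

x, u = sp.symbols('x u')

# Concrete choices, for illustration only: f = cos, phi(x) = x**3,
# and F = sin, an antiderivative of f.
phi = x**3
F_of_phi = sp.sin(phi)

# Chain-rule step: (F o phi)'(x) = f(phi(x)) * phi'(x).
print(sp.simplify(sp.diff(F_of_phi, x) - sp.cos(phi) * sp.diff(phi, x)))  # 0

# Fundamental theorem of calculus on [a, b] = [0, 1]:
# the integral of (F o phi)' over [0, 1] equals F(phi(1)) - F(phi(0)),
# which equals the integral of f(u) over [phi(0), phi(1)] = [0, 1].
a, b = 0, 1
left = F_of_phi.subs(x, b) - F_of_phi.subs(x, a)
right = sp.integrate(sp.cos(u), (u, phi.subs(x, a), phi.subs(x, b)))
print(sp.simplify(left - right))  # 0
```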

Example 1:

Consider the integral

$$\int_{0}^{2} x \cos(x^{2}+1)\,dx.$$

Make the substitution u = x² + 1 to obtain du = 2x dx, meaning x dx = ½ du. Therefore,

$$\int_{x=0}^{x=2} x \cos(x^{2}+1)\,dx = \frac{1}{2}\int_{u=1}^{u=5} \cos(u)\,du = \frac{1}{2}\big(\sin 5 - \sin 1\big).$$

Since the lower limit x = 0 was replaced with u = 1, and the upper limit x = 2 with u = 2² + 1 = 5, a transformation back into terms of x was unnecessary.

Alternatively, one may fully evaluate the indefinite integral (see below) first then apply the boundary conditions. This becomes especially handy when multiple substitutions are used.
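A quick numerical cross-check of this example, assuming scipy is available:

```python
import numpy as np
from scipy.integrate import quad

# Left: the original integral; right: the closed form from the substitution u = x**2 + 1.
numeric, _ = quad(lambda x: x * np.cos(x**2 + 1), 0, 2)
closed_form = 0.5 * (np.sin(5) - np.sin(1))

print(numeric, closed_form)  # both approximately -0.9002
```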

Example 2:

For the integral

$$\int_{0}^{1} \sqrt{1 - x^{2}}\,dx,$$

a variation of the above procedure is needed. The substitution x = sin u, implying dx = cos u du, is useful because √(1 − sin² u) = cos u. We thus have

$$\int_{0}^{1} \sqrt{1 - x^{2}}\,dx = \int_{0}^{\pi/2} \sqrt{1 - \sin^{2} u}\,\cos u\,du = \int_{0}^{\pi/2} \cos^{2} u\,du.$$

The resulting integral can be computed using integration by parts or a double angle formula, 2cos² u = 1 + cos(2u), followed by one more substitution. One can also note that the function being integrated is the upper right quarter of a circle with a radius of one, and hence integrating the upper right quarter from zero to one is the geometric equivalent to the area of one quarter of the unit circle, or π/4.
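Both forms of the integral can likewise be checked numerically (assuming scipy); each value should agree with π/4:

```python
import numpy as np
from scipy.integrate import quad

# Original integral: a quarter of the unit disk.
original, _ = quad(lambda x: np.sqrt(1 - x**2), 0, 1)

# After the substitution x = sin(u).
substituted, _ = quad(lambda u: np.cos(u)**2, 0, np.pi / 2)

print(original, substituted, np.pi / 4)  # all approximately 0.7853981634
```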

Antiderivatives

Substitution can be used to determine antiderivatives. One chooses a relation between x and u, determines the corresponding relation between dx and du by differentiating, and performs the substitutions. An antiderivative for the substituted function can hopefully be determined; the original substitution between x and u is then undone.

Similar to example 1 above, the following antiderivative can be obtained with this method:

$$\int x \cos(x^{2}+1)\,dx = \frac{1}{2}\int 2x \cos(x^{2}+1)\,dx = \frac{1}{2}\int \cos u\,du = \frac{1}{2}\sin u + C = \frac{1}{2}\sin(x^{2}+1) + C,$$

where C is an arbitrary constant of integration.

There were no integral boundaries to transform, but in the last step reverting the original substitution was necessary. When evaluating definite integrals by substitution, one may calculate the antiderivative fully first, then apply the boundary conditions. In that case, there is no need to transform the boundary terms.
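As with the introductory example, the antiderivative can be confirmed by differentiation; a minimal sketch assuming sympy:

```python
import sympy as sp

x = sp.symbols('x')

# Antiderivative obtained via the substitution u = x**2 + 1.
antiderivative = sp.sin(x**2 + 1) / 2

# Differentiating should give back the integrand x*cos(x**2 + 1).
print(sp.simplify(sp.diff(antiderivative, x) - x * sp.cos(x**2 + 1)))  # 0
```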

The tangent function can be integrated using substitution by expressing it in terms of the sine and cosine:

$$\tan x = \frac{\sin x}{\cos x}.$$

Using the substitution u = cos x gives du = −sin x dx and

$$\int \tan x\,dx = \int \frac{\sin x}{\cos x}\,dx = -\int \frac{du}{u} = -\ln|u| + C = -\ln|\cos x| + C = \ln|\sec x| + C.$$
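On an interval where cos x > 0 the absolute value can be dropped, and the result is again easy to confirm by differentiation (a sketch assuming sympy):

```python
import sympy as sp

x = sp.symbols('x')

# On an interval where cos(x) > 0, an antiderivative of tan(x) is -log(cos(x)).
candidate = -sp.log(sp.cos(x))

print(sp.simplify(sp.diff(candidate, x) - sp.tan(x)))  # 0
```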

Substitution for multiple variables

One may also use substitution when integrating functions of several variables. Here the substitution function (v1, …, vn) = φ(u1, …, un) needs to be injective and continuously differentiable, and the differentials transform as

$$dv_{1} \cdots dv_{n} = \left|\det(D\varphi)(u_{1}, \ldots, u_{n})\right|\,du_{1} \cdots du_{n},$$

where det(Dφ)(u1, ..., un) denotes the determinant of the Jacobian matrix of partial derivatives of φ at the point (u1, ..., un). This formula expresses the fact that the absolute value of the determinant of a matrix equals the volume of the parallelotope spanned by its columns or rows.

More precisely, the change of variables formula is stated in the next theorem:

Theorem. Let U be an open set in Rⁿ and φ : U → Rⁿ an injective differentiable function with continuous partial derivatives, the Jacobian of which is nonzero for every x in U. Then for any real-valued, compactly supported, continuous function f, with support contained in φ(U),

$$\int_{\varphi(U)} f(\mathbf{v})\,d\mathbf{v} = \int_{U} f(\varphi(\mathbf{u}))\,\left|\det(D\varphi)(\mathbf{u})\right|\,d\mathbf{u}.$$

The conditions on the theorem can be weakened in various ways. First, the requirement that φ be continuously differentiable can be replaced by the weaker assumption that φ be merely differentiable and have a continuous inverse.[4] This is guaranteed to hold if φ is continuously differentiable by the inverse function theorem. Alternatively, the requirement that det(Dφ) ≠ 0 can be eliminated by applying Sard's theorem.[5]
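A standard concrete instance is the change to polar coordinates (x, y) = φ(r, θ) = (r cos θ, r sin θ), for which |det(Dφ)(r, θ)| = r. The sketch below (assuming sympy and scipy; the Gaussian-type integrand is chosen only for illustration) evaluates the same integral over the unit disk in both coordinate systems:

```python
import numpy as np
import sympy as sp
from scipy.integrate import dblquad

# Polar coordinates: |det(D(phi))| = r, so the area element dx dy becomes r dr dtheta.
r, theta = sp.symbols('r theta', positive=True)
polar_value = sp.integrate(sp.exp(-r**2) * r, (r, 0, 1), (theta, 0, 2 * sp.pi))
print(polar_value)  # equals pi*(1 - exp(-1))

# Direct Cartesian integration over the unit disk, for comparison.
cartesian_value, _ = dblquad(
    lambda y, x: np.exp(-(x**2 + y**2)),
    -1, 1,
    lambda x: -np.sqrt(1 - x**2),
    lambda x: np.sqrt(1 - x**2),
)
print(cartesian_value, float(polar_value))  # agree to numerical precision
```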

For Lebesgue measurable functions, the theorem can be stated in the following form:[6]

Theorem. Let U be a measurable subset of Rⁿ and φ : U → Rⁿ an injective function, and suppose that for every x in U there exists φ′(x) in Rⁿ×ⁿ such that φ(y) = φ(x) + φ′(x)(y − x) + o(‖y − x‖) as y → x (here o is little-o notation). Then φ(U) is measurable, and for any real-valued function f defined on φ(U),

$$\int_{\varphi(U)} f(v)\,dv = \int_{U} f(\varphi(u))\,\left|\det \varphi'(u)\right|\,du,$$

in the sense that if either integral exists (including the possibility of being properly infinite), then so does the other one, and they have the same value.

Another very general version in measure theory is the following:[7]

Theorem. Let X be a locally compact Hausdorff space equipped with a finite Radon measure μ, and let Y be a σ-compact Hausdorff space with a σ-finite Radon measure ρ. Let φ : X → Y be a continuous and absolutely continuous function (where the latter means that ρ(φ(E)) = 0 whenever μ(E) = 0). Then there exists a real-valued Borel measurable function w on X such that for every Lebesgue integrable function f : Y → R, the function (f ∘ φ) ⋅ w is Lebesgue integrable on X, and

$$\int_{Y} f(y)\,d\rho(y) = \int_{X} (f \circ \varphi)(x)\,w(x)\,d\mu(x).$$

Furthermore, it is possible to write

$$w(x) = (g \circ \varphi)(x)$$

for some Borel measurable function g on Y.

In geometric measure theory, integration by substitution is used with Lipschitz functions. A bi-Lipschitz function is a Lipschitz function φ : U → Rⁿ which is injective and whose inverse function φ⁻¹ : φ(U) → U is also Lipschitz. By Rademacher's theorem, a bi-Lipschitz mapping is differentiable almost everywhere. In particular, the Jacobian determinant det Dφ of a bi-Lipschitz mapping is well-defined almost everywhere. The following result then holds:

Theorem. Let U be an open subset of Rⁿ and φ : U → Rⁿ be a bi-Lipschitz mapping. Let f : φ(U) → R be measurable. Then

$$\int_{\varphi(U)} f(x)\,dx = \int_{U} (f \circ \varphi)(x)\,\left|\det D\varphi(x)\right|\,dx,$$

in the sense that if either integral exists (or is properly infinite), then so does the other one, and they have the same value.

The above theorem was first proposed by Euler when he developed the notion of double integrals in 1769. Although it was generalized to triple integrals by Lagrange in 1773, used by Legendre, Laplace, and Gauss, and first generalized to n variables by Mikhail Ostrogradsky in 1836, it resisted a fully rigorous formal proof for a surprisingly long time and was first satisfactorily resolved 125 years later, by Élie Cartan, in a series of papers beginning in the mid-1890s.[8][9]

Application in probability

Substitution can be used to answer the following important question in probability: given a random variable X with probability density p_X and another random variable Y such that Y = φ(X) for an injective (one-to-one) φ, what is the probability density for Y?

It is easiest to answer this question by first answering a slightly different question: what is the probability that Y takes a value in some particular subset S? Denote this probability P(Y ∈ S). Of course, if Y has probability density p_Y, then the answer is

$$P(Y \in S) = \int_{S} p_Y(y)\,dy,$$

but this isn't really useful because we don't know p_Y; it's what we're trying to find. We can make progress by considering the problem in the variable X. Y takes a value in S whenever X takes a value in φ⁻¹(S), so

$$P(Y \in S) = \int_{\varphi^{-1}(S)} p_X(x)\,dx.$$

Changing from variable x to y = φ(x) gives

$$\int_{\varphi^{-1}(S)} p_X(x)\,dx = \int_{S} p_X\!\left(\varphi^{-1}(y)\right)\left|\frac{d\varphi^{-1}}{dy}\right| dy.$$

Combining this with our first equation gives

$$\int_{S} p_Y(y)\,dy = \int_{S} p_X\!\left(\varphi^{-1}(y)\right)\left|\frac{d\varphi^{-1}}{dy}\right| dy,$$

so

$$p_Y(y) = p_X\!\left(\varphi^{-1}(y)\right)\left|\frac{d\varphi^{-1}}{dy}\right|.$$

In the case where X and Y depend on several uncorrelated variables, i.e. p_X = p_X(x_1, …, x_n) and y = φ(x), p_Y can be found by substitution in several variables as discussed above. The result is

$$p_Y(y) = p_X\!\left(\varphi^{-1}(y)\right)\left|\det D\varphi^{-1}(y)\right|.$$
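As a concrete sketch of the one-dimensional formula (assuming scipy; the choice of a standard normal X and φ(x) = exp(x) is purely illustrative), the density obtained this way matches the standard log-normal density that scipy provides:

```python
import numpy as np
from scipy.stats import norm, lognorm

# X is standard normal and Y = phi(X) = exp(X), so phi^(-1)(y) = log(y)
# and |d(phi^(-1))/dy| = 1/y.
y = np.linspace(0.1, 5.0, 50)
p_Y = norm.pdf(np.log(y)) / y

# This is exactly the standard log-normal density (shape parameter s = 1).
print(np.allclose(p_Y, lognorm.pdf(y, s=1)))  # True
```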

Notes

  1. Swokowski 1983, p. 257
  2. Swokowski 1983, p. 258
  3. Briggs & Cochran 2011, p. 361
  4. Rudin 1987, Theorem 7.26
  5. Spivak 1965, p. 72
  6. Fremlin 2010, Theorem 263D
  7. Hewitt & Stromberg 1965, Theorem 20.3
  8. Katz 1982
  9. Ferzola 1994

References

  • Briggs, William; Cochran, Lyle (2011), Calculus: Early Transcendentals (Single Variable ed.), Addison-Wesley, ISBN 978-0-321-66414-3
  • Ferzola, Anthony P. (1994), "Euler and differentials", The College Mathematics Journal, 25 (2): 102–111, doi:10.2307/2687130
  • Fremlin, D.H. (2010), Measure Theory, Volume 2, Torres Fremlin, ISBN 978-0-9538129-7-4.
  • Hewitt, Edwin; Stromberg, Karl (1965), Real and Abstract Analysis, Springer-Verlag, ISBN 978-0-387-04559-7.
  • Katz, V. (1982), "Change of variables in multiple integrals: Euler to Cartan", Mathematics Magazine, 55 (1): 3–11, doi:10.2307/2689856
  • Rudin, Walter (1987), Real and Complex Analysis, McGraw-Hill, ISBN 978-0-07-054234-1.
  • Swokowski, Earl W. (1983), Calculus with analytic geometry (alternate ed.), Prindle, Weber & Schmidt, ISBN 0-87150-341-7
  • Spivak, Michael (1965), Calculus on Manifolds, Westview Press, ISBN 978-0-8053-9021-6.