We have already found in (1.3.1.4) that the tangent line of a function f(x) at a point x = x0 is
(1.3.5.1)    y_1 = f_0 + f_0'\,(x - x_0)
where the index zero means evaluation at the point x = x0 . The value f0 is called the zero approximation, and the expression (1.3.5.1) is called the first approximation of the function in the vicinity of x = x0 . The first approximation means that y_1 and its derivative share the same values as the function at the point x = x0 :
(1.3.5.2)    y_1(x_0) = f_0 , \qquad y_1'(x_0) = f_0'
It would be nice to find an expression for which, in addition to these values, the second derivative is also the same. If we add a term of the form k f_0''\,(x - x_0)^2 , where k is a constant, it won't spoil the first approximation, since the additional term and its first derivative vanish at x = x0 . In order to obtain the correct value of the second derivative we have to fix k = ½. Try it and you'll find out why!
To summarize, the second approximation around x = x0 is
(1.3.5.3)    y_2 = f_0 + f_0'\,(x - x_0) + \tfrac{1}{2} f_0''\,(x - x_0)^2
The second approximation contains more information than the first: not only the slope, but also the change of the slope, i.e. how the function curves in the vicinity of the point x = x0 . The approximate expression is no longer a straight line but a parabola, and hopefully the approximation will now be more accurate at larger distances from x = x0 . If f(x) is a second order polynomial, then the second approximation reproduces f(x) exactly.
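As a numerical illustration, here is a minimal Python sketch comparing the first approximation (1.3.5.1) with the second approximation (1.3.5.3). The choice of f(x) = exp(x) and x0 = 0 is an assumption made only for this demonstration; it is not part of the text above.

```python
import math

# A minimal sketch: compare the first and second approximations of
# f(x) = exp(x) around x0 = 0, where f0 = f0' = f0'' = 1.
def f(x):
    return math.exp(x)

x0 = 0.0
f0 = f(x0)         # zero approximation
f0_prime = f(x0)   # (exp x)' = exp x
f0_second = f(x0)  # (exp x)'' = exp x

def y1(x):         # first approximation (1.3.5.1): the tangent line
    return f0 + f0_prime * (x - x0)

def y2(x):         # second approximation (1.3.5.3): the parabola
    return y1(x) + 0.5 * f0_second * (x - x0) ** 2

for x in (0.1, 0.5, 1.0):
    print(f"x={x}:  f={f(x):.6f}  y1={y1(x):.6f}  y2={y2(x):.6f}")
```

The farther x is from x0, the more clearly the parabola outperforms the tangent line, in agreement with the discussion above.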
One can continue and find more and more terms, and obtain the N-th approximation, which can be written in a general form as
(1.3.5.4)    y_N = \sum_{n=0}^{N} \frac{f_0^{(n)}}{n!}\,(x - x_0)^n
where f_0^{(n)} denotes the n-th derivative of f at x = x0 (with f_0^{(0)} = f_0 and 0! = 1).
The expression (1.3.5.4) is also called the Taylor polynomial. The question now is how accurate the N-th approximation is in representing the function f(x). The answer is given by Taylor's theorem, which we are not going to prove, but only formulate:
If the function f(x) is differentiable up to order N (inclusive) on the interval of x [a,b] , and differentiable to order N+1 on the interval (a,b) , then for x and x0 in [a,b]
(1.3.5.5)    f(x) = y_N + R_N
where yN is defined in (1.3.5.4) and RN , called the remainder, is expressed in the form:
(1.3.5.6)    R_N = \frac{f^{(N+1)}(\psi)}{(N+1)!}\,(x - x_0)^{N+1} , \qquad \psi \text{ between } x_0 \text{ and } x
This form of the remainder (1.3.5.6) is commonly named after Lagrange. Some other forms also exist, but we are not going to consider them here.
If we denote by R1N and R2N the smallest and largest possible values of RN , then in order to evaluate f(x) one can use:
(1.3.5.7)    y_N + R1N \le f(x) \le y_N + R2N
Let's consider a simple example:
(1.3.5.8)    f(x) = \exp x , \qquad f^{(n)}(x) = \exp x , \qquad x_0 = 0 , \quad f^{(n)}(0) = 1
By the choice of x0 = 0 the powers of (x−x0) from the Taylor polynomial (1.3.5.4) simply become the powers of x , and such a polynomial is commonly named after Maclaurin.
From (1.3.5.4) and (1.3.5.8) one obtains:
(1.3.5.9)    \exp x \approx y_N = \sum_{n=0}^{N} \frac{x^n}{n!}
The case of x = 1 corresponds to f = e and from the remainder (1.3.5.6) one obtains
(1.3.5.10)    R_N = \frac{\exp\psi}{(N+1)!} , \quad 0 < \psi < 1 , \qquad \text{so that}\quad \frac{1}{(N+1)!} < R_N < \frac{e}{(N+1)!}
Since the remainder converges to zero for N → ∞ , one can obtain the constant e to any required precision. For evaluating e one cannot use the value of e itself in (1.3.5.10), but only 3, which is an upper bound of e, as shown by (A01.9). The formula for evaluating e that we are going to use is, according to (1.3.5.7),
(1.3.5.11)    y_N + \frac{1}{(N+1)!} < e < y_N + \frac{3}{(N+1)!} , \qquad y_N = \sum_{n=0}^{N} \frac{1}{n!}
The graphical presentation of this example is seen in Fig. Approximations of exponent.
In this example the display of e is limited to 11 significant figures, but the accuracy of its evaluation is limited only by the round-off error of the computer. This particular accuracy was obtained at the 13th approximation. This evaluation of e is consistent with the limit of the sequence (1.2.4.2), as shown in Appendix 01.
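The bracketing (1.3.5.11) is easy to reproduce numerically. The following is a minimal sketch (the range of N is chosen only for illustration); it shows the bounds closing in on e and agreeing to about 11 figures near N = 13, consistent with the observation above.

```python
import math

# A minimal sketch of (1.3.5.11): bracket e between
# y_N + 1/(N+1)! and y_N + 3/(N+1)!, with y_N = sum_{n=0}^{N} 1/n!.
y = 0.0
for N in range(14):
    y += 1.0 / math.factorial(N)
    lower = y + 1.0 / math.factorial(N + 1)
    upper = y + 3.0 / math.factorial(N + 1)
    print(f"N={N:2d}   {lower:.12f} < e < {upper:.12f}")

print("math.e =", math.e)
```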
We are now going to consider another example that behaves quite differently - the ln function.
We immediately see that lnx cannot be approximated around x0 = 0 (as a Maclaurin polynomial), since the function diverges there to −∞ . We could, for instance, use x0 = 1 . But instead of doing that, we'll use the Maclaurin polynomial of the ln function translated to the left by unity, namely ln(1+x) around x0 = 0 , instead of lnx around x0 = 1 .
It is simple to prove (by induction) that
(1.3.5.12)    \frac{d^n}{dx^n}\ln(1+x) = (-1)^{n-1}\,\frac{(n-1)!}{(1+x)^n} , \qquad n = 1, 2, 3, \dots
and the Maclaurin polynomial for ln(1+x) is
(1.3.5.13)    \ln(1+x) \approx y_N = \sum_{n=1}^{N} \frac{(-1)^{n-1}}{n}\,x^n = x - \frac{x^2}{2} + \frac{x^3}{3} - \dots
where the zero approximation is y0 = 0 . The remainder for x > 0 is
(1.3.5.14)    R_N = \frac{(-1)^N}{N+1}\left(\frac{x}{1+\psi}\right)^{N+1} , \qquad 0 < \psi < x
and its maximal absolute value, obtained for ψ = 0 , is:
(1.3.5.15)    |R_N| \le \frac{x^{N+1}}{N+1}
For N → ∞ the expression (1.3.5.15) converges to zero for 0 < x ≤ 1 , and therefore the Maclaurin polynomial (1.3.5.13) is suitable for the approximation of the function there. It is left to the user to show that (1.3.5.15) diverges for x > 1 (exercise 1, see below); on the other hand, the substitution of ψ = x in (1.3.5.14) makes the remainder converge to zero there. In other words, the criterion of the remainder (1.3.5.14) is ambiguous for x > 1 . The remainder also gives ambiguous results for −1 < x < −½ .
For x = 1 the function is ln(1+x) = ln2 . The remainder's value lies between (−1)^N/[(N+1)·2^{N+1}] (for ψ = x) and (−1)^N/(N+1) (for ψ = 0), and therefore from (1.3.5.7) one obtains:
(1.3.5.16)    y_N + \frac{(-1)^N}{(N+1)\,2^{N+1}} \;\le\; \ln 2 \;\le\; y_N + \frac{(-1)^N}{N+1} \qquad (\text{for even } N\text{; the inequalities are reversed for odd } N)
for evaluating ln2 .
The behaviour of the Maclaurin polynomial (1.3.5.13) will be studied, and ln2 will be evaluated by using (1.3.5.16), in Fig. Approximations of ln.
From this graphical study one concludes that there is no convergence for x > 1 (lnx > ln2), and that the convergence for ln2 is very slow. For a correct display of four significant figures, it takes more than 70 approximation steps, and the error defined by the remainder (1.3.5.16) is orders of magnitude larger than the deviation from the correct value. The reason is that x = 1 is on the edge of the convergence interval. We'll see later a better way to approximate lnx .
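The slow convergence at x = 1 can be seen directly from the partial sums of (1.3.5.13). Here is a minimal sketch (the sampled values of N are chosen only for illustration); the error decreases only like the bound 1/(N+1).

```python
import math

# A minimal sketch of the slow convergence of (1.3.5.13) at x = 1:
# partial sums of 1 - 1/2 + 1/3 - ... approach ln 2 only like 1/(N+1).
target = math.log(2.0)
y = 0.0
for n in range(1, 101):
    y += (-1) ** (n - 1) / n
    if n in (10, 30, 70, 100):
        print(f"N={n:3d}  y_N={y:.6f}  |error|={abs(y - target):.1e}  bound 1/(N+1)={1 / (n + 1):.1e}")
```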
We saw that in the case of the exponent we can obtain the values of the function as accurately as we want by summing up the Taylor (or Maclaurin) polynomial up to the necessary order. An infinite order, representing a summation of an infinite number of terms, would give us the exact value of the function.
On the other hand, in the case of ln(1+x) the summation gives an accurate result only for an interval of the variable; outside of it, any increase in the order of approximation worsens the results.
A sum of an infinite (countable) number of terms is called a series. The partial sum SN of a series is defined by
(1.3.5.17)    S_N = \sum_{n=1}^{N} a_n
where the an are the terms of the series. The partial sums (1.3.5.17) form a sequence. The series is said to converge to a value S if the partial sums converge to that value:
(1.3.5.18)    S = \lim_{N\to\infty} S_N
The Maclaurin series of expx converges for any x . The Maclaurin series of ln(1+x) converges on the interval of x (−1,1] and diverges outside it.
The following rule applies to a series: multiplying all the terms by a constant (non-zero) number does not affect whether the series converges.
An example of a series, known from school, is the geometric series:
(1.3.5.19)    \sum_{n=0}^{\infty} q^n = 1 + q + q^2 + \dots = \frac{1}{1-q} , \qquad |q| < 1
Another example is the harmonic series, which diverges although the terms converge to zero:
(1.3.5.20)    \sum_{n=1}^{\infty} \frac{1}{n} = 1 + \frac{1}{2} + \frac{1}{3} + \dots = \infty
An example of the harmonic series (multiplied by −1) is the Maclaurin series of ln(1+x) (1.3.5.13) for x = −1 . On the other hand, a modification of the harmonic series with alternating signs of the terms converges. For example, we saw that the Maclaurin series of ln(1+x) for x = 1 converges to ln2 .
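A minimal numerical sketch of this contrast follows (the sampled values of N are chosen only for illustration): the harmonic partial sums keep growing, roughly like ln N, while the alternating version settles down near ln 2.

```python
# A minimal sketch: the partial sums of the harmonic series (1.3.5.20)
# grow without bound, while the alternating version converges to ln 2.
harmonic = 0.0
alternating = 0.0
for n in range(1, 10**6 + 1):
    harmonic += 1.0 / n
    alternating += (-1) ** (n - 1) / n
    if n in (10, 10**3, 10**6):
        print(f"N={n:>7d}   harmonic={harmonic:9.4f}   alternating={alternating:.6f}")
```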
We saw that the remainder of a Taylor polynomial could be ambiguous as a criterion for the representation of a function as a Taylor series. The representation is correct if the series converges. There are different criteria for convergence of a series, but we are going to use only one of them, named after d'Alembert, without proving it.
D'Alembert's test concerns the absolute convergence of a series, meaning the convergence of the series formed by the absolute values of the terms, whose partial sum is:
(1.3.5.21)    \bar{S}_N = \sum_{n=1}^{N} |a_n|
If a series converges absolutely, it also converges with the original signs of the terms; the requirement of absolute convergence is therefore stricter. As an example, we saw that the harmonic series with alternating signs of the terms converges, but it does not converge absolutely.
D'Alembert's test for a series with terms an states:
(1.3.5.22)    \lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| = L : \qquad L < 1 : \text{absolute convergence}, \quad L > 1 : \text{divergence}, \quad L = 1 : \text{the test is indecisive}
In order to test if a series converges, it is better first to check whether the terms converge to zero (the necessary condition) and then use (1.3.5.22).
A Maclaurin or Taylor presentation of a function as a series is also called the power expansion of the function.
As an example of d'Alembert's test, the Maclaurin series of expx (1.3.5.9) gives
(1.3.5.23)    \left|\frac{a_{n+1}}{a_n}\right| = \frac{|x|}{n+1} \;\xrightarrow[n\to\infty]{}\; 0
and therefore the series converges.
The other example is the Maclaurin series of ln(1+x) (1.3.5.13), which gives
(1.3.5.24)    \left|\frac{a_{n+1}}{a_n}\right| = \frac{n}{n+1}\,|x| \;\xrightarrow[n\to\infty]{}\; |x|
meaning that the series converges for |x| < 1 , diverges for |x| > 1 , and the test is indecisive for |x| = 1 . We know that indeed for x = 1 there is convergence, and for x = −1 the series diverges to −∞ , exactly as the function does.
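The two term ratios are easy to evaluate numerically. Here is a minimal sketch; the value x = 0.5 is an assumption made only for illustration.

```python
# A minimal numerical look at d'Alembert's test (1.3.5.22): the term
# ratios (1.3.5.23) of the exp-x series tend to 0, while the ratios
# (1.3.5.24) of the ln(1+x) series tend to |x|.
x = 0.5
for n in (1, 10, 100, 1000):
    ratio_exp = abs(x) / (n + 1)       # (1.3.5.23)
    ratio_ln = n / (n + 1) * abs(x)    # (1.3.5.24)
    print(f"n={n:4d}   exp series: {ratio_exp:.5f}   ln(1+x) series: {ratio_ln:.5f}")
```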
As already stated, the multiplication of all the terms by a constant number does not affect the convergence of the series. In the case of a power series of a function, the multiplication can be done by any expression of x . In such a case there are occasions where special care should be taken.
Let's take the following example: the multiplication of the Maclaurin series of ln(1+x) by x⁻¹ . Since x = 0 is included in the convergence interval of the series of ln(1+x) , one has to make sure that the limit of x⁻¹ ln(1+x) is finite for x → 0 . Indeed, it is trivial to find that the required limit is 1. By using (1.3.5.13) we thus obtain the power (Maclaurin) series, with the interval of convergence (−1,1] :
(1.3.5.25)    \frac{\ln(1+x)}{x} = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{n}\,x^{n-1} = 1 - \frac{x}{2} + \frac{x^2}{3} - \dots
For another example one can use the multiplication of the power series of ln(1+x) by the expression (1+x) . For x = −1 , ln(1+x) diverges and the expression (1+x) vanishes. By using the rule of L'Hôpital, one obtains that their product has the limit
\lim_{x\to -1^+} (1+x)\ln(1+x) = 0
which makes the new series convergent also for x = −1 .
It is obvious that if a power series of the form \sum_n a_n x^n converges for an interval of x , then the substitution of x by any expression whose values are covered by that interval won't affect the convergence. This property is often used for obtaining the power expansion of some function directly from known series of other functions.
As an example we'll use the series of expx to obtain the expansion of exp(−x²) . From (1.3.5.9) and (1.3.5.23) we know that
(1.3.5.26)    \exp x = \sum_{n=0}^{\infty} \frac{x^n}{n!} , \qquad -\infty < x < \infty
Since −x² is included in the convergence interval of (1.3.5.26), the substitution of x by −x² yields
(1.3.5.27)    \exp(-x^2) = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!}\,x^{2n} = 1 - x^2 + \frac{x^4}{2!} - \dots
for any x .
Notice that exp(−x²) is an even function, and its Maclaurin expansion (1.3.5.27) includes only even powers of x (which are even functions of x). For the same reason the expansion of an odd function should contain only odd powers of x.
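A minimal check of (1.3.5.27) follows; the truncation order N = 30 and the sample points are assumptions made only for illustration.

```python
import math

# A minimal check of (1.3.5.27): a truncated sum of (-1)^n x^(2n) / n!
# (only even powers of x) compared with exp(-x^2).
def exp_minus_x2(x, N=30):
    return sum((-1) ** n * x ** (2 * n) / math.factorial(n) for n in range(N + 1))

for x in (0.5, 1.0, 2.0):
    print(f"x={x}:  series={exp_minus_x2(x):.10f}   exact={math.exp(-x * x):.10f}")
```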
We are now going to study a couple of the most important rules for operations with power series, which lead to useful applications.
Rule 1. Addition
Addition (Subtraction) of two converging series can be done by adding (subtracting) the corresponding terms:
(1.3.5.28)    \sum_n a_n \pm \sum_n b_n = \sum_n (a_n \pm b_n)
This rule can be obtained directly by treating the partial sums as a sequence and using the rule of adding and subtracting sequences (1.1.3.11). Of course, in the case of power series this rule holds only on the overlapping part of the domains of convergence.
Example. The Maclaurin expansion of coshx can be obtained by using the expansion of expx (1.3.5.26) :
(1.3.5.29)    \cosh x = \frac{\exp x + \exp(-x)}{2} = \sum_{n=0}^{\infty} \frac{x^{2n}}{(2n)!} = 1 + \frac{x^2}{2!} + \frac{x^4}{4!} + \dots
Here we used the expansion of exp(−x), obtained by substituting x by −x in (1.3.5.26). Since the added series have the same domain of convergence, the domain of the final series remains the same (−∞ < x < ∞) .
Another example is the subtraction of ln(1−x) from ln(1+x) .
Geometrically, ln(1−x) is obtained from ln(1+x) by inverting the direction of the x axis; therefore, if the interval of convergence in x is (−1,1] for ln(1+x) , it should be [−1,1) for ln(1−x) . Therefore the overlapping domain of convergence is the interval (−1,1) , not including the endpoints.
The series for ln(1+x) is already known from (1.3.5.13), and that of ln(1−x) is obtained by substituting x by −x :
(1.3.5.30)    \ln\frac{1+x}{1-x} = \ln(1+x) - \ln(1-x) = 2\sum_{n=0}^{\infty} \frac{x^{2n+1}}{2n+1} = 2\left(x + \frac{x^3}{3} + \frac{x^5}{5} + \dots\right) , \qquad |x| < 1
The power series (1.3.5.30) provides a fast-converging calculation of the ln of any positive number. For example, ln2 corresponds to x = 1/3 (since (1+x)/(1−x) = 2), which yields \ln 2 = 2\left(\frac{1}{3} + \frac{1}{3\cdot 3^3} + \frac{1}{5\cdot 3^5} + \dots\right) . In addition, from the relation (1.2.4.22) of atanhx to the logarithm, one obtains the series
(1.3.5.31)    \operatorname{atanh} x = \frac{1}{2}\ln\frac{1+x}{1-x} = \sum_{n=0}^{\infty} \frac{x^{2n+1}}{2n+1} , \qquad |x| < 1
The convergence of the partial sums of (1.3.5.30) and the calculation of a few values of ln are demonstrated in Fig. Approximations of lnx via atanhx .
It is shown, e.g., that the 21st approximation (11 steps) gives ln2 with 11 correct significant figures, a performance better by orders of magnitude than that of the ln(1+x) series previously shown.
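The same calculation is easy to reproduce. Below is a minimal sketch based on (1.3.5.30)/(1.3.5.31): writing ln a = 2·atanh(x) with x = (a−1)/(a+1), so that (1+x)/(1−x) = a. The helper name and the number of steps are assumptions chosen only for illustration.

```python
import math

# A minimal sketch: ln a = 2 * atanh(x) with x = (a-1)/(a+1),
# summing the series (1.3.5.31) term by term.
def ln_via_atanh(a, steps=11):
    x = (a - 1.0) / (a + 1.0)
    return 2.0 * sum(x ** (2 * n + 1) / (2 * n + 1) for n in range(steps))

print("ln 2  ~", ln_via_atanh(2.0), "   math.log(2)  =", math.log(2.0))
print("ln 10 ~", ln_via_atanh(10.0, steps=60), "   math.log(10) =", math.log(10.0))
```

For a = 2 (x = 1/3) the 11 steps already match the built-in value to about 11 figures, in line with the figure discussed above.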
Rule 2. Differentiation
A power series can be differentiated term by term within the interval of absolute convergence without affecting the convergence.
It can easily be shown that if a power series satisfies d'Alembert's test (1.3.5.22), then its term-by-term derivative also satisfies it.
As an example of the rule, by differentiating the Maclaurin series of expx (1.3.5.26) one immediately obtains the relation (exp x)' = exp x .
For another example we'll use the Maclaurin series of sinx . One obtains easily the derivatives of sinx at x = 0 :
(1.3.5.32)    \left.\frac{d^n}{dx^n}\sin x\right|_{x=0} = \begin{cases} 0 & n \text{ even} \\ (-1)^{(n-1)/2} & n \text{ odd} \end{cases} \qquad (\text{i.e. } 0, 1, 0, -1, 0, 1, \dots \text{ for } n = 0, 1, 2, 3, \dots)
From (1.3.5.4) for x0 = 0 and from (1.3.5.32) one obtains
(1.3.5.33)    \sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}\,x^{2n+1} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots
The test of d'Alembert (1.3.5.22) gives
(1.3.5.34)    \left|\frac{a_{n+1}}{a_n}\right| = \frac{x^2}{(2n+2)(2n+3)} \;\xrightarrow[n\to\infty]{}\; 0
meaning that (1.3.5.33) converges absolutely for any x .
The derivative of sinx can be obtained from its series:
(1.3.5.35)    \frac{d}{dx}\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}\,\frac{d}{dx}x^{2n+1} = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!}\,x^{2n} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dots
It is up to the user to show that the final result of (1.3.5.35) is the Maclaurin series of cosx .
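As a numerical cross-check of (1.3.5.33) and its term-by-term derivative (1.3.5.35), here is a minimal sketch; the truncation order and the sample point are assumptions made only for illustration.

```python
import math

# A minimal check: the truncated sine series (1.3.5.33) and its
# term-by-term derivative (1.3.5.35), compared with math.sin and math.cos.
def sin_series(x, N=15):
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1) for n in range(N + 1))

def sin_series_derivative(x, N=15):
    return sum((-1) ** n * x ** (2 * n) / math.factorial(2 * n) for n in range(N + 1))

x = 1.2
print("series:", sin_series(x), "   math.sin:", math.sin(x))
print("derivative of series:", sin_series_derivative(x), "   math.cos:", math.cos(x))
```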
In Section 1, page 2 the binomial expansion with a natural number as a power was studied. Here we are going to see the binomial expansion for any other power, which does not lead to a finite sum, but to a series.
In empirical sciences we often meet expressions that can be written in the form (1+x)^p , where x is small in comparison with unity. The simplest way to expand this expression as a series is by using its Maclaurin series. By using induction (Section 1, page 2), one can easily prove that
(1.3.5.36)    \frac{d^n}{dx^n}(1+x)^p = p(p-1)(p-2)\cdots(p-n+1)\,(1+x)^{p-n}
except for the case of p being a natural number or zero, which makes all the derivatives with n > p vanish. From (1.3.5.36) one obtains the Maclaurin series
(1.3.5.37)    (1+x)^p = 1 + px + \frac{p(p-1)}{2!}\,x^2 + \dots = \sum_{n=0}^{\infty} \frac{p(p-1)\cdots(p-n+1)}{n!}\,x^n
The application of d'Alembert's test yields
(1.3.5.38)    \left|\frac{a_{n+1}}{a_n}\right| = \frac{|p-n|}{n+1}\,|x| \;\xrightarrow[n\to\infty]{}\; |x|
meaning that the interval of absolute convergence is |x| < 1 . One should notice that there is no restriction on the signs of p and x .
Here is a simple example from mechanics. According to Newton's physics, the free motion (kinetic) energy of a particle with mass m is E = \frac{mv^2}{2} , where v is the velocity. According to Einstein's relativity the free motion energy is E = \frac{mc^2}{\sqrt{1 - v^2/c^2}} , which includes the energy of rest E_0 = mc^2 (c is the speed of light).
It is well known that Newton's physics is correct for v ≪ c , and the rest energy is undetectable in this framework, but this is not apparent from the definitions above.
This is solved by using the binomial expansion
(1.3.5.39)    E = mc^2\left(1 - \frac{v^2}{c^2}\right)^{-1/2} = mc^2\left(1 + \frac{1}{2}\frac{v^2}{c^2} + \frac{3}{8}\frac{v^4}{c^4} + \dots\right) \approx mc^2 + \frac{mv^2}{2}
which means that Newton's physics corresponds to the first approximation of Einstein's relativity, as far as free motion energy of a particle is concerned.
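A minimal numerical sketch of (1.3.5.39) follows; the numbers (m = 1 kg, c = 3e8 m/s rounded, v = 3e6 m/s, i.e. v/c = 0.01) are assumptions chosen only for illustration.

```python
import math

# A minimal sketch of (1.3.5.39): for v << c the relativistic free-motion
# energy minus the rest energy is close to (1/2) m v^2, and the next
# term of the expansion is much smaller.
m = 1.0      # kg (illustrative)
c = 3.0e8    # m/s (rounded speed of light)
v = 3.0e6    # m/s, i.e. v/c = 0.01

kinetic_rel = m * c**2 / math.sqrt(1.0 - (v / c)**2) - m * c**2
kinetic_newton = 0.5 * m * v**2
next_term = (3.0 / 8.0) * m * v**4 / c**2

print("relativistic kinetic energy :", kinetic_rel)
print("Newtonian (1/2) m v^2       :", kinetic_newton)
print("next term (3/8) m v^4 / c^2 :", next_term)
```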
Exercise 1. For the function ln(1+x) : show that the expression (1.3.5.15) diverges for x > 1 .
Exercise 2. From the power expansion of (1.3.5.25) :
Exercise 3. Obtain the Maclaurin series of sinhx by using the following techniques, and then compare them:
In addition answer the following questions:
Exercise 4. This exercise is related to the binomial expansion.