We introduced compound
functions
in section 1.1.4 as a mechanism for
abstracting patterns of numerical operations so as to make them independent
of the particular numbers involved. With higher-order
functions,
such as
the integral function
of section 1.3.1, we began to
see a more powerful kind of abstraction:
functions
used to express general methods of computation, independent of the
particular functions involved. In this section we discuss two more elaborate
examples—general methods for finding zeros and fixed points of
functions—and show how these methods can be expressed directly as
functions.
Finding roots of equations by the half-interval method
The
half-interval method is a simple but powerful technique for
finding roots of an equation $f(x)=0$, where
$f$ is a continuous function. The idea is that,
if we are given points $a$ and
$b$ such that
$f(a) < 0 < f(b)$, then
$f$ must have at least one zero between
$a$ and $b$. To locate
a zero, let $x$ be the average of
$a$ and $b$ and
compute $f(x)$. If
$f(x) > 0$, then
$f$ must have a zero between
$a$ and $x$. If
$f(x) < 0$, then
$f$ must have a zero between
$x$ and $b$.
Continuing in this way, we can identify smaller and smaller intervals on
which $f$ must have a zero. When we reach a
point where the interval is small enough, the process stops. Since the
interval of uncertainty is reduced by half at each step of the process, the
maximal number of steps required grows as
$\Theta(\log( L/T))$, where
$L$ is the length of the original interval and
$T$ is the error tolerance (that is, the size of
the interval we will consider small enough).
Here is a
function that implements this strategy:
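A sketch of such a search function is given below, following the description in the next paragraph; the helpers abs, average, positive, and negative are stand-ins for the book's primitives (an assumption of this sketch), and close_enough uses the 0.001 tolerance discussed afterward.

```javascript
// Stand-ins for the book's primitive functions (assumptions of this sketch).
const abs = x => Math.abs(x);
const average = (x, y) => (x + y) / 2;
const positive = x => x > 0;
const negative = x => x < 0;

// Endpoints within 0.001 of each other count as close enough.
function close_enough(x, y) {
    return abs(x - y) < 0.001;
}

// Narrow the interval between neg_point (where f is negative) and
// pos_point (where f is positive) until it is small enough, then
// return its midpoint.
function search(f, neg_point, pos_point) {
    const midpoint = average(neg_point, pos_point);
    if (close_enough(neg_point, pos_point)) {
        return midpoint;
    } else {
        const test_value = f(midpoint);
        return positive(test_value)
               ? search(f, neg_point, midpoint)
               : negative(test_value)
               ? search(f, midpoint, pos_point)
               : midpoint; // f(midpoint) is exactly 0
    }
}
```

For example, search(x => x * x - 2, 0, 2) homes in on $\sqrt{2}$.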
We assume that we are initially given the function
$f$ together with points at which its values are
negative and positive. We first compute the midpoint of the two given
points. Next we check to see if the given interval is small enough, and if
so we simply return the midpoint as our answer. Otherwise, we compute as a
test value the value of $f$ at the midpoint. If
the test value is positive, then we continue the process with a new interval
running from the original negative point to the midpoint. If the test value
is negative, we continue with the interval from the midpoint to the positive
point. Finally, there is the possibility that the test value is 0, in
which case the midpoint is itself the root we are searching for.
To test whether the endpoints are close enough, we can use a
function
similar to the one used in section 1.1.7 for
computing square roots:[1]

function close_enough(x, y) {
    return abs(x - y) < 0.001;
}
The function
search
is awkward to use directly, because we can accidentally give it points at
which $f$'s values do not have the required
sign, in which case we get a wrong answer. Instead we will use
search via the following
function,
which checks to see which of the endpoints has a negative function value and
which has a positive value, and calls the search function
accordingly. If the function has the same sign on the two given points, the
half-interval method cannot be used, in which case the function
signals an error.[2]
function half_interval_method(f, a, b) {
    const a_value = f(a);
    const b_value = f(b);
    return negative(a_value) && positive(b_value)
           ? search(f, a, b)
           : negative(b_value) && positive(a_value)
           ? search(f, b, a)
           : error("values are not of opposite sign");
}
The following example uses the half-interval method to approximate
$\pi$ as the root between 2 and 4 of
$\sin\, x = 0$:
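A sketch of that call, with the definitions repeated so the example runs standalone; Math.sin and the helpers here stand in for the book's primitives (math_sin, abs, positive, negative, error), which is an assumption of this sketch.

```javascript
// Stand-ins for the book's primitive functions (assumptions of this sketch).
const abs = x => Math.abs(x);
const average = (x, y) => (x + y) / 2;
const positive = x => x > 0;
const negative = x => x < 0;
const error = message => { throw new Error(message); };

function close_enough(x, y) {
    return abs(x - y) < 0.001;
}

function search(f, neg_point, pos_point) {
    const midpoint = average(neg_point, pos_point);
    if (close_enough(neg_point, pos_point)) {
        return midpoint;
    } else {
        const test_value = f(midpoint);
        return positive(test_value)
               ? search(f, neg_point, midpoint)
               : negative(test_value)
               ? search(f, midpoint, pos_point)
               : midpoint;
    }
}

function half_interval_method(f, a, b) {
    const a_value = f(a);
    const b_value = f(b);
    return negative(a_value) && positive(b_value)
           ? search(f, a, b)
           : negative(b_value) && positive(a_value)
           ? search(f, b, a)
           : error("values are not of opposite sign");
}

half_interval_method(Math.sin, 2, 4); // result: 3.14111328125
```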
Here is another example, using the half-interval method to search for a root
of the equation $x^3 - 2x - 3 = 0$ between 1
and 2:
half_interval_method(x => x * x * x - 2 * x - 3, 1, 2);
1.89306640625
Finding fixed points of functions
A number $x$ is called a
fixed point of a
function $f$ if $x$
satisfies the equation $f(x)=x$. For some
functions $f$ we can locate a fixed point by
beginning with an initial guess and applying $f$
repeatedly,
\[
\begin{array}{l}
f(x), \ f(f(x)), \ f(f(f(x))), \ \ldots
\end{array}
\]
until the value does not change very much.[3] Using this idea, we can devise a
function
fixed_point
that takes as inputs a function and an initial guess and produces an
approximation to a fixed point of the function. We apply the function
repeatedly until we find two successive values whose difference is less
than some prescribed tolerance:
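A sketch of such a fixed_point function; the tolerance value and abs are stand-ins for the book's primitives, an assumption of this sketch.

```javascript
// A prescribed tolerance for "successive values close enough"
// (an assumed representative value).
const tolerance = 0.00001;
const abs = x => Math.abs(x);

function fixed_point(f, first_guess) {
    function close_enough(x, y) {
        return abs(x - y) < tolerance;
    }
    // Keep applying f until two successive guesses differ by
    // less than the tolerance.
    function try_with(guess) {
        const next = f(guess);
        return close_enough(guess, next)
               ? next
               : try_with(next);
    }
    return try_with(first_guess);
}

fixed_point(Math.cos, 1); // approximately 0.7391
```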
The fixed-point process is reminiscent of the process we used for finding
square roots in section 1.1.7. Both are based on
the idea of repeatedly improving a guess until the result satisfies some
criterion. In fact, we can readily formulate the
square-root computation as a fixed-point search. Computing the square root
of some number $x$ requires finding a
$y$ such that
$y^2 = x$. Putting this equation into the
equivalent form $y = x/y$, we recognize that we
are looking for a fixed point of the function[4]
$y \mapsto x/y$, and we can therefore try to
compute square roots as
function sqrt(x) {
    return fixed_point(y => x / y, 1);
}
Unfortunately, this fixed-point search does not converge. Consider an
initial guess $y_1$. The next guess is
$y_2 = x/y_1$ and the next guess is
$y_3 = x/y_2 = x/(x/y_1) = y_1$. This results
in an infinite loop in which the two guesses
$y_1$ and $y_2$ repeat
over and over, oscillating about the answer.
One way to control such oscillations is to prevent the guesses from changing
so much. Since the answer is always between our guess
$y$
and $x/y$, we can make a new guess that is not as
far from $y$ as $x/y$
by averaging $y$ with
$x/y$, so that the next guess after
$y$ is
$\frac{1}{2}(y+x/y)$ instead of
$x/y$. The process of making such a sequence of
guesses is simply the process of looking for a fixed point of
$y \mapsto \frac{1}{2}(y+x/y)$:
function sqrt(x) {
    return fixed_point(y => average(y, x / y), 1);
}
(Note that $y=\frac{1}{2}(y+x/y)$ is a simple
transformation of the equation $y=x/y$; to derive
it, add $y$ to both sides of the equation and
divide by 2.)
With this modification, the square-root
function
works. In fact, if we unravel the definitions, we can see that the sequence
of approximations to the square root generated here is precisely the same as
the one generated by our original square-root
function
of section 1.1.7. This approach of averaging
successive approximations to a solution, a technique we call
average damping, often aids the convergence of fixed-point searches.
Exercise 1.35
Show that the
golden ratio $\phi$
(section 1.2.2) is a fixed point of the
transformation $x \mapsto 1 + 1/x$, and use this
fact to compute $\phi$ by means of the
fixed_point function.
A fixed point of the function satisfies
\[ 1 + 1 / x = x \]
Solving for $x$, we get
\[ x^2 = x + 1 \]
\[ x^2 - x - 1 = 0 \]
Using the quadratic formula to solve for $x$,
we find that one of the roots of this equation is the golden ratio
$(1+\sqrt{5})/2$.
fixed_point(x => 1 + (1 / x), 1);
Exercise 1.36
Modify
fixed_point
so that it prints the sequence of approximations it generates, using the
primitive function display shown in
exercise 1.22. Then find a solution to
$x^x = 1000$ by finding a fixed point of
$x \mapsto \log(1000)/\log(x)$.
(Use the primitive function math_log,
which computes natural logarithms.)
Compare the number of steps this takes with and without average damping.
(Note that you cannot start
fixed_point
with a guess of 1, as this would cause division by
$\log(1)=0$.)
function fixed_point_with_average_dampening(f, first_guess) {
    function close_enough(x, y) {
        return abs(x - y) < tolerance;
    }
    function try_with(guess) {
        display(guess);
        const next = (guess + f(guess)) / 2;
        return close_enough(guess, next)
               ? next
               : try_with(next);
    }
    return try_with(first_guess);
}
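For comparison, a sketch of the modified fixed_point without average damping, applied to $x \mapsto \log(1000)/\log(x)$; the name fixed_point_with_display is hypothetical, and tolerance, abs, and display are stand-ins for the book's primitives.

```javascript
// Stand-ins for the book's primitives (assumptions of this sketch).
const tolerance = 0.00001;
const abs = x => Math.abs(x);
const display = x => { console.log(x); return x; };

// fixed_point modified to print each guess, without average damping.
function fixed_point_with_display(f, first_guess) {
    function close_enough(x, y) {
        return abs(x - y) < tolerance;
    }
    function try_with(guess) {
        display(guess);
        const next = f(guess);
        return close_enough(guess, next)
               ? next
               : try_with(next);
    }
    return try_with(first_guess);
}

// Solve x^x = 1000 by finding a fixed point of x -> log(1000) / log(x).
// Note the starting guess of 2, not 1, to avoid division by log(1) = 0.
fixed_point_with_display(x => Math.log(1000) / Math.log(x), 2);
```

Counting the printed guesses in each version shows the effect of average damping on the number of steps.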
Exercise 1.37
An infinite
continued fraction is an expression of the form
\[
\begin{array}{lll}
f & = & {\dfrac{N_1}{D_1+
\dfrac{N_2}{D_2+
\dfrac{N_3}{D_3+\cdots }}}}
\end{array}
\]
As an example, one can show that the infinite continued fraction
expansion with the $N_i$ and the
$D_i$ all equal to 1 produces
$1/\phi$, where
$\phi$ is the
golden ratio (described in
section 1.2.2). One way to approximate
an infinite continued fraction is to truncate the expansion after a given
number of terms. Such a truncation—a so-called
$k$-term finite continued
fraction—has the form
\[
{\dfrac{N_1}{D_1 +
\dfrac{N_2}{\ddots +
\dfrac{N_K}{D_K}}}}
\]
Suppose that n and
d are
functions
of one argument (the term index $i$) that
return the $N_i$ and
$D_i$ of the terms of the continued fraction.
Declare a function cont_frac
such that evaluating
cont_frac(n, d, k)
computes the value of the $k$-term finite
continued fraction. Check your
function
by approximating $1/\phi$ using
cont_frac(i => 1, i => 1, k);
for successive values of k. How large must
you make k in order to get an approximation
that is accurate to 4 decimal places?
If your
cont_frac function
generates a recursive process, write one that generates an iterative
process. If it generates an iterative process, write one that generates
a recursive process.
// recursive process
function cont_frac(n, d, k) {
    function fraction(i) {
        return i > k
               ? 0
               : n(i) / (d(i) + fraction(i + 1));
    }
    return fraction(1);
}
// iterative process
function cont_frac(n, d, k) {
    function fraction(i, current) {
        return i === 0
               ? current
               : fraction(i - 1, n(i) / (d(i) + current));
    }
    return fraction(k, 0);
}
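A quick check of the iterative version against $1/\phi = 2/(1+\sqrt{5}) \approx 0.6180$; how large $k$ must be for four-decimal accuracy is left to the exercise.

```javascript
// Iterative cont_frac, applied to the all-ones continued fraction.
function cont_frac(n, d, k) {
    function fraction(i, current) {
        return i === 0
               ? current
               : fraction(i - 1, n(i) / (d(i) + current));
    }
    return fraction(k, 0);
}

cont_frac(i => 1, i => 1, 20); // close to 1/phi = 0.6180339887...
```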
Exercise 1.38
In 1737, the Swiss mathematician
Leonhard Euler published a memoir
De Fractionibus Continuis, which included a
continued fraction expansion for $e-2$, where
$e$ is the base of the natural logarithms. In
this fraction, the $N_i$ are all 1, and the
$D_i$ are successively
1, 2, 1, 1, 4, 1, 1, 6, 1, 1, 8, …. Write a program that uses your
cont_frac function
from exercise 1.37 to approximate
$e$, based on Euler's expansion.
2 + cont_frac(i => 1,
              i => (i + 1) % 3 < 1 ? 2 * (i + 1) / 3 : 1,
              20);
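The approximation can be checked against Math.E; a standalone sketch that repeats the recursive cont_frac from exercise 1.37 so it runs on its own:

```javascript
// Recursive cont_frac from exercise 1.37.
function cont_frac(n, d, k) {
    function fraction(i) {
        return i > k
               ? 0
               : n(i) / (d(i) + fraction(i + 1));
    }
    return fraction(1);
}

// Euler's expansion: N_i all 1; D_i = 1, 2, 1, 1, 4, 1, 1, 6, ...
const e_approx = 2 + cont_frac(i => 1,
                               i => (i + 1) % 3 < 1 ? 2 * (i + 1) / 3 : 1,
                               20);
// e_approx is close to Math.E = 2.718281828459045...
```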
Exercise 1.39
A continued fraction representation of the tangent function was
published in 1770 by the German mathematician
J.H. Lambert:
\[
\begin{array}{lll}
\tan x & = & {\dfrac{x}{1-
\dfrac{x^2}{3-
\dfrac{x^2}{5-
\dfrac{x^2}{ \ddots }}}}}
\end{array}
\]
where $x$ is in radians.
Declare a function
tan_cf(x, k)
that computes an approximation to the tangent function based on
Lambert's formula.
As in exercise 1.37,
k specifies the number of terms
to compute.
function tan_cf(x, k) {
    return cont_frac(i => i === 1 ? x : -x * x,
                     i => 2 * i - 1,
                     k);
}
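Checking tan_cf against Math.tan; a standalone sketch repeating the recursive cont_frac from exercise 1.37.

```javascript
// Recursive cont_frac from exercise 1.37.
function cont_frac(n, d, k) {
    function fraction(i) {
        return i > k
               ? 0
               : n(i) / (d(i) + fraction(i + 1));
    }
    return fraction(1);
}

// Lambert's formula: the first numerator is x, the rest are -x^2,
// and the denominators are the odd numbers 1, 3, 5, ...
function tan_cf(x, k) {
    return cont_frac(i => i === 1 ? x : -x * x,
                     i => 2 * i - 1,
                     k);
}

tan_cf(1, 10); // close to Math.tan(1) = 1.5574077...
```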
[1]
We have used 0.001 as a representative
small number to indicate a tolerance for the acceptable error
in a calculation. The appropriate tolerance for a real calculation depends
upon the problem to be solved and the limitations of the computer and the
algorithm. This is often
a very subtle consideration, requiring help from a
numerical analyst or some
other kind of magician.
[2]
This
can be accomplished using
error,
which takes as
argument a string that is printed as an error message, along
with the number of the program line that gave rise to the call of
error.
[3]
To obtain a
fixed point of cosine on a calculator,
set it to radians mode and then repeatedly press the
$\cos$
button until the value does not change any longer.
[4]
$\mapsto$
(pronounced maps to) is the mathematician's way of
writing
lambda expressions.
$y \mapsto x/y$ means
y => x / y,
that is, the function whose value at $y$ is
$x/y$.