As an illustration of symbol manipulation and a further illustration
of data abstraction, consider the design of a
function
that performs symbolic differentiation of algebraic expressions. We would
like the
function
to take as arguments an algebraic expression and a variable and to return
the derivative of the expression with respect to the variable. For example,
if the arguments to the
function
are $ax^2 + bx +c$
and $x$, the
function
should return $2ax+b$. Symbolic differentiation
is of special historical significance in
the programming language Lisp.[1]
It was one of the
motivating examples behind the development of a computer language for
symbol manipulation. Furthermore, it marked the beginning of the line of
research that led to the development of powerful systems for symbolic
mathematical work, which are
today routinely used by applied mathematicians and physicists.
In developing the symbolic-differentiation program, we will follow the same
strategy of data abstraction that we followed in developing the
rational-number system of section 2.1.1. That
is, we will first define a differentiation algorithm that operates on
abstract objects such as sums, products, and
variables without worrying about how these are to be
represented. Only afterward will we address the representation problem.
The differentiation program with abstract data
To
keep things simple, we will consider a very simple
symbolic-differentiation program that handles expressions that are built up
using only the operations of addition and multiplication with two
arguments. Differentiation of any such expression can be carried out by
applying the following
reduction rules:
\[
\begin{array}{rll}
\dfrac{dc}{dx} & = &
0\quad \text{for $c$ a constant or a variable different from $x$} \\[12pt]
\dfrac{dx}{dx} & = & 1 \\[12pt]
\dfrac{d(u+v)}{dx} & = & \dfrac{du}{dx}+\dfrac{dv}{dx} \\[12pt]
\dfrac{d(uv)}{dx} & = & u\left( \dfrac{dv}{dx}\right)+v\left(
\dfrac{du}{dx}\right)
\end{array}
\]
Observe that the latter two rules are recursive in nature. That is, to
obtain the derivative of a sum we first find the derivatives of the terms
and add them. Each of the terms may in turn be an expression that needs to
be decomposed. Decomposing into smaller and smaller pieces will eventually
produce pieces that are either constants or variables, whose derivatives
will be either $0$ or
$1$.
To embody these rules in a
function,
we indulge in a little
wishful thinking, as we did in designing the rational-number implementation.
If we had a means for representing algebraic expressions, we should be able
to tell whether an expression is a sum, a product, a constant, or a
variable. We should be able to extract the parts of an expression. For a
sum, for example, we want to be able to extract the addend (first term) and
the augend (second term). We should also be able to construct expressions
from parts. Let us assume that we already have
functions
to implement the following selectors, constructors, and predicates:
is_variable(e)
Is e a variable?
is_same_variable(v1, v2)
Are v1 and v2 the same variable?
is_sum(e)
Is e a sum?
addend(e)
Addend of the sum e.
augend(e)
Augend of the sum e.
make_sum(a1, a2)
Construct the sum of a1 and a2.
is_product(e)
Is e a product?
multiplier(e)
Multiplier of the product e.
multiplicand(e)
Multiplicand of the product e.
make_product(m1, m2)
Construct the product of m1 and m2.
Using these, and the primitive predicate
is_number,
which identifies numbers, we can express the differentiation rules as the
following
function:
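A sketch of such a function, written directly from the reduction rules above, might look as follows; it assumes that the function error is available for reporting an unrecognized expression type.
function deriv(exp, variable) {
    return is_number(exp)
           ? 0                                      // a constant differentiates to 0
           : is_variable(exp)
           ? (is_same_variable(exp, variable) ? 1 : 0)
           : is_sum(exp)
           ? make_sum(deriv(addend(exp), variable), // derivative of a sum is the
                      deriv(augend(exp), variable)) // sum of the derivatives
           : is_product(exp)
           ? make_sum(make_product(multiplier(exp), // product rule
                                   deriv(multiplicand(exp), variable)),
                      make_product(deriv(multiplier(exp), variable),
                                   multiplicand(exp)))
           : error(exp, "unknown expression type -- deriv");
}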
This deriv function
incorporates the complete differentiation algorithm. Since it is expressed
in terms of abstract data, it will work no matter how we choose to
represent algebraic expressions, as long as we design a proper set of
selectors and constructors. This is the issue we must address next.
Representing algebraic expressions
We can imagine many ways to use list structure to represent algebraic
expressions. For example, we could use lists of symbols that mirror the
usual algebraic notation, representing $ax+b$ as
the list list("a", "*", "x", "+", "b").
However, it will be more convenient if we reflect the mathematical
structure of the expression in the JavaScript value representing it;
that is, to represent $ax+b$ as
list("+", list("*", "a", "x"), "b").
Placing a binary operator in front of its operands is called
prefix notation,
in contrast with the infix notation introduced in
section 1.1.1. With prefix notation, our
data representation for the differentiation problem is as follows:
The variables are
just strings.
They are identified by the primitive predicate
is_string:
function is_variable(x) { return is_string(x); }
Two variables are the same if the
strings representing them are equal:
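One way to write this test, together with constructors and selectors for sums and products in the same list style, is sketched below (simplification of sums and products is taken up later in this section):
function is_same_variable(v1, v2) {
    return is_variable(v1) && is_variable(v2) && v1 === v2;
}
Sums and products are constructed as lists whose first element is the operator string:
function make_sum(a1, a2) { return list("+", a1, a2); }
function make_product(m1, m2) { return list("*", m1, m2); }
A sum is a list whose first element is the string "+":
function is_sum(x) {
    return is_pair(x) && head(x) === "+";
}
The addend is the second item of the sum list, and the augend is the third:
function addend(s) { return head(tail(s)); }
function augend(s) { return head(tail(tail(s))); }
A product is a list whose first element is the string "*":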
function is_product(x) {
return is_pair(x) && head(x) === "*";
}
The multiplier is the second item of the product list:
function multiplier(s) { return head(tail(s)); }
The multiplicand is the third item of the product list:
function multiplicand(s) { return head(tail(tail(s))); }
Thus, we need only combine these with the algorithm as embodied by
deriv in order to have a working
symbolic-differentiation program. Let us look at some examples of its
behavior:
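For instance, with the representation above, calls and results look roughly like this (each comment shows the list structure that deriv returns):
deriv(list("+", "x", 3), "x");
// list("+", 1, 0)
deriv(list("*", "x", "y"), "x");
// list("+", list("*", "x", 0), list("*", 1, "y"))
deriv(list("*", list("*", "x", "y"), list("+", "x", 3)), "x");
// list("+", list("*", list("*", "x", "y"), list("+", 1, 0)),
//      list("*", list("+", list("*", "x", 0), list("*", 1, "y")),
//           list("+", "x", 3)))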
The program produces answers that are correct; however, they are
unsimplified. It is true that
\[
\begin{array}{lll}
\dfrac{d(xy)}{dx} & = & x\cdot 0+1\cdot y
\end{array}
\]
but we would like the program to know that
$x\cdot 0 = 0$,
$1\cdot y = y$, and
$0+y = y$. The answer for the second example
should have been simply y. As the
third example shows, this becomes a serious issue when the expressions are
complex.
Our difficulty is much like the one we encountered with the rational-number
implementation:
we haven't reduced answers to simplest form. To
accomplish the rational-number reduction, we needed to change only the
constructors and the selectors of the implementation. We can adopt a similar strategy here. We won't change deriv at
all. Instead, we will change
make_sum
so that if both summands are numbers,
make_sum
will add them and return their sum. Also, if one of the summands is 0,
then
make_sum
will return the other summand.
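A sketch of this improved make_sum follows; it uses a small helper, number_equal, to check whether an expression is equal to a given number. The product constructor can be improved along the same lines, building in that 0 times anything is 0 and 1 times anything is the thing itself.
function number_equal(exp, num) {
    return is_number(exp) && exp === num;
}
function make_sum(a1, a2) {
    return number_equal(a1, 0)
           ? a2                        // 0 + a2 simplifies to a2
           : number_equal(a2, 0)
           ? a1                        // a1 + 0 simplifies to a1
           : is_number(a1) && is_number(a2)
           ? a1 + a2                   // add two numeric summands outright
           : list("+", a1, a2);
}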
Although this is quite an improvement, the third example shows that there
is still a long way to go before we get a program that puts expressions
into a form that we might agree is simplest. The problem
of algebraic simplification is complex because, among other reasons, a
form that may be simplest for one purpose may not be for another.
Exercise 2.60
Show how to extend the basic differentiator to handle more kinds of
expressions.
For instance, implement the differentiation rule
\[
\begin{array}{lll}
\dfrac {d(u^{n})}{dx} & = & nu^{n-1}\left( \dfrac{du}{dx}\right)
\end{array}
\]
by adding a new clause to the deriv program
and defining appropriate
functions is_exp, base, exponent,
and
make_exp.
(You may use
the string "**"
to denote exponentiation.) Build in the rules that anything raised to the
power 0 is 1 and anything raised to the power 1 is the thing itself.
Exercise 2.61
Extend the differentiation program to handle sums and products of arbitrary
numbers of (two or more) terms. Then the last example above could be
expressed as
deriv(list("*", "x", "y", list("+", "x", 3)), "x");
Try to do this by changing only the
representation for sums and products, without changing the
deriv function
at all. For example, the addend of a sum would
be the first term, and the augend would be the
sum of the rest of the terms.
Exercise 2.62
Suppose we want to modify the differentiation program so that it works
with ordinary mathematical notation, in which
+"+"
and
*"*"
are
infix rather than prefix operators. Since the differentiation program
is defined in terms of abstract data, we can modify it to work with
different representations of expressions solely by changing the predicates,
selectors, and constructors that define the representation of the algebraic
expressions on which the differentiator is to operate.
Show how to do this in order to differentiate algebraic expressions
presented in infix form,
as in this example:
list("x", "+", list(3, "*", list("x", "+", list("y", "+", 2))))
To simplify the task, assume that
+"+"
and
*"*"
always take two arguments and that expressions are fully parenthesized.
The problem becomes substantially harder if we allow
a notation closer to ordinary infix notation, which omits unnecessary
parentheses and assumes that
multiplication has higher precedence than addition, as in this
example:
list("x", "+", 3, "*", list("x", "+", "y", "+", 2))
Can you design appropriate predicates, selectors, and constructors for
this notation such that our derivative program still works?