…It's in words that the magic is—Abracadabra, Open Sesame, and the rest—but the magic words in one story aren't magical in the next. The real magic is to understand which words work, and when, and for what; the trick is to learn the trick. …And those words are made from the letters of our alphabet: a couple-dozen squiggles we can draw with the pen. This is the key! And the treasure, too, if we can only get our hands on it! It's as if—as if the key to the treasure is the treasure!
John Barth, Chimera
In our study of program design, we have seen that expert programmers control the complexity of their designs with the same general techniques used by designers of all complex systems. They combine primitive elements to form compound objects, they abstract compound objects to form higher-level building blocks, and they preserve modularity by adopting appropriate large-scale views of system structure. In illustrating these techniques, we have used JavaScript as a language for describing processes and for constructing computational data objects and processes to model complex phenomena in the real world. However, as we confront increasingly complex problems, we will find that JavaScript, or indeed any fixed programming language, is not sufficient for our needs. We must constantly turn to new languages in order to express our ideas more effectively. Establishing new languages is a powerful strategy for controlling complexity in engineering design; we can often enhance our ability to deal with a complex problem by adopting a new language that enables us to describe (and hence to think about) the problem in a different way, using primitives, means of combination, and means of abstraction that are particularly well suited to the problem at hand.[1]
Programming is endowed with a multitude of languages. There are physical languages, such as the machine languages for particular computers. These languages are concerned with the representation of data and control in terms of individual bits of storage and primitive machine instructions. The machine-language programmer is concerned with using the given hardware to erect systems and utilities for the efficient implementation of resource-limited computations. High-level languages, erected on a machine-language substrate, hide concerns about the representation of data as collections of bits and the representation of programs as sequences of primitive instructions. These languages have means of combination and abstraction, such as function declaration, that are appropriate to the larger-scale organization of systems.
Metalinguistic abstraction—establishing new languages—plays an important role in all branches of engineering design. It is particularly important to computer programming, because in programming not only can we formulate new languages but we can also implement these languages by constructing evaluators. An evaluator (or interpreter) for a programming language is a function that, when applied to a statement or expression of the language, performs the actions required to evaluate that statement or expression. It is no exaggeration to regard this as the most fundamental idea in programming:
The evaluator, which determines the meaning of statements and expressions in a programming language, is just another program.
To appreciate this point is to change our images of ourselves as programmers. We come to see ourselves as designers of languages, rather than only users of languages designed by others.
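To make the idea concrete, here is a minimal sketch of an evaluator for a tiny language of numbers and sums, written as an ordinary JavaScript function. It is our own illustration, not the book's evaluator (that arrives in section 4.1), and the representation of expressions as nested arrays is chosen purely for brevity.

// A toy evaluator: an expression is either a number or an array
// of the form ["+", e1, e2]. The evaluator is just another program
// that walks this representation and performs the required actions.
function evaluate_toy(expr) {
    if (typeof expr === "number") {
        return expr;                                  // numbers evaluate to themselves
    } else if (Array.isArray(expr) && expr[0] === "+") {
        return evaluate_toy(expr[1]) + evaluate_toy(expr[2]);
    } else {
        throw new Error("unknown expression type: " + expr);
    }
}

console.log(evaluate_toy(["+", 1, ["+", 2, 3]]));     // 6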
In fact, we can regard almost any program as the evaluator for some language. For instance, the polynomial manipulation system of section 2.5.3 embodies the rules of polynomial arithmetic and implements them in terms of operations on list-structured data. If we augment this system with functions to read and print polynomial expressions, we have the core of a special-purpose language for dealing with problems in symbolic mathematics. The digital-logic simulator of section 3.3.4 and the constraint propagator of section 3.3.5 are legitimate languages in their own right, each with its own primitives, means of combination, and means of abstraction. Seen from this perspective, the technology for coping with large-scale computer systems merges with the technology for building new computer languages, and computer science itself becomes no more (and no less) than the discipline of constructing appropriate descriptive languages.
We now embark on a tour of the technology by which languages are established in terms of other languages. In this chapter we shall use JavaScript as a base, implementing evaluators as JavaScript functions. We will take the first step in understanding how languages are implemented by building an evaluator for JavaScript itself. The language implemented by our evaluator will be a subset of JavaScript. Although the evaluator described in this chapter is written for a particular subset of JavaScript, it contains the essential structure of an evaluator for any expression-oriented language designed for writing programs for a sequential machine. (In fact, most language processors contain, deep within them, a little Lisp evaluator.) The evaluator has been simplified for the purposes of illustration and discussion, and some features have been left out that would be important to include in a production-quality JavaScript system. Nevertheless, this simple evaluator is adequate to execute most of the programs in this book.[2]
An important advantage of making the evaluator accessible as a JavaScript program is that we can implement alternative evaluation rules by describing these as modifications to the evaluator program. One place where we can use this power to good effect is to gain extra control over the ways in which computational models embody the notion of time, which was so central to the discussion in chapter 3. There, we mitigated some of the complexities of state and assignment by using streams to decouple the representation of time in the world from time in the computer. Our stream programs, however, were sometimes cumbersome, because they were constrained by the applicative-order evaluation of JavaScript. In section 4.2, we'll change the underlying language to provide for a more elegant approach, by modifying the evaluator to provide for normal-order evaluation.
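As a small taste of the issue, the sketch below (our own example, not code from section 4.2) shows how applicative-order evaluation forces every argument of a user-defined unless function to be evaluated before the function is applied, and how wrapping arguments in parameterless functions ("thunks") simulates by hand the delaying that a normal-order evaluator would perform automatically.

function fail() { throw new Error("should never be evaluated"); }

// Applicative order: all arguments are evaluated before unless is applied,
// so fail() would run here even though its value is never needed.
function unless(condition, usual_value, exceptional_value) {
    return condition ? exceptional_value : usual_value;
}
// unless(true, fail(), 42);                          // throws before unless is applied

// Delaying by hand: each argument becomes a parameterless function that is
// called only when its value is actually required.
function unless_delayed(condition, usual_thunk, exceptional_thunk) {
    return condition ? exceptional_thunk() : usual_thunk();
}
console.log(unless_delayed(true, fail, () => 42));    // 42; fail is never called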
Section 4.3 implements a more ambitious linguistic change, whereby statements and expressions have many values, rather than just a single value. In this language of nondeterministic computing, it is natural to express processes that generate all possible values for statements and expressions and then search for those values that satisfy certain constraints. In terms of models of computation and time, this is like having time branch into a set of possible futures and then searching for appropriate time lines. With our nondeterministic evaluator, keeping track of multiple values and performing searches are handled automatically by the underlying mechanism of the language.
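To suggest what the nondeterministic evaluator will take care of for us, the sketch below carries out such a search explicitly in ordinary JavaScript: it enumerates every combination of choices and keeps those that satisfy a constraint. The example and names are ours; section 4.3 will let a program simply make choices and state requirements, leaving the enumeration and backtracking to the language.

// Explicit generate-and-test, written by hand: try every combination of
// choices for i and j and keep the pairs whose sum is prime.
function is_prime(n) {
    for (let d = 2; d * d <= n; d = d + 1) {
        if (n % d === 0) return false;
    }
    return n > 1;
}
function prime_sum_pairs(low, high) {
    const results = [];
    for (let i = low; i <= high; i = i + 1) {         // one "branch" per choice of i
        for (let j = i; j <= high; j = j + 1) {       // and per choice of j
            if (is_prime(i + j)) {                    // keep only futures meeting the constraint
                results.push([i, j]);
            }
        }
    }
    return results;
}
console.log(prime_sum_pairs(1, 5));                   // [[1, 1], [1, 2], [1, 4], [2, 3], [2, 5], [3, 4]]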
In section 4.4 we implement a logic-programming language in which knowledge is expressed in terms of relations, rather than in terms of computations with inputs and outputs. Even though this makes the language drastically different from JavaScript, or indeed from any conventional language, we will see that the logic-programming evaluator shares the essential structure of the JavaScript evaluator.
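As a rough hint of the difference (a toy illustration of our own, far simpler than the query language of section 4.4): knowledge about a single parent relation is stored as data, and the same relation can be queried in either direction, rather than being a one-way computation from inputs to outputs.

// Facts of one relation, stored as data rather than as a function.
// The names and representation here are made up for illustration only.
const parent_facts = [
    ["Adam", "Cain"],
    ["Adam", "Abel"],
    ["Eve",  "Cain"]
];

// The same relation answers questions in either direction.
function children_of(p) {
    return parent_facts.filter(f => f[0] === p).map(f => f[1]);
}
function parents_of(c) {
    return parent_facts.filter(f => f[1] === c).map(f => f[0]);
}

console.log(children_of("Adam"));                     // ["Cain", "Abel"]
console.log(parents_of("Cain"));                      // ["Adam", "Eve"]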