
"Correct" Lisp Interpreters



After reading your note, it doesn't appear that we disagree at all; I'll
take the "tone" of your first sentence to be merely "first impulse".

But you ask a very good question -- why is it so hard to write a correct
interpreter for Lisps?  (Here "correct" means one that obeys local
declarations, and implements local, or "lexical", variables correctly,
whether they are declared so explicitly or implicitly.)

The first answer is that most Lisps were "bootstrapped" from some
machine-language program that implemented the interpreter, and writing a
"correct" interpreter is hard to do in anything other than Lisp (notice
how many Lisp compilers you find written in anything other than Lisp!)

The second answer is that after the bug is noted, there are two easy
"cop outs": (1) In the MacLisp world, we just spread the word that the
interpreter had a bug and we darned well weren't going to fix it! [I was
partially responsible for this state of affairs during the early 70's at
MIT, but was also buoyed along by the general ambiance of arrogance for
which the MIT AI Lab was so justly infamous]. (2) In the InterLisp
world, following standard practice there, whenever a bug was discovered
that was too troublesome to fix, it was merely declared a "feature", and
documented as such!  That explains why MacLisp defaults to lexical
variables, and Interlisp defaults to dynamic variables.

Often, one's first thought about doing a "correct" interpreter is simply
to resort to alists for the lexical variables, pushing and popping the
variable holding that alist on each new "application" of a Symbol that
has a lambda definition.  This works; several people I know have written
such interpreters.
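To make the alist approach concrete, here is a minimal sketch in modern terms (not any particular historical Lisp): each application conses a fresh extension onto the lambda's saved alist, and variable lookup simply walks that alist.  All names and the expression encoding here are invented for illustration.

```python
def lookup(name, env):
    """Search the association list for the innermost binding of name."""
    for key, val in env:
        if key == name:
            return val
    raise NameError(name)

def evaluate(expr, env):
    """Tiny evaluator: symbols, numbers, (lambda (params) body), application."""
    if isinstance(expr, str):                   # variable reference
        return lookup(expr, env)
    if isinstance(expr, (int, float)):          # self-evaluating
        return expr
    if expr[0] == 'lambda':                     # close over the CURRENT alist
        _, params, body = expr
        return ('closure', params, body, env)
    fn = evaluate(expr[0], env)                 # application
    args = [evaluate(a, env) for a in expr[1:]]
    _, params, body, closed_env = fn
    # "Push" the new bindings by consing onto the saved (lexical) alist,
    # not onto the caller's environment -- that is the whole trick.
    new_env = list(zip(params, args)) + closed_env
    return evaluate(body, new_env)
```

Evaluating the upward-funarg test case `(((lambda (x) (lambda (y) x)) 1) 2)` yields 1 here, because the inner lambda captured the alist holding `x`; a dynamically scoped interpreter would find `x` already unbound.  Note that every application conses a new environment -- exactly the cost complained about below.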

But indeed consing, *** just to interpret some piece of Lisp code ***,
is anathema to most of us.  Fortunately, there exist completely
stack-bound solutions.  As I mentioned before, Lisp/370 has a most
elegant solution (and probably the first correctly working
implementation of spaghetti stacks).  The interpreter works by
"compiling up on the fly" a stack frame exactly like the one that would
exist if the code were to be run compiled; function boundaries are
marked in the frame so that a PROG within a PROG [which produces two
different frames -- one an extension of the other -- in the spaghetti
model] will not signal the end of the lexical context.  All variable
lookups are done by a stack-searching primitive.  But the Interpreter
was itself written in Lisp (Ha, Ha!)
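The frame-marking scheme described above can be sketched as follows; this is a speculative reconstruction of the idea, not Lisp/370's actual code, and all names are invented.  Entering a function pushes a frame marked as a function boundary; entering a PROG pushes an unmarked extension frame; the stack-searching lookup primitive scans frames top-down and stops once it passes a boundary, so an inner PROG does not cut off the enclosing function's variables.

```python
class Frame:
    def __init__(self, bindings, function_boundary):
        self.bindings = dict(bindings)          # variable -> value slots
        self.function_boundary = function_boundary

stack = []

def enter_function(bindings):
    # A frame like the one compiled code would build, marked as a boundary.
    stack.append(Frame(bindings, function_boundary=True))

def enter_prog(bindings):
    # A PROG extends the current function's frame: no boundary mark.
    stack.append(Frame(bindings, function_boundary=False))

def leave():
    stack.pop()

def lookup(name):
    """Stack-searching primitive: scan frames from the top, but never
    search past the current function's boundary frame."""
    for frame in reversed(stack):
        if name in frame.bindings:
            return frame.bindings[name]
        if frame.function_boundary:
            break                               # left the lexical context
    raise NameError(name)
```

After `enter_function({'x': 1})` and `enter_prog({'y': 2})`, both `x` and `y` are visible; after a further `enter_function({'z': 3})`, looking up `x` fails, as it should -- the caller's locals are outside the new lexical context (closures and special variables would need additional machinery not shown).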

Now, "Corky" Cartwright is caught in a misconception which suggests that
he either hasn't seen, or perhaps didn't fully understand, my paper in
the 1982 Lisp conference titled "Constant Time Interpretation for
Shallow-Bound Variables in the Presence of Mixed SPECIAL/LOCAL
Declarations".  [I don't mean any offense here.  I only know of 5 people
besides myself who were interested enough to try to understand the
algorithm].  In particular, his claim
    "The explanation for this difference is that in a
    conventional shallow binding implementation of lexical
    LISP, updating and restoring the shallow binding table on
    a function call requires about twice the computation (on the
    average) as it does in its dynamically scoped counterpart."
would seem to be belied by the actual implementation mentioned in the
paper.  The trick was that each Symbol has an independent cell for the
shallow binding of local bindings and another for the shallow binding
of special bindings.  The code which implements "binding" is essentially
the same, regardless of whether it is "lexical" or "dynamic", since the
data structures are homologous.
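The two-cell idea can be illustrated with a hedged sketch -- an invented reconstruction, not the paper's actual algorithm: each symbol carries one shallow-binding cell for its current local value and an independent one for its current special value, so a single bind/unbind routine serves both kinds of binding, with only the cell selector differing.

```python
class Symbol:
    def __init__(self, name):
        self.name = name
        # Two homologous shallow-binding cells, each a stack of saved values.
        self.cells = {'local': [], 'special': []}

def bind(symbol, kind, value):
    """One code path for both lexical and dynamic binding: push the new
    value onto the selected shallow cell.  kind is 'local' or 'special'."""
    symbol.cells[kind].append(value)

def unbind(symbol, kind):
    symbol.cells[kind].pop()

def value(symbol, kind):
    """Constant-time lookup: just read the top of the shallow cell."""
    return symbol.cells[kind][-1]
```

Because the two cells have identical structure, binding a special variable costs the same handful of operations as binding a local one -- which is the point of the paper's rebuttal above.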

Yet, interpreters may be slow for a variety of reasons; often it's the
tremendous number of "hooks" and kludges needed to make debugging
easier.  I don't know why T's interpreter is so slow; I don't even know
how to evaluate the relative cost of the modules needed to support the
interpreter described in "Constant Time . . . " mentioned above.

In the end, I agree totally with your exclamation: there is no excuse at
all for a situation like this being allowed to exist.  In the aggregate,
a correct interpreter can *** at most *** cost a few billion machine
cycles more; a buggy interpreter will certainly cost thousands of
man-hours of wasted time.  Do you know what a "cycle" on a modern Lisp
machine costs?  How about the 8 hours per year you spend "in a black
hole" because of a mismatch between the Interpreter and Compiler;  what
is that worth to you?  What is it worth to a $30,000/year programmer?
What is it worth to a $60,000/year research scientist?