More on "Correct" Lisp Interpreters
- To: Robert.Cartwright@cmu-cs-cad.ARPA
- Subject: More on "Correct" Lisp Interpreters
- From: JonL.pa@PARC-MAXC.ARPA
- Date: Fri, 10 Feb 84 12:44:00 GMT
- Cc: Franz-Friends@Berkeley, Guy.Steele@CMUA.ARPA, JonL.pa@PARC-MAXC.ARPA
- In-reply-to: your message of 9 Feb 84 15:41:26 EST
- Original-date: 10 Feb 84 04:44 PST
Hmmm, yes, it looks like we're talking about slightly different things
here. I take back my charge of "misconception". Let me try to clarify.
"Constant time" in my paper refers to the access time for a variable
during the interpretation phase; as such, the time is constant
regardless of the number of variables present, and regardless of the
declarations present. As I mention in the paper, the "usual" approach
involves some kind of alist-like mechanism, and this would entail an
access time linear in the number of variables in the function being
interpreted [Lisp/370's interpreter, while keeping the variables only in
the stack frame, still had to search the frames one slot at a time; the
"consy" model of course searches the true alist one entry at a time].
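The difference between the two access models can be sketched in Python (my illustration only, not code from the paper; the names are invented):

```python
# Alist-style lookup: search a list of (name, value) pairs one entry
# at a time, so access time is linear in the number of variables.
def alist_lookup(alist, name):
    for var, value in alist:
        if var == name:
            return value
    raise NameError("unbound variable: " + name)

# A "constant time" scheme instead resolves each variable to a fixed
# value cell, so every access is a single indirection regardless of
# how many variables or declarations are present.
class ValueCell:
    def __init__(self, value):
        self.value = value

env_alist = [("x", 1), ("y", 2), ("z", 3)]
print(alist_lookup(env_alist, "z"))   # scans 3 entries -> 3

cell_for_z = ValueCell(3)
print(cell_for_z.value)               # one indirection -> 3
```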
I'm not sure where Baker's "Shallow Binding" paper finally got
published, but I have a preliminary version dated January, 1977,
numbered "AI Working Paper 138" from the MIT AI Lab. This re-rooting
scheme is a generalization of what happens on the Lisp Machine when a
context switch occurs -- the currently running stack group has to be
"unwound" upwards, and the new one has to be "unwound" downwards
[because of the potentially unbounded time cost for these "unwindings",
Interlisp-D adopted the deep bound model].
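The re-rooting idea can be modeled in a few lines of Python (my sketch, not Baker's code): variables live in shared value cells, each node of the environment tree records how its bindings differ from its parent, and a "context switch" swaps those differences into the cells along the path from the old root.

```python
class Cell:
    """A shallow-bound value cell; access is one indirection."""
    def __init__(self, value=None):
        self.value = value

class Env:
    """A node in the environment tree.  The root's bindings live in
    the cells themselves; every other node records, per rebound cell,
    the value that cell should hold in *that* environment."""
    def __init__(self, parent=None, diffs=None):
        self.parent = parent
        self.diffs = diffs or []   # list of (cell, value-here)

def reroot(env):
    """Make `env` the new root by swapping diff values into the cells
    along the path -- cost proportional to path length, hence the
    potentially unbounded 'unwinding' time mentioned above."""
    if env.parent is None:
        return
    reroot(env.parent)             # first make the parent the root
    old_root = env.parent
    swapped = []
    for cell, val in env.diffs:
        swapped.append((cell, cell.value))
        cell.value = val
    old_root.parent, old_root.diffs = env, swapped
    env.parent, env.diffs = None, []

x = Cell(10)
root = Env()
child = Env(root, [(x, 99)])       # child rebinds x to 99
reroot(child)                      # context switch into child
print(x.value)                     # -> 99
reroot(root)                       # switch back
print(x.value)                     # -> 10
```

Note that once re-rooting is done, every variable access in the current environment is a plain cell read.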
**** But Baker's "re-rooting" scheme would be "constant time" in exactly
the same sense as I used the term in my 1982 Lisp Conference paper. ****
On the other hand, I don't think it will be as easy as you suggest to
fix his scheme to do mixed local and special declarations right. The
major problem for these "value cell" schemes is not how to "shield" one
lexical environment from another, but how to decide whether a particular
occurrence of a variable to be interpreted should be treated as a "local"
variable, or as a "special" variable. That's the reason for the DLINTZ
table mentioned in the "Constant time" paper.
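A hypothetical sketch of that decision, in Python (the names here, including the declarations set, are purely illustrative; the paper's DLINTZ machinery is more involved):

```python
# Variables proclaimed special (dynamically bound) -- an assumed example.
SPECIALS = {"*print-base*"}

def classify(name, lexical_frame):
    """Decide how an interpreted variable reference is resolved:
    via its dynamic value cell, or via a lexical slot."""
    if name in SPECIALS:
        return "special"           # declaration wins: dynamic cell
    if name in lexical_frame:
        return "local"             # lexically apparent: use the slot
    return "special"               # free references default to dynamic

print(classify("x", {"x": 0}))              # -> local
print(classify("*print-base*", {"x": 0}))   # -> special
```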
I don't know what transpired verbally between you and Guy (Steele) about
interpreter timings, so I'm not sure just exactly what his comment
applies to. But let me assume that he was comparing a Lisp interpreter
design that does lexical binding "right" with one that doesn't attempt
to do it at all. Then I'd have to concur that the latter would be
faster. [It would also be "wrong" in the sense that got this discussion
started in the first place.]
What I meant to convey by my former note was that an interpreter design
and implementation exists (the "Constant time" paper) which
1) does both lexical and dynamic bindings correctly
2) costs no more for lexical variables than for dynamic ones
3) is "constant time" for variable access.
As you might imagine, I would loathe a solution that changed the names
of my program variables. Even more so than the bugs in the existing
interpreters, which ignore declarations, and have no provisions for
lexical variables. But I think we have two main points of mutual
understanding here: one, that a "correct" Lisp-semantics interpreter
will be slower than the existing "wrong" ones, and two, that *** given
that one will do a "correct" interpreter, then a constant-access-time
one exists in which there is no time penalty for using lexical variables
as opposed to dynamic variables ***.
What we really don't know is just how much slower a "correct"
interpreter will be, nor how much of the slowdown is attributable to
retention of Lisp semantics (rather than adopting Scheme semantics).
The comparison here is not "Scheme against Lisp", but "Correct against
Incorrect".
Additional comments: (yea, this note is getting long!)
Suppose the "Constant time" interpreter runs more slowly than the
"wrong" one mentioned above (and I'm sure it does!). It would be very
difficult to assess, without some serious experimentation and research,
just how much of the time slowdown is due to the extra instructions
required at access time (remember: the "constant" in the paper isn't 1,
whereas for a "wrong" implementation like Interlisp-10, which is
shallow-bound, it really could be 1 instruction), how much is due to the
maintenance of the BLINTZ and DLINTZ tables, and how much is due to the
generous provisions which make debugging easier.
The question we need to address is: assuming we like Lisp semantics, how
much slowdown are we willing to pay to fix the abominable situation that
now exists? Let me toss out the number 50%. Experience has shown that
PDP-10 MacLisp users were willing to pay a slowdown cost of 200% to 300%
for the debugging facilities provided by *RSET mode. That surprised me.
I used to keep *RSET mode off, in order to keep up the speed, but
finally decided that that was a counter productive mode. Ultimately I
decided that 5 minutes of my debugging time was more valuable than
millions of PDP-10 instructions. Especially when one works at night a
lot, and literally trillions of machine cycles are going to waste!
What about FUNARGS?
The "Constant time" paper only addresses the linear stack model; Baker
addresses the full tree-environment model. Multiple stack groups such
as the Lisp Machine has permit multiprocessing, but are still only a
multiple set of linear stacks.
Consequently, I suspect Baker's scheme does FUNARGs right, whereas the
Lisp Machine, and the scheme in "Constant time", only permit what I'd
call "limited" FUNARGS -- the closures only "close" over explicitly
named variables (the "Constant time" paper, however, permits the
implicit capture into funargs of all the lexically-apparent variables
too). But, the application of a "limited" FUNARG only costs a time
linear in the number of variables captured by the "limited closure",
whereas the application of a true FUNARG as in Baker's scheme is
essentially equivalent to a context switch, and thus would be unbounded
in time.
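A "limited" closure of this kind can be sketched in Python (again an illustration of the cost argument, not the paper's actual representation):

```python
# A "limited" FUNARG copies only the explicitly named variables, so
# both creating and applying it cost time linear in the number of
# variables captured -- never in the size of the whole environment.
def make_limited_funarg(body, names, env):
    captured = {n: env[n] for n in names}      # O(len(names)) capture
    def apply_funarg(*args):
        # Only the captured bindings are reinstated around the call.
        return body(captured, *args)
    return apply_funarg

env = {"a": 1, "b": 2, "c": 3}                 # c is NOT captured
add_ab = make_limited_funarg(lambda cap, x: cap["a"] + cap["b"] + x,
                             ["a", "b"], env)
print(add_ab(10))                              # -> 13
```

A full tree-environment FUNARG, by contrast, would have to reinstate an entire environment branch on application, which is what makes it equivalent to a context switch.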
I'm enjoying this interchange with you, and hope the other members of
Franz-Friends aren't unduly upset at being cc'd with all these deep
discussions.