[Riesbeck: Re: Speed comparisons]
- To: Odonnell at YALE
- Subject: [Riesbeck: Re: Speed comparisons]
- From: Jonathan Rees <Rees at YALE>
- Date: Mon, 3 May 82 16:02:00 EDT
- Cc: Adams at YALE, T-Discussion at YALE
Date: 3-May-82 2:00PM-EDT (Mon)
From: Chris Riesbeck <Riesbeck>
Subject: Re: Speed comparisons
    how will things slow down when the garbage collector is added
The question is: how long will garbage collections take, and how
often will they happen? I don't have good answers to these questions;
all I can say is that
(a) because of the larger address space and more compact data structures,
one will probably be able (and want) to do more consing (twice as much?
six times as much?) between GC's than one does in DEC-20 lisps - see
the arithmetic sketch after this list;
(b) this is offset by the fact that the interpreter conses a lot, but
this may change - see below;
(c) garbage collection will "take a long time" (like half as much time
as it does now on the VAX), in spite of projected compiler & system
improvements (see below), until either I recode the GC in assembler or I
get about 5 months to REALLY bring the quality of compiled code up to
that of, say, BLISS-11's.
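To put some (entirely hypothetical) numbers on point (a): with a simple
two-space copying collector, the useful time between collections is
roughly the free semispace divided by the consing rate, so doubling the
room for consing doubles the interval. A back-of-the-envelope sketch,
in modern Scheme notation rather than T, with every number made up:

  ;; Illustrative arithmetic only - all figures are hypothetical.
  (define (seconds-between-gcs free-cells cells-consed-per-second)
    (/ free-cells cells-consed-per-second))

  ;; E.g. 2 million free cells at 100,000 conses per second gives
  ;; 20 seconds of useful work between collections:
  (seconds-between-gcs 2000000 100000)    ; => 20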
    how much consing can be removed from interpreted code
I think most of it can be eliminated, using a variety of techniques.
The current interpreter is still a very rough cut. The main things that
can be done include: better handling of "entities" in the compiler
(cutting by at least half the consing currently required to make
interpreted closures); specialized interpreters for things now
implemented as macros, like AND and OR; and stack-allocated
environments, accomplished by some combination of (a) more static code
analysis (ugh) and (b) dynamic "analysis" of the sort described by
McDermott at the 1980 Lisp Conference.
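To make the macro point concrete: a macro for AND conses a fresh
expansion (something like (IF e1 (AND e2 ...) nil)) every time
interpreted code runs through it, whereas a specialized interpreter
clause evaluates the subforms directly and allocates nothing. A sketch
in modern Scheme notation (not T's actual interpreter; EVAL-EXP and
ENV are assumed names for the main dispatch and the environment):

  ;; Hypothetical specialized evaluator for AND.  EVAL-EXP stands
  ;; in for the interpreter's main dispatch.  No list structure is
  ;; consed in the process.
  (define (eval-and exps env)
    (cond ((null? exps) #t)                      ; (AND) is true
          ((null? (cdr exps)) (eval-exp (car exps) env))  ; tail form
          ((eval-exp (car exps) env) (eval-and (cdr exps) env))
          (else #f)))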
    how much faster can the scanner get
This I don't know, but I expect a factor of 2 to 3 when the compiler
gets a little better, and maybe another factor of 2 to 3 when I've
developed some profiling tools to find out what the bottlenecks are, and
tuned the code accordingly.
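The profiling tools needn't be fancy to be useful. Something with the
flavor of the following would do for a start (modern Scheme notation;
RUNTIME is an assumed primitive returning elapsed seconds):

  ;; Minimal sketch of a timing harness, all names assumed: run a
  ;; thunk N times and report the average seconds per call, to see
  ;; where the scanner actually spends its time.
  (define (time-thunk n thunk)
    (let ((start (runtime)))
      (do ((i 0 (+ i 1)))
          ((= i n) (/ (- (runtime) start) n))
        (thunk))))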
    will it be the case that the interpreted form of compiled code will
    be available for prettyprinting? If not, I would expect far more
    code to stay interpreted than you do -- that's why it stays
    interpreted on the 20, not that most code won't compile.
I think it's pointless to leave the source code lying around in your
Lisp as S-expressions, regardless of whether it's interpreted or
compiled, when it's sitting out there in a much more readable form in
some file in your editor. In Maclisp, it takes fewer characters for me
to get from Lisp to the definition of some routine inside EMACS than it
would for me to ask Maclisp to pretty-print the routine: control-N
escape . routine-name return, versus open GRINDEF [or PP] space
routine-name close. And the result is so much more satisfying. This
sort of facility (hopefully even smoother than Maclisp's - more like the
Lisp Machine's) is high on my list of things to do - but it may end up
being easier on the VAX than on the Apollo. I'm surprised you don't do
this with TLISP and Z.
Anyhow, a general note about efficiency... I think the main thing right
now that makes it all so slow is compiler problems. I intend to improve
this situation in several phases, the first of which should occupy about
half of my summer and bring about a factor of 2 to 3 speedup. Tuning
(of which there has been none) should also be a big help.