Re: lisp machines vs. lisp on other boxes
Frankly, I am hesitant to get involved in this RISC vs CISC debate
because it is likely to degenerate into unenlightening religious
warfare.  However, as a member of the SPUR (Lisp on a RISC) project
here at Berkeley, I have a few comments.
        Date: Sat, 23 May 87 10:13 EDT
        From: Chris Lindblad <CJL@REAGAN.AI.MIT.EDU>

            Date: 22 May 87 13:37:38 PDT (Fri)

                I have a hard time believing that a standard
                architecture machine will be able to simultaneously
                provide the speed and environment that a lisp machine
                provides.

            This is precisely the RISC vs CISC issue.  Personally I
            would very much like to see the good parts of the Symbolics
            programming environment running on something like Sun's new
            RISC chip.

        So would I, but I'm not holding my breath.  Building a lisp
        system from scratch is an enormous job.

    Everything CJL says is exactly right.  I'd like to add some other
    points.
    Regarding speed: I don't think the "RISC vs CISC issue" has much to
    do with anything.  "RISC" is a funny term, as you can see if you
    read Computer Architecture News: nobody can agree what it means.
    It actually tends to refer to a whole set of architecture ideas,
    many of which don't have anything to do with each other.  For
    example, much of the Berkeley RISC chip's performance is related to
    their "register window", which has nothing especially to do with
    reduction of the instruction set, and it's a feature that is in
    every MIT-derived Lisp Machine (CADR, LM-2, 36xx, and so on).
RISC is a funny term, and it is becoming less and less useful as the
marketing types use it to sell new architectures (Symbolics too?).
However, if you read the RISC papers from IBM, Stanford, and Berkeley,
you can get a reasonable feel for the approach (note that I say feel;
I agree with Weinreb that the concept is vaguely defined).
In my idiosyncratic definition, RISCs leave a feature out of the
hardware unless its *proven* performance advantage cannot be equaled
by software alone.  This statement has two important parts.  The first
says that the value of a hardware feature must be proven by
measurements on real programs before it is included.  This principle
is just sound engineering; however, it was overlooked in many
architectures designed in the late 1970s.  The second part says that
the system designer should try to do something in software, not
hardware, because the former is more malleable than the latter.  For a
good example of this approach, see Dave Ungar's dissertation on the
SOAR architecture or George Taylor's forthcoming dissertation on SPUR.
    Having certain hardware that's particularly useful for Lisp does
    not, by itself, spoil any of the performance gains that are claimed
    by architectures that claim to be "RISC".  For example, the 3600
    tag-checking and EGC hardware are very simple, very small, and
    don't slow down the critical paths of the processor at all.  Note
    that the Berkeley group themselves are working on something they
    describe as a RISC machine specialized for Lisp.
Notwithstanding these two features, the 3600 is the antithesis of
RISC ideas.  It contains a large amount of hardware to enhance the
performance of specific Lisp constructs.  I believe, from second-hand
evidence and hearsay, that much of this hardware is seldom used.  One
measure of the hardware complexity is the large amount of microcode in
the 3600.  Another is Symbolics's evident difficulty in bringing a
considerably faster machine to market.  In addition, the high level of
its instruction set precludes compiler optimizations.
To consider a concrete example, take array bounds checking.  Symbolics
does it concurrently with the array access, with special hardware.  A
more RISC approach would generate the two instructions (load and
compare-trap) per reference necessary to check the bounds, and rely on
a compiler to eliminate these instructions where redundant and to move
them out of loops.  The advantage of this approach is that the
processor is even simpler (i.e., cheaper and faster to market, both
now and when the technology changes) and that the RISC frequently
saves the memory (or cache) access necessary to retrieve the array
bounds.
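The compiler transformation described above can be sketched in C.  All
the names here are hypothetical, and an early return stands in for a
hardware trap; the point is only that the per-reference check is a
couple of cheap instructions, and that a single hoisted check can
replace all of the checks inside a loop.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical vector with an explicit bound, standing in for a Lisp
 * array whose header word holds its length. */
typedef struct { const int *data; size_t length; } vector;

/* Naive form: the compiler emits a load of the bound and a
 * compare-trap before every reference.  The "trap" is modeled here as
 * returning -1. */
int vref_checked(const vector *v, size_t i) {
    if (i >= v->length)            /* compare-trap on bounds violation */
        return -1;
    return v->data[i];
}

/* Optimized form: the bound is loaded once and the check is hoisted
 * out of the loop, so the loop body contains no check at all. */
long sum_hoisted(const vector *v, size_t n) {
    if (n > v->length)             /* one check replaces n checks */
        return -1;
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += v->data[i];           /* unchecked reference in the loop */
    return s;
}
```

A real compiler would do the hoisting itself, of course; the sketch
just makes visible what the generated code looks like before and after.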
    The RISC architectures have some important good ideas, but it's not
    at all easy to extract the general principles and apply them.  Most
    of the published papers have had rather poor explanations of what's
    really going on with the RISC architectures.  You have to study
    hard and look behind the scenes to find out how to learn from these
    experiences.  I can tell you that the people who designed the
    architecture of our new chip spent a lot of time doing just that.
We are all waiting to see exactly which lessons they learned.  I
assume that your recent spate of messages on the topic portends an
announcement.
    Regarding environment, here are two important points.  First, it's
    important to have architectural support for type checking and
    array-bounds checking and so on, so that objects can be shared
    (i.e. "single address space") with sufficient firewalling, and
    sharing of objects is crucial for a good environment.  Second,
    aside from the first point, the main environment issues of Lisp
    machines have much less to do with the architecture than with the
    operating system.
1. Agreed, but for a different reason.  I personally do not like a
single address space for everything, but I still believe in type and
bounds checking.  The cost of performing these checks is small on a
RISC like SPUR, thanks to the presence of explicit tags.
2. I would argue that the main advantage of a Lisp-machine style
architecture is an environmental issue--debugging.  The higher-level
instruction set and lack of compiler optimizations make writing a
good debugger much easier (though the Symbolics still lacks a
source-level debugger).  Writing an equivalent debugger for a RISC is
a hard task.
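To make concrete why the type checks in point 1 are cheap, here is a
minimal sketch of a runtime tag check in C.  The low-bit tagging
scheme and all the names are illustrative only (SPUR actually carries
its tags in extra high-order bits of a wider word), but the cost is
the same either way: a mask and a compare per operand, which is a
couple of ALU instructions on any RISC.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative tagging: the low two bits of every word hold the type. */
enum { TAG_BITS = 2, TAG_MASK = 0x3, TAG_FIXNUM = 0x0, TAG_PTR = 0x1 };

typedef uintptr_t lispobj;

static lispobj make_fixnum(intptr_t n)  { return ((uintptr_t)n << TAG_BITS) | TAG_FIXNUM; }
static intptr_t fixnum_value(lispobj o) { return (intptr_t)o >> TAG_BITS; }

/* The type check a generic Lisp "+" must perform on each operand:
 * one mask-and-compare. */
static int fixnum_p(lispobj o) { return (o & TAG_MASK) == TAG_FIXNUM; }

static lispobj lisp_add(lispobj a, lispobj b) {
    if (!fixnum_p(a) || !fixnum_p(b))
        return (lispobj)-1;          /* would trap to a type-error handler */
    return make_fixnum(fixnum_value(a) + fixnum_value(b));
}
```

With hardware tags the check happens in parallel with the add; done in
software, as above, it costs a few extra instructions that a compiler
can often eliminate when types are known.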
Rather than phrasing Symbolics's problem as a RISC vs. CISC debate, I
would ask whether a small company like Symbolics, selling to a small
market, has the resources to build a special-purpose machine with
enough performance advantages to stay ahead of 'stock' hardware
vendors like Motorola, AMD, and Sun.