SUN vs Lisp machine hardware differences?
Date: 11 August 1987 17:45 pdt
From: spar!malcolm at DECWRL.DEC.COM
From: Daniel L. Weinreb <DLW@ALDERAAN.SCRC.Symbolics.COM>
Date: Tue, 11 Aug 87 13:54:26 PDT
From: larus%paris.Berkeley.EDU@berkeley.edu (James Larus)
Even parallel checks of this sort have costs. To name two:
1. Extra hardware that costs money and limits the
basic cycle of the machine.
The extra hardware to do these checks is really tiny.
Since it's operating in parallel with normal operation
anyway, it doesn't really limit the cycle time of the machine.
You can't say that it doesn't impact the cycle time of the machine.
At some point the result of the parallel computation must be folded
back into the main flow of the instruction. At the very least this
means that the gate that decides whether the instruction fails must
have one extra input and is going to be just a little bit slower.
This is a basic tenet of RISC.
Admittedly the extra time is small but they all add up.
Nope. It's buried in the memory cycle time noise. A gate with an extra
input can't possibly be more than a couple of percent slower, and
there's only a couple of gates with those extra inputs.
2. Extra design time that could be spent elsewhere
(e.g., speeding up the basic cycle) and that delays
Theoretically, yes, but it's pretty small also. Consider
that Symbolics just designed a chip that has more
transistors than the Intel 386, but did it with one-tenth
the personpower. Compared to this, the time needed to
design the extra checking hardware is peanuts.
This isn't a meaningful comparison. Intel has become big enough
that they are trying to design one piece of silicon that does
Besides, last week somebody at Symbolics was claiming that your
software tools (and single address space) were responsible for the
incredible productivity of your chip designers. I've always liked
your software environment much better than your hardware; I'm more
likely to believe the tools were responsible for the quick design
time than the "clean" architecture.
How do you suppose it's possible to make such a software environment?
It takes a pre-existing software environment which makes software
development happen in reasonable time. It also takes first-class
hackers. Symbolics has both; the Release 6.1 world was a pretty
powerful development environment, and the pre-7.0 world even more so.
Symbolics also has about 10 world-class hackers whom I could name, plus
another 40 or so who qualify as first-class.
Symbolics also claims that it takes a hardware architecture which makes
it possible to build that software environment in time shorter than
geological. This is a matter of religion, or at least an axiom rather
than a theorem. However, if you compare the currently-available
Symbolics environment with any other currently available, you have to
think that the hardware environment had something to do with the
software development time, which in turn affects the quality of the
software.
Mail: Lamson@MIT-Multics.ARPA, Lamson@Multics.MIT.EDU
Snail: 166 Albion Street, San Francisco, Calif 94110