Date: 11 Aug 87 17:45:32 PDT (Tue)
From: Daniel L. Weinreb <DLW@ALDERAAN.SCRC.Symbolics.COM>
Subject: Re: SUN vs Lisp machine hardware differences?
Date: Tue, 11 Aug 87 13:54:26 PDT
From: larus%paris.Berkeley.EDU@berkeley.edu (James Larus)
            Even parallel checks of this sort have costs.  To name two:
            1. Extra hardware that costs money and limits the basic cycle of
            the machine.
        The extra hardware to do these checks is really tiny.  Since it's
        operating in parallel with normal operation anyway, it doesn't really
        limit the cycle time of the machine.
    You can't say that it doesn't impact the cycle time of the machine.  At
    some point the result of the parallel computation must be folded back into
    the main flow of the instruction.  At the very least this means that the
    gate that decides whether the instruction fails must have one extra input
    and is going to be just a little bit slower.  This is a basic tenet of
    RISC.  Admittedly each extra delay is small, but they all add up.
The question is: is the extra time spent on type-checking in hardware
smaller than the extra time that would otherwise have to be spent in
software?  We believe so.  Having such type-checking in hardware also
eliminates the temptation to remove type-checking from "production quality"
software, which is the one place where you really want to have it.
    No, I don't think it is necessary to open up the RISC vs CISC debate again.
I hope not.
            2. Extra design time that could be spent elsewhere (e.g., speeding
            up the basic cycle) and that delays time-to-market.
        Theoretically, yes, but it's pretty small also.  Consider that
        Symbolics just designed a chip that has more transistors than the
        Intel 386, but did it with one-tenth the personpower.  Compared to
        this, the time needed to design the extra checking hardware is peanuts.
    This isn't a meaningful comparison.  Intel has become big enough that they
    are trying to design one piece of silicon that does everything.
    Besides, last week somebody at Symbolics was claiming that your software
    tools (and single address space) were responsible for the incredible
    productivity of your chip designers.  I've always liked your software
    environment much better than your hardware; I'm more likely to believe the
    tools were responsible for the quick design time than the "clean"
    architecture.
It's not that the architecture is "clean"; it's that it does a few
things for us which make writing our environment software much simpler.