[not about] gc-by-area and host uptimes
Date: Thu, 16 Feb 89 09:58:38 EST
Date: Wed, 15 Feb 89 23:31 EST
From: Reti@stony-brook.scrc.symbolics.com (Kalman Reti)
Date: Wed, 15 Feb 89 20:55:01 CST
From: firstname.lastname@example.org (Kenneth Forbus)
(b) Reusing code over multiple
hardware systems has nothing to do with the underlying hardware.
Anybody remember the Interlisp virtual machine? Once you had it
running you could port over everything else pretty easily, since all
the tools were written in lisp. [Whether or not you would want those
tools is yet another question :-)]
Ah, but if you had one implementation of that virtual machine where
spaghetti stacks were horrendously slow, and another where they were
reasonable, the applications would be unusable on one and (perhaps)
usable on the other. This is precisely the point I am making.
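The spaghetti-stack point can be made concrete with a toy sketch: in Interlisp's model, call frames are heap-allocated records linked by parent pointers rather than cells in one contiguous hardware stack, so separate control threads can share ancestor frames. This is a hypothetical Python illustration of the data structure, not Interlisp's implementation.

```python
# Toy "spaghetti stack": frames live on the heap and point at their
# caller, so two branches of control can share a common ancestor frame.
# Emulating this on hardware built around one contiguous stack is the
# kind of thing that can be horrendously slow, as the text notes.

class Frame:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent      # link to the caller's frame

    def backtrace(self):
        f, names = self, []
        while f is not None:
            names.append(f.name)
            f = f.parent
        return names

# Two "processes" branch from one shared frame:
main = Frame("main")
a = Frame("reader", parent=main)
b = Frame("printer", parent=main)

print(a.backtrace())   # ['reader', 'main']
print(b.backtrace())   # ['printer', 'main']
```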
You are confusing relative speeds and acceptable speeds. As far as I'm
concerned, there is little that a Sun4 can't do several times faster than a 36xx.
I'm not current on Sun4 prices, but a while back weren't they more expensive
than a 36xx? I'm sure if a Cray has a reasonable Lisp implementation, any
particular benchmark on it would run dozens of times faster than on a 36xx.
Symbolics can't be in the business of building the fastest hardware in the
world; we try to be in the business of providing a lot of functionality cost
effectively.
If the operations appear to a human to run in reasonable amounts of
time on a 36xx, then surely a human will be even happier with the
performance on a Sun4. That's true regardless of the relative speed of
instructions on the Sun4.
Only if the same number of instructions are required to achieve the desired
functionality. Instructions are pre-packaged bits of functionality that
some machine architect felt needed to be optimized. My point is that the
instruction sets of standard architectures don't pre-package some of the
essential functionality in Genera and that it would therefore take many more
of these faster instructions to give the end-user the same effect. If this
explosion factor in number of instructions is greater than the performance
factor of the raw cycle time, the end result is SLOWER by the user's clock.
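The arithmetic behind that argument can be sketched as follows; the cycle-speed and instruction-count ratios here are hypothetical, purely for illustration.

```python
# Toy model of the "explosion factor" argument: a machine with faster
# raw instructions can still be slower end-to-end if each high-level
# operation (tagged arithmetic, type dispatch, etc.) compiles to many
# more instructions. All numbers are hypothetical.

def effective_speedup(cycle_speed_ratio, instruction_explosion):
    """Net user-visible speedup of the 'fast' machine over the baseline.

    cycle_speed_ratio: how many times faster it runs one instruction.
    instruction_explosion: how many times more instructions it needs
        per high-level operation the baseline does in hardware.
    """
    return cycle_speed_ratio / instruction_explosion

# 5x faster instructions, 3x the instruction count: still a net win.
print(effective_speedup(5.0, 3.0))   # ~1.67x faster overall

# 5x faster instructions, 8x the instruction count: net loss.
print(effective_speedup(5.0, 8.0))   # 0.625 -> slower by the user's clock
```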
So I believe that you are in error when you
imply that things like Genera could not run on a Sun4. I suspect that with
a little bit of tuning, it would run several times faster than on a 36xx.
If you redesigned everything with the prepackaged functionality provided by the
Sun in mind, you are perhaps right. However, this is not a 'little bit of
tuning', this is more like a complete redesign. One of Symbolics' long-term
goals is to have the software be isolated from the winds of technological
change that hardware represents, so this sort of micro-optimization for
a particular platform is something we don't do, even for our own different
hardware generations.
If we were a giant company with a captive fab line and process design
engineers to tweak the processes specifically for our chip, I assure
you we would. As a much smaller company, we can only make use of the
commercially available technology, which is bound to be several
years behind the best the semiconductor giants can muster.
This is precisely why Symbolics should be concentrating on software, and
not hardware. You are bound to lose if you are always behind the
semiconductor giants. Why try to beat them at a game you can't win?
Because that's not the game we're playing; we try to provide the end-user
functionality which we think is useful and which many of our customers want
in a cost effective manner. We only build the hardware ourselves because
the right pre-packaged functionality doesn't exist in stock hardware.
When I benchmark my code on generic hardware, I DO NOT turn off type
checking, I DO NOT install extra declarations, etc. I have enough on
my hands w/o trying to make up for deficiencies in the lisp
environment. And the generic stuff is really cleaning up in terms of
performance on my code.
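The benchmarking practice being described — leaving safety checks on rather than declaring them away — can be sketched in a modern idiom. The inner loop and the explicit runtime check below are hypothetical stand-ins, not the poster's actual code.

```python
# Hedged sketch: the same inner loop, once with an explicit runtime
# type check left on and once without. Benchmarking with checks on,
# as the poster does, measures the system as actually used.
import timeit

def sum_checked(xs):
    total = 0
    for x in xs:
        if not isinstance(x, int):       # runtime type check left on
            raise TypeError("expected int")
        total += x
    return total

def sum_unchecked(xs):
    total = 0
    for x in xs:
        total += x
    return total

data = list(range(10_000))
t_on = timeit.timeit(lambda: sum_checked(data), number=100)
t_off = timeit.timeit(lambda: sum_unchecked(data), number=100)
print(f"checks on:  {t_on:.4f}s")
print(f"checks off: {t_off:.4f}s")
```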
It would be helpful to hear some details about this, what type of code,
doing what sort of operations, what sort of generic machine. There are
many optimizations we can make to our system, and we want to be responsive
to the needs of our customers.
I can give you many examples of large systems that now operate
significantly faster on a Sun4 than on a Lispm. For example, with Lucid's
new EGC, performance on GC heavy code has gone up very significantly. I
can compile *and* load several hundred thousand lines of code on a Sun4 in
the *same* amount of time it takes to just load it on my 3645. This is for
a Lisp and EGC with no special hardware. How do you explain that?
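Lucid's EGC was an ephemeral (generational) collector: since most objects die young, most collections scan only the newest allocations. CPython's cycle collector is organized the same way, so it serves as a minimal illustration of the idea (this inspects CPython's collector, not Lucid's).

```python
# Generational GC in miniature: objects start in generation 0 and are
# promoted to generations 1 and 2 as they survive collections, so the
# common case is a cheap scan of only the youngest generation.
import gc

# Per-generation collection thresholds (a 3-tuple).
print(gc.get_threshold())            # e.g. (700, 10, 10)

junk = [[i] for i in range(1000)]    # young, short-lived allocations
del junk
unreachable = gc.collect(0)          # cheap: collect generation 0 only
print("gen-0 collection done; unreachable objects found:", unreachable)
```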
Well, I don't know enough details about the hardware of the Sun4, the Lucid
implementation of Lisp or your software systems to explain it in detail, but
I can give a few observations:
1) The 3645 was designed 6 years ago and is therefore no longer at the
cutting edge even of our technology.
2) There are many parts of our software that are less efficient than we'd
like, but there are only so many man-hours available to improve things
and hundreds of times more work than we can possibly ever get to.
One such program is the loader; we try to optimize the things with
the highest payoff, and the loader hasn't seemed to fit in this
category. Is it really more important for you that Symbolics work on
making loading fast, as opposed to, say, function calling or message sending?
The point I'm making is that each system you choose will have some features
that go very fast, others that go particularly slowly, and many more that
fall somewhere in the middle. We try to provide what we perceive (and our
customers tell us) is the important functionality as efficiently as we can.