Re: [not about] gc-by-area and host uptimes
Date: Wed, 15 Feb 89 20:55:01 CST
From: forbus@p.cs.uiuc.edu (Kenneth Forbus)
About the "genera philosophy": Right. We kind of know all that.
That's why I put it at the end where it was easy to skip.
(This IS slug, after all). The question is, how relevant is it?
The software developed according to that philosophy represents, like all software,
a bunch of design tradeoffs between competing implementations. The relative
performance of the primitives used by those competing implementations is
a prime factor in deciding which way to make the tradeoff (along with ease of
maintenance, extensibility, and other factors). The point I was trying to make
(and apparently failed to get across) is that the whole scaffolding of Genera
rests on having made most of those tradeoffs in an environment where hardware
made certain operations relatively cheaper. If they had been relatively more
expensive, the tradeoff would surely have been different. So the influence
of the hardware is pervasive.
Two things to notice: (a) Symbolics hasn't been routinely giving out
sources for a long time now.
We do, for about 80% of the system.
(b) Reusing code across multiple
hardware systems has nothing to do with the underlying hardware.
Anybody remember the Interlisp virtual machine? Once you had it
running you could port over everything else pretty easily, since all
the tools were written in lisp. [Whether or not you would want those
tools is yet another question :-)]
Ah, but if you had one implementation of that virtual machine where
spaghetti stacks were horrendously slow, and another where they were
reasonable, the applications would be unusable on one and (perhaps)
usable on the other. This is precisely the point I am making.
All the hardware arguments boil down to "unless you have special
hardware it isn't fast enough". Compared to what?
The relative costs of different operations. I can write code as all
function calls, or I can write it as all message passing, or some
mix in between. If message passing is 100 times slower, even if I
preferred to write message sends for modularity and readability reasons,
I'd probably go with function calls. If they are in the same ballpark,
I have a lot more flexibility.
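To make "relative costs" concrete, here is a minimal sketch in CLOS-style
Common Lisp; the names are invented for illustration, and generic-function
dispatch stands in for "message passing":

    ;; The same operation written as a plain function call and as a
    ;; generic-function dispatch.  Illustrative names only.
    (defclass rect ()
      ((w :initarg :w :reader rect-w)
       (h :initarg :h :reader rect-h)))

    (defun plain-area (w h)
      (* w h))

    (defgeneric generic-area (shape))
    (defmethod generic-area ((r rect))
      (* (rect-w r) (rect-h r)))

    (defun compare-dispatch (&optional (n 1000000))
      (let ((r (make-instance 'rect :w 3 :h 4))
            (sum 0))
        ;; Accumulate into SUM so the compiler can't discard the calls.
        (time (dotimes (i n) (incf sum (plain-area 3 4))))
        (time (dotimes (i n) (incf sum (generic-area r))))
        sum))

If the two TIME reports land in the same ballpark, the choice can be made
on modularity and readability; if dispatch is 100 times slower, it cannot.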
A lot of the
baselines assume the generic stuff is just as slow as the specialized
stuff. But what if it is faster (indeed, a whole lot faster)? IF you
built your specialized machine to have basic cycle rates the same as
the fastest generic box THEN these arguments would be correct.
If we were a giant company with a captive fab line and process design
engineers to tweak the processes specifically for our chip, I assure
you we would. As a much smaller company, we can only make use of the
commercially available technology, which is bound to be several
years behind the best the semiconductor giants can muster.
But if
the specialized box is slower, then it becomes unclear which way the
performance tradeoff actually lies.
Again, I'm talking about the relative performance of different types of
operations. However, there is another issue, which is comparing apples
to oranges. The work done per cycle on the Ivory chip is comparable
to the work done by two (or more) cycles of some of the commodity chips: e.g.,
we read and write scratchpad memory in the same cycle, and we have more
hardware doing more things in parallel in the same cycle. A more accurate way
to compare absolute processing power would be to figure out some way to
measure "functionality delivered per second" rather than cycles per second.
Another thing is that several commodity chips don't handle virtual memory,
or address spaces as large; none of this comes for free, in terms of
either hardware or performance.
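As a deliberately invented illustration of the apples-to-oranges problem:
suppose a specialized chip runs at 5 MHz but does a tagged read-modify-write
in one cycle, while a 10 MHz commodity chip needs three cycles (read, modify,
write, plus whatever the type checking costs) for the same operation. For
that operation the specialized chip delivers 5 million operations per second
against 10/3, or about 3.3 million, despite giving up half the clock rate.
"Functionality delivered per second" captures this; megahertz alone does not.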
When I benchmark my code on generic hardware, I DO NOT turn off type
checking, I DO NOT install extra declarations, etc. I have enough on
my hands without trying to make up for deficiencies in the lisp
environment. And the generic stuff is really cleaning up in terms of
performance on my code.
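Concretely, the sort of tuning declined here looks like this in standard
Common Lisp (an invented fragment, not anyone's actual benchmark code):

    ;; Illustrative only: trading safety for speed with declarations.
    (defun dot-product (a b)
      ;; Favor speed and drop runtime type checks entirely.
      (declare (optimize (speed 3) (safety 0))
               ;; Promise the argument types so the compiler can
               ;; open-code the arithmetic instead of checking at runtime.
               (type (simple-array single-float (*)) a b))
      (let ((sum 0.0f0))
        (declare (type single-float sum))
        (dotimes (i (length a) sum)
          (incf sum (* (aref a i) (aref b i))))))

The point being that the generic stuff wins even without any of this.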
It would be helpful to hear some details about this: what type of code,
doing what sort of operations, on what sort of generic machine. There are
many optimizations we can make to our system, and we want to be responsive
to the needs of our customers.
Also, there are usually many ways to implement an application on the lispm,
with performance implications that may not be obvious. There are many
new metering tools and some new documentation to help customers identify
performance problems and improve their code.
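As one invented example of such a non-obvious choice (using only the
portable TIME macro, not any Genera-specific metering tool):

    ;; Two functionally equivalent lookup structures whose costs
    ;; diverge as the table grows: O(n) for the alist search,
    ;; roughly O(1) for the hash table.  Illustrative code only.
    (defun build-hash-table (pairs)
      (let ((ht (make-hash-table :test #'equal)))
        (dolist (p pairs ht)
          (setf (gethash (car p) ht) (cdr p)))))

    (defun compare-lookups (pairs key &optional (n 100000))
      (let ((ht (build-hash-table pairs)))
        (time (dotimes (i n) (assoc key pairs :test #'equal)))
        (time (dotimes (i n) (gethash key ht)))))

Metering on a real workload is what shows whether a difference like this
actually matters for an application.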
I've said this before: I wish Symbolics had concentrated on rewriting
their software for performance instead of DW.
Actually, we've done some of both, partly because different groups of customers
asked for both. Perhaps we didn't hit your favorite performance deficiencies;
we have, after all, only limited resources.
I'll bet their sales
would be a lot better now.