Re: [not about] gc-by-area and host uptimes



    Date: Wed, 15 Feb 89 20:55:01 CST
    From: forbus@p.cs.uiuc.edu (Kenneth Forbus)

    About the "genera philosophy": Right.  We kind of know all that.
    (This IS slug, after all).  The question is, how relevant is it?

    Two things to notice: (a) Symbolics hasn't been routinely giving out
    sources for a long time now.  

Baloney.  By actual count, 7.2 includes 80% of the available source
code for non-layered products, as either "basic" or "optional" sources.
The cost of those optional sources is pretty nominal, and practically
nobody is interested in purchasing them.  Of the 20% that is
restricted, some is the (unreleased) new scheduler, much is L-machine
hardware-dependent code, and the rest is Dynamic Windows and NSage.

In 7.3, no sources are restricted: 75% of the sources are "basic", and
25% are "optional".  The optional sources are chiefly L- and I-machine
hardware-dependent code, the scheduler, DW internals, and NSage.

Categorically claiming we "don't routinely give out sources" hardly
seems accurate.

				  (b) Reusing code over multiple
    hardware systems has nothing to do with the underlying hardware.
    Anybody remember the Interlisp virtual machine?  Once you had it
    running you could port over everything else pretty easily, since all
    the tools were written in lisp.  [Whether or not you would want those
    tools is yet another question :-)]

Except that there is no guarantee that the IL virtual machine running
on a particular platform has reasonable performance, so you are forced
to write programs to the least common denominator in the hope that
they will perform well.  The current implementation of CLX is loaded
with all kinds of hair for just this reason; I have friends who are
banging their heads against this wall right now.
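
To make that concrete, here is a minimal sketch of the kind of hair I
mean.  It is not actual CLX source; the HYPOTHETICAL-FAST-BLT feature
and the FAST-BYTE-BLT primitive are invented for illustration, but the
shape of the problem is real: every hot spot grows per-implementation
fast paths plus an element-at-a-time fallback, because portably you
can assume nothing about what the host Lisp does quickly.

;;; Illustrative sketch only -- not real CLX code.  The feature name
;;; and FAST-BYTE-BLT are made up; the pattern is the point.
(defun copy-byte-buffer (src dst count)
  ;; Fast path: some hosts provide a block-transfer primitive.
  #+hypothetical-fast-blt
  (fast-byte-blt src dst count)
  ;; Least-common-denominator fallback: a one-element-at-a-time loop,
  ;; the only thing guaranteed to work (and perform) everywhere.
  #-hypothetical-fast-blt
  (dotimes (i count)
    (setf (aref dst i) (aref src i))))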

    All the hardware arguments boil down to "unless you have special
    hardware it isn't fast enough".  Compared to what?  A lot of the
    baselines assume the generic stuff is just as slow as the specialized
    stuff.  But what if it is faster (indeed, a whole lot faster)?  IF you
    built your specialized machine to have basic cycle rates the same as
    the fastest generic box THEN these arguments would be correct.  But if
    the specialized box is slower, then it becomes dubious which way the
    performance tradeoff actually lies.

Symbolics cannot afford to build chips with state-of-the-art processes,
since we do not happen to own a chip foundry.  Therefore, we concentrate
on architectures that attempt to make up for that by being clever.

    When I benchmark my code on generic hardware, I DO NOT turn off type
    checking, I DO NOT install extra declarations, etc.  I have enough on
    my hands w/o trying to make up for deficiencies in the lisp
    environment.  And the generic stuff is really cleaning up in terms of
    performance on my code.
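
(For anyone following along who doesn't write Lisp for a living:
"extra declarations" means hand annotations like the illustrative
fragment below, which tell a stock compiler what types to expect and
let it omit the runtime checks.  On our hardware the tag checking
happens in parallel, so the undeclared version is the natural one to
write.  The fragment is my own example, not Forbus's code.)

;;; Illustrative only: the kind of annotations one installs to make a
;;; generic compiler fast -- exactly what Forbus says he does NOT do.
(defun sum-squares (v)
  (declare (optimize (speed 3) (safety 0))   ; disable runtime checks
           (type (simple-array single-float (*)) v))
  (let ((acc 0.0f0))
    (declare (type single-float acc))
    (dotimes (i (length v) acc)
      (incf acc (* (aref v i) (aref v i))))))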

    I've said this before: I wish Symbolics had concentrated on rewriting
    their software for performance instead of DW.  I'll bet their sales
    would be a lot better now.

I don't agree.  A performance battle would only cause us to lose
faster, because with our small staff and small size, larger
competitors (witness Sun) would always be able to inch ahead of us.
Therefore, we need to concentrate on software technology.  In
hindsight, not only would I do DW again, but I would make its
underpinnings more radical and trade compatibility for performance,
which is something we did not do in 7.0.