
Re: SUN vs Lisp machine hardware differences?



    Date: Tue, 11 Aug 87 13:54:26 PDT
    From: larus%paris.Berkeley.EDU@berkeley.edu (James Larus)

    Even parallel checks of this sort have costs.  To name two:

    1. Extra hardware that costs money and limits the basic cycle of the
    machine.

The extra hardware to do these checks is really tiny.  Since it's
operating in parallel with normal operation anyway, it doesn't really
limit the cycle time of the machine.
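
To make the comparison concrete, here is a minimal sketch in Common
Lisp (the name CHECKED-ADD and the fixnum-only fast path are my own
illustration, not Symbolics microcode) of the kind of type dispatch
that stock hardware forces into explicit instructions before every
arithmetic operation, and that the tag-checking hardware performs in
parallel with the add itself:

(defun checked-add (x y)
  ;; On conventional hardware, a safe generic + pays for explicit tag
  ;; tests like these before it can issue the machine's add instruction.
  (if (and (typep x 'fixnum) (typep y 'fixnum))
      (+ (the fixnum x) (the fixnum y))   ; fast path: raw machine add
      (+ x y)))                           ; slow path: bignum, float, etc.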

    2. Extra design time that could be spent elsewhere (e.g., speeding up
    the basic cycle) and that delays time-to-market.

Theoretically, yes, but it's also pretty small.  Consider that Symbolics
just designed a chip that has more transistors than the Intel 386, but
did it with one-tenth the personpower.  Compared to this, the time
needed to design the extra checking hardware is peanuts.

    I believe that checks like this are valuable in certain circumstances.
    If Symbolics has evaluated the tradeoffs and really believes that this
    check is worth the costs, then they should publish a paper and
    convince the rest of the world.  

I refer you to the paper "Symbolics Architecture", by David A. Moon,
IEEE Computer, vol. 20, no. 1, January 1987.

    One problem that many (non-MIT?  California-educated?) people have
    with the Lisp machine approach is that it assumes complex hardware
    rather than exploring the tradeoffs between software and hardware.

One problem some of us have with some of you is that you persist in
insisting that we give no thought to tradeoffs between software and
hardware.  That's not true at all; quite the contrary.  We think it's
worth paying a small hardware cost in order to get big software gains
that we can't get without that hardware.  And we're unwilling to
compromise.

    The extra memory reference for adjustable arrays can be eliminated in
    many cases by having the compiler eliminate common subexpressions and
    store a pointer to the array's data vector in a register.

Sure.  We do this too, like anyone.  But many references are not inside
loops at all, particularly in symbolic computing.  And this applies not
only to arrays but also to structures and instances, which need to be
adjustable as well.  Also, the optimization you propose is incorrect if
there is any way that the array's size can change while the pointer
lives in the register.  It's particularly hard for the compiler to
prove that that can't happen if you allow several Lisp tasks to execute
together in one address space.  This is an example of where we're
unwilling to compromise.
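
As a concrete illustration (my own sketch, with invented names, not
code from either system): suppose one task is summing an adjustable
array while another task grows it.  A compiler that hoisted the
array's underlying data vector into a register across the loop would
keep reading the old storage after the ADJUST-ARRAY:

(defvar *buffer* (make-array 4 :adjustable t :initial-element 0))

(defun sum-buffer ()
  ;; Hoisting *BUFFER*'s data vector out of this loop is safe only if
  ;; the compiler can prove no other task adjusts the array meanwhile.
  (loop for i below (length *buffer*)
        sum (aref *buffer* i)))

(defun grow-buffer ()
  ;; Running in another task, this can move the array's storage.
  (adjust-array *buffer* (* 2 (length *buffer*)) :initial-element 0))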

    Cdr-coding was proposed and justified in the mid-1970's when memory
    prices and programs were quite different.  I don't believe that the
    original assumptions hold true any longer.  If Symbolics has evidence
    that this optimization is worth the complexity, then I and a number of
    other people would be very interested in seeing it.

Of course it depends a lot on the time cost of dealing with cdr-coding,
on how much memory costs, on how much memory your program needs, on
which program you're running, on how much of your data structure is
lists versus non-lists, and on how many references are to lists (do you
make a whole lot of big lists that you rarely refer to, or the
opposite), and so on.  There are other issues here that are really too
complex to go into in detail.  No, I don't know of any explicit attempts
to do these measurements lately.
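
For what it's worth, the best-case arithmetic is easy to sketch (this
is a back-of-the-envelope illustration, not a measurement): an
ordinary cons takes two words, one for the car and one for the cdr,
while a cdr-coded list built in order stores one element per word
with a small cdr code in the tag bits, so list storage can shrink by
up to half.

;; Back-of-the-envelope sketch, not a measurement: words of storage
;; for a proper list of N elements, with and without cdr-coding.
(defun list-words-uncoded (n)
  (* 2 n))   ; each cons: one word for the car, one for the cdr

(defun list-words-cdr-coded (n)
  n)         ; best case: one word per element, cdr code in the tag

;; (list-words-uncoded 1000)   => 2000 words
;; (list-words-cdr-coded 1000) => 1000 words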