Re: SUN vs Lisp machine hardware differences?

     Only in the most general way.  Our users tell us that Lisp programs and
     data are often up to four times as large on a Sun.  The Sun instruction
     set does not map well onto Lisp, which results in much bigger programs.
     For instance, here is a trivial function that was disassembled on Sun
     Lisp V2 and Symbolics:

     <<Example deleted>>

     Lack of CDR-coding, boxing of numbers, and similar issues also make
     Lisp data structures much larger on traditional systems.

This is an interesting issue and I'm glad to see it brought up.  The
feeling around here is that the exponential growth in the size of
memories has changed the space/speed trade-off for the *majority* of
users.  Obviously there will still be users who fill the largest
memories and for them, the trade-offs are different.  I am curious to
know, though, what percentage of the total (not just Symbolics) user
population fits in that category.  As one datapoint, I had a note a
couple of months ago from Dan Weinreb pointing out that the vast
majority of machines at Symbolics had only 2 MW and that he felt that
was sufficient.  Even accepting your 4x expansion, the Sun equivalent
is only 32 MB (2 MW at four bytes per word is 8 MB, times four).

One other point that should not be lost in the discussion is that the
Symbolics address space contains a lot more code and data than an
equivalent Lisp address space on a Sun.  The Unix system code is
written in C, which has a smaller object size than the current Sun
Lisp code.  Also, a number of utilities that are part of the Lisp
Machine address space (editors, mail programs, etc.) run on Unix as
separate, dynamically loaded processes, so to be fair, we must count
part of the Unix file system as being in the "address space."

     It is also worth noting that traditional g.c.'s need huge amounts of
     free virtual address space to be able to run at all but the ephemeral
     g.c. will continue collecting until the very last bit of address space
     is full of data, which potentially makes the usable address space
     twice as
     large.  Combine this with the differences in the size of data and code
     and our users find that the Symbolics 3600 architecture is much more
     cost effective.

Agreed.  The current generation of Sun garbage collectors works very
poorly.  I doubt that this is an intrinsic feature of GC.  The
stories that I have heard suggest that the early Lisp Machine garbage
collectors were even worse.  As far as I know, no one runs without GC
on a Sun.