Re: SUN vs Lisp machine hardware differences?
Daniel L. Weinreb writes:
> Forwarding pointers do slow down certain operations that could not have
> been done in the same way on a conventional machine. But if you just
> don't use them that way, then nothing else is slowed down. The check
> for forwarding pointers is done by the 36xx in parallel with all those
> other checks it does on every memory reference. So if you don't use
> forwarding pointers, they don't slow you down.
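For concreteness, what that parallel check does can be written out as a
software read barrier. The tag values and word layout below are my own
invention for illustration, not the 36xx's actual format:

    /* Sketch of a forwarding-pointer check as a software read barrier.
     * A real Lisp machine does this in hardware, in parallel with the
     * memory reference itself; here it costs explicit instructions. */
    #include <stdint.h>

    #define TAG_MASK    ((uintptr_t)0x7)   /* hypothetical low tag bits  */
    #define TAG_FORWARD ((uintptr_t)0x1)   /* hypothetical "forward" tag */

    typedef uintptr_t lispword;

    /* Chase forwarding pointers until a real datum is found. */
    static lispword fetch(const lispword *p)
    {
        lispword w = *p;
        while ((w & TAG_MASK) == TAG_FORWARD) {
            p = (const lispword *)(w & ~TAG_MASK);  /* untag, follow */
            w = *p;
        }
        return w;
    }

Done in software, every fetch pays the test and the branch; that is
exactly the cost the parallel hardware check hides.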
Even parallel checks of this sort have costs. To name two:
1. Extra hardware that costs money and constrains the machine's basic
cycle time.
2. Extra design time that could be spent elsewhere (e.g., speeding up
the basic cycle) and that delays time-to-market.
I believe that checks like this are valuable in certain circumstances.
If Symbolics has evaluated the tradeoffs and really believes that this
check is worth the costs, then they should publish a paper and
convince the rest of the world. I realize that they are in a
competitive business, but appealing to the collective wisdom of the
Gods that built the MIT Lisp Machines is no longer sufficient to
justify putting features into a machine.
One problem that many (non-MIT? California-educated?) people have
with the Lisp machine approach is that it assumes complex hardware
rather than exploring the tradeoffs between software and hardware.
The extra memory reference for adjustable arrays can be eliminated in
many cases by having the compiler perform common-subexpression
elimination and store a pointer to the array's data vector in a register.
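To make the point concrete, here is that transformation in C rather
than Lisp; the array layout is hypothetical, but the extra indirection
it models is the one at issue:

    /* Hypothetical layout of an adjustable array: a header whose data
     * pointer is replaced whenever the array is adjusted. */
    #include <stddef.h>

    typedef struct {
        size_t  length;
        double *data;          /* the extra memory reference lives here */
    } adj_array;

    /* Naive code pays the header indirection on every element. */
    double sum_naive(const adj_array *a)
    {
        double s = 0.0;
        for (size_t i = 0; i < a->length; i++)
            s += a->data[i];
        return s;
    }

    /* With common-subexpression elimination the compiler loads the
     * data pointer into a register once, outside the loop. */
    double sum_hoisted(const adj_array *a)
    {
        double *d = a->data;
        size_t  n = a->length;
        double  s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += d[i];
        return s;
    }

The hoisted form is valid only while the array cannot be adjusted
behind the compiler's back, which is why I say "in many cases" rather
than in all of them.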
Weinreb continues:
> It's true that cdr-coding makes cdr take more memory references in some
> cases. But the intention of cdr-coding is to make lists take less
> memory, which means fewer page faults, which means things are faster on
> the whole. As usual, it depends on your application and configuration
> in ways that are hard to measure accurately: if you have plenty of extra
> main memory, then the cdr-coding doesn't help and does slow things down
> somewhat.
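For readers who haven't seen the technique, the decode that cdr pays
looks roughly like this; the field layout and code values are invented
for illustration, not Symbolics' actual format:

    /* Sketch of cdr-coding: each cell carries a small code that says
     * how to find its cdr, so most list cells need not store one.
     * A compactly coded n-element list takes n cells instead of 2n. */
    #include <stddef.h>
    #include <stdint.h>

    enum cdr_code { CDR_NORMAL, CDR_NEXT, CDR_NIL };

    typedef struct cell {
        unsigned  code;   /* two tag bits on a real machine */
        uintptr_t car;    /* the car (a full tagged word in hardware) */
    } cell;

    /* Taking the cdr now means decoding, sometimes with an extra
     * memory reference, instead of a single load. */
    cell *cdr(const cell *c)
    {
        switch (c->code) {
        case CDR_NEXT:    /* cdr is simply the adjacent cell */
            return (cell *)(c + 1);
        case CDR_NIL:     /* end of list: no cdr is stored at all */
            return NULL;  /* stands in for NIL */
        default:          /* CDR_NORMAL: the following word holds the cdr */
            return (cell *)((c + 1)->car);
        }
    }

The win is entirely in density: half the words, hence fewer page
faults, exactly as Weinreb says.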
Cdr-coding was proposed and justified in the mid-1970s, when memory
prices and typical programs were quite different. I don't believe that
the original assumptions hold true any longer. If Symbolics has evidence
that this optimization is worth the complexity, then I and a number of
other people would be very interested in seeing it.
/Jim