Re: Scheme Digest #9

Mark VandeWettering writes:
| I think (and my thesis work is evolving into this kind of
| argument) that Y is overlooked for trivial reasons.  Partial
| evaluation and smarter code generation could make an SK based
| compiler generate code which is equal in quality to that produced
| by supercombinator based compilation.
Bard Bloom points out the space problem:
| I thought that the reason for using supercombinators rather than S and K is
| <stuff deleted>
| In any event, the SK compiler has a lot of work to do before it can match
| even a fairly stupid supercombinator compiler, simply because it can be
| forced to produce incredibly long code.  My guess, and I gather the guess of
| many people who actually work on this, is that SK would lose.  I would be
| very interested in some proof that this guess was wrong.

    More specifically, the problem is not with the larger code image
produced by SK compilation (because memory size is typically not a
problem these days), but rather that the granularity of the
instructions is too fine.  Supercombinators have much coarser
granularity, and get more work done per `instruction'.
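To make the code-size and granularity point concrete, here is a minimal
Haskell sketch of Curry-style bracket abstraction, the classic translation
from lambda terms into the fixed S/K/I base.  The type and function names
(Lam, Comb, abstr, size) are mine, chosen for the example:

```haskell
-- Source lambda terms
data Lam = V String | L String Lam | A Lam Lam

-- Object code: terms over the fixed combinator base S, K, I
data Comb = S | K | I | CV String | Comb :@ Comb
infixl 9 :@

-- Translate a lambda term into combinator code
compile :: Lam -> Comb
compile (V x)   = CV x
compile (A f a) = compile f :@ compile a
compile (L x b) = abstr x (compile b)

-- [x] e: abstract the variable x out of combinator code e
--   [x] x     = I
--   [x] (f a) = S ([x] f) ([x] a)
--   [x] e     = K e        (x does not occur in e)
abstr :: String -> Comb -> Comb
abstr x (CV y) | x == y = I
abstr x (f :@ a)        = S :@ abstr x f :@ abstr x a
abstr _ e               = K :@ e

-- Code size: number of combinator/variable leaves
size :: Comb -> Int
size (f :@ a) = size f + size a
size _        = 1
```

Even the two-leaf term \x.\y. x y compiles to the ten-leaf
S (S (K S) (S (K K) I)) (K I): each abstracted variable can roughly
triple the code, and every resulting S, K, or I does only a tiny
rearrangement per reduction step, which is exactly the fine-granularity
problem described above.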

This is reminiscent of the RISC vs. CISC arguments that are raging over
in comp.arch.  I think Mark is making a case that with a high enough
instruction bandwidth and more intelligent code
generation/optimization, SK reduction performance could approach
current supercombinator reduction performance.

I doubt it, at least with pure SK, but you might come close with a
larger set of fixed combinators, such as Turner's expanded set.  I think you will
also need hardware support to really approach/improve on
supercombinator performance.  Some fixed combinator machines have been
built, but I haven't heard of any that are close performance-wise to
the current breed of supercombinator implementations.
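Turner's expanded set attacks the code-blowup problem by peephole-rewriting
the S-terms that naive bracket abstraction produces.  A self-contained
Haskell sketch of the standard optimization rules (data type and function
names are mine; B f g x = f (g x), C f x y = f x y reordered):

```haskell
-- Object code over Turner's extended base
data Comb = S | K | I | B | C | CV String | Comb :@ Comb
  deriving (Eq, Show)
infixl 9 :@

-- Turner's peephole rules, applied as each S-term is built:
--   S (K p) (K q) => K (p q)
--   S (K p) I     => p
--   S (K p) q     => B p q
--   S p (K q)     => C p q
opt :: Comb -> Comb
opt (S :@ (K :@ p) :@ (K :@ q)) = K :@ (p :@ q)
opt (S :@ (K :@ p) :@ I)        = p
opt (S :@ (K :@ p) :@ q)        = B :@ p :@ q
opt (S :@ p :@ (K :@ q))        = C :@ p :@ q
opt e                           = e

-- Bracket abstraction with the rules applied at every step
abstr :: String -> Comb -> Comb
abstr x (CV y) | x == y = I
abstr x (f :@ a)        = opt (S :@ abstr x f :@ abstr x a)
abstr _ e               = K :@ e
```

For \x.\y. x y, where the unoptimized S/K/I translation is a ten-leaf
term, these rules collapse the whole thing to just I.  Each B or C also
does strictly more work per reduction than an S followed by a K, which is
the coarser-granularity direction, though still far from supercombinator
granularity.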

  In short, a number of people have looked into this already, and most
are opting in favor of supercombinators.  With the performance of
general-purpose architectures climbing steadily, the trend seems to be
moving away from building special purpose machines.  For the
foreseeable future, fixed combinator implementations will be slower,
given the advanced state of supercombinator compilation techniques.

Eric   ------
  jeschke@iuvax.cs.indiana.edu		...{pyramid, rutgers}!iuvax!jeschke