
Scheme Digest #9



> 
> Mark VandeWettering writes:
>
> I think (and my thesis work is evolving into this kind of
> argument) that Y is overlooked for trivial reasons.  Partial
> evaluation and smarter code generation could make an SK based
> compiler generate code which is equal in quality to that produced
> by supercombinator based compilation.

I thought that the reason for using supercombinators rather than S and K is
space.  Ordinary \-calculus expressions can be translated into SK-combinatory
expressions, but the combinatory version can be exponentially longer than the
original \-term.  Some sets of supercombinators give polynomial-length
expansions; I think some give linear expansions.
  Now, code size isn't usually much of a problem, in that we don't
usually care whether a program is 50K or 250K.  The difference between 50K
and 2^50K is significant.  I don't know whether the translation has a decent
expected case, i.e., whether typical programs avoid the worst-case blowup.
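
  For concreteness, here is a rough Scheme sketch of the naive S/K/I
bracket abstraction I have in mind (sk-compile and abstract are just
names I made up; this is the textbook translation, not anybody's actual
compiler).  A term is a symbol, a two-element list (f a) for
application, or (lambda (x) body).

;; Translate a \-term into an S/K/I combinator term.
(define (sk-compile term)
  (cond ((symbol? term) term)
        ((and (pair? term) (eq? (car term) 'lambda))
         (abstract (caadr term) (sk-compile (caddr term))))
        (else (list (sk-compile (car term))
                    (sk-compile (cadr term))))))

;; (abstract x e) builds [x]e, a lambda-free term that behaves like
;; (lambda (x) e):  [x]x = I,  [x]c = (K c),  [x](f a) = ((S [x]f) [x]a).
(define (abstract x e)
  (cond ((eq? e x) 'I)
        ((pair? e) (list (list 'S (abstract x (car e)))
                         (abstract x (cadr e))))
        (else (list 'K e))))

Every application under a binder becomes an S node whose two arms get
abstracted again, so each enclosing lambda multiplies the size of the
body by a small constant factor; that is exponential in the nesting
depth, which is exactly the blowup at issue.  For instance,

  (sk-compile '(lambda (x) (lambda (y) (x y))))
  => ((S ((S (K S)) ((S (K K)) I))) (K I))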

  In any event, an SK compiler has a lot of work to do before it can match
even a fairly stupid supercombinator compiler, simply because it can be
forced to produce incredibly long code.  My guess, and I gather the guess of
many people who actually work on this, is that SK would lose.  I would be
very interested in a proof that this guess is wrong.

-- Bard Bloom