
Re: Scheme Digest #9



In article <8811161428.AA13775@toucan.LCS.MIT.EDU> bard@THEORY.LCS.MIT.EDU writes:
>
>> 
>> Mark VandeWettering writes:
>>
>> I think (and my thesis work is evolving into this kind of
>> argument) that Y is overlooked for trivial reasons.  Partial
>> evaluation and smarter code generation could make an SK based
>> compiler generate code which is equal in quality to that produced
>> by supercombinator based compilation.

>I thought that the reason for using supercombinators rather than S and K is
>space.  Ordinary \-calculus expressions can be translated into SK-combinatory
>expressions, but the combinatory version can be exponentially longer than the
>original term.  Some sets of supercombinators give polynomial-length
>expansions; I think some give linear expansions.

This is true for the trivial compilation algorithm described in Turner's
original paper.  With better compilation and the addition of several
"longer reach" combinators (such as B, C, S', B' and C'), the expansion
can be brought down to O(n log n) or less.
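
For concreteness, here is a small Haskell sketch of the idea.  It is my
own illustration rather than Turner's exact rule set (the eta rule and
the S', B', C' combinators are omitted); the point is only that the
extra cases for B and C stop the abstracted term from growing the way
the plain S/K translation does.

    -- Bracket abstraction over S, K, I extended with the "longer
    -- reach" combinators B and C.  (abstract x t) builds a term with
    -- no free occurrence of x such that applying it to x reduces to t.

    data Term = Var String | App Term Term
              | S | K | I | B | C
              deriving Show

    abstract :: String -> Term -> Term
    abstract x (Var y)
      | x == y           = I
      | otherwise        = App K (Var y)
    abstract x t@(App f a)
      | not (occurs x t) = App K t                      -- x occurs nowhere
      | not (occurs x f) = App (App B f) (abstract x a) -- x only on the right
      | not (occurs x a) = App (App C (abstract x f)) a -- x only on the left
      | otherwise        = App (App S (abstract x f)) (abstract x a)
    abstract _ t         = App K t                      -- bare combinators

    occurs :: String -> Term -> Bool
    occurs x (Var y)   = x == y
    occurs x (App f a) = occurs x f || occurs x a
    occurs _ _         = False

As a small worked example, abstracting both variables of
twice f x = f (f x) with only the S/K/I rules gives a term of roughly
eighteen combinator occurrences; the B and C cases above bring it down
to about nine, and the eta and S'/B'/C' rules do better still.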

I should point out that I do not intend to use SK combinators as the
final "primitives" in the SK machine.  I believe that SK combinators
form a good basis for performing partial evaluation.  Hence, the
compilation algorithm I am currently deriving is as follows:

	SASL-like functional language
		|
		V
	Enriched Lambda Calculus
		|
		V
	SK combinators
		| 		(by Partial Evaluation)
		V
	Stack Machine Code 	(similar to the G-machine)
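
To give a concrete picture of what the SK stage feeds into, here is a
minimal Haskell sketch of a normal-order reducer: it unwinds the spine
onto a stack of pending arguments and rewrites at the head.  This is
only my own toy model of the stack discipline; the partial evaluation
that turns such terms into G-machine-style instructions is a separate
step and is not shown.

    -- Reduce an S/K/I application term to weak head normal form by
    -- unwinding the spine onto an argument stack.

    data Comb = S | K | I | Ap Comb Comb
              deriving Show

    whnf :: Comb -> Comb
    whnf t = go t []
      where
        go (Ap f a) stack        = go f (a : stack)          -- unwind
        go I (x : stack)         = go x stack                -- I x -> x
        go K (x : _ : stack)     = go x stack                -- K x y -> x
        go S (f : g : x : stack) = go f (x : Ap g x : stack) -- S f g x
        go h stack               = foldl Ap h stack          -- too few args

    -- Example: S K K I reduces to I (S K K behaves like the identity).
    example :: Comb
    example = whnf (Ap (Ap (Ap S K) K) I)

The argument list here plays the role of the run-time stack in the
eventual machine code.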
	
I have found the work of Wand, Friedman, Haynes and Kohlbecker to be
interesting, in that they transform an interpreter for Scheme into a
compiler.  I have applied much of the same methodology within the
framework of a graph reduction engine, with very interesting results.
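
For readers who have not seen that trick, the fragment below is a toy
Haskell version of the idea (my own illustration, not the construction
from their papers): the interpreter is split so that all of the syntax
dispatch is paid for once, and what "compile" returns is a function
from environments to values with the dispatch already folded in.

    -- A staged interpreter: traverse the syntax once, return a
    -- residual function from environments to values.

    data Expr = Lit Int | Vbl String | Add Expr Expr
              | Lam String Expr | App Expr Expr

    data Val = Num Int | Fun (Val -> Val)

    type Env = [(String, Val)]

    compile :: Expr -> (Env -> Val)
    compile (Lit n)   = \_ -> Num n
    compile (Vbl x)   = \env ->
        case lookup x env of
          Just v  -> v
          Nothing -> error ("unbound: " ++ x)
    compile (Add a b) =
        let ca = compile a
            cb = compile b
        in \env ->
           case (ca env, cb env) of
             (Num m, Num n) -> Num (m + n)
             _              -> error "Add: not numbers"
    compile (Lam x e) =
        let ce = compile e
        in \env -> Fun (\v -> ce ((x, v) : env))
    compile (App f a) =
        let cf = compile f
            ca = compile a
        in \env ->
           case cf env of
             Fun g -> g (ca env)
             _     -> error "App: not a function"

Running compile once over a term and then applying the result
repeatedly does no further syntax dispatch; the analogous move in a
graph reducer is to walk the program structure once and leave only the
run-time reductions behind.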

>  Now, code size isn't usually much of a problem, in that we don't
>usually care whether a program is 50K or 250K.  The difference between 50K
>and  2^50K is significant.  I don't know if the translation has a decent
>expected case or not.  
	
Recent evidence suggests that the expected-case expansion is indeed
"decent".

>  In any event, the SK compiler has a lot of work to do before it can match
>even a fairly stupid supercombinator compiler, simply because it can be
>forced to produce incredibly long code.  My guess, and I gather the guess of
>many people who actually work on this, is that SK would lose.  I would be
>very interested in some proof that this guess was wrong.
	
I question whether "longer code" is really the bugaboo here; the
difference we are talking about is roughly a factor of log n in code
size.  Imagine that our supercombinator implementation actually
required primitives whose execution time is not bounded by a constant.
Most of the time we care about TIME of execution rather than space,
and I wonder (though I don't really believe it) whether supercombinator
methods get their shorter code precisely by packing more work into more
complex primitives.
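
To make that concrete with a toy case of my own (not an example from
the thread): the function quad g x = g (g (g (g x))) is a single
supercombinator whose one reduction step instantiates a body of four
applications, while an equivalent combinator form takes several steps,
each touching only a constant number of graph nodes:

    $QUAD g x = g (g (g (g x)))      one step, work grows with the body

    B g (B g (B g g)) x
      -> g (B g (B g g) x)           each B step does constant work
      -> g (g (B g g x))
      -> g (g (g (g x)))

Counting reduction steps alone therefore flatters the larger
primitives; the fairer comparison is the total work done.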

That was just a passing idea, and not a particularly good one.  I am
very impressed with the G-machine, and find myself playing "catch up"
to make SK combinators reach similar performance.

Why?  Well, it's the stuff that theses are made of.....

>-- Bard Bloom

Mark VandeWettering