
Re: Scheme Digest #9



In article <15089@iuvax.cs.indiana.edu> jeschke@iuvax.UUCP (Eric Jeschke) writes:
>    More specifically, the problem is not with the larger code image
>produced by SK compilation (because memory size is typically not a
>problem these days), but rather that the granularity of the
>instructions is too fine.  Supercombinators have much coarser
>granularity, and get more work done per `instruction'.

Perhaps I haven't been entirely clear:  I am NOT using SK combinators
as the final code in the target machine.  Just as supercombinators have
an efficient implementation in the G-machine, I believe that SK
combinators have an efficient implementation in a similar stack-based
machine.
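
To make the analogy concrete, here is a minimal sketch in Haskell
notation (an illustration only, not my actual implementation) of an SK
reducer that unwinds the application spine onto a stack, exactly the
way a G-machine unwinds to find a supercombinator redex:

    data Term = S | K | I | Ap Term Term
      deriving Show

    -- Unwind the spine onto a stack; when a combinator reaches
    -- the tip with enough arguments, rewrite the redex and go on.
    whnf :: Term -> Term
    whnf t = go t []
      where
        go (Ap f a) stk        = go f (a : stk)
        go I (x : stk)         = go x stk
        go K (x : _ : stk)     = go x stk
        go S (f : g : x : stk) = go (Ap (Ap f x) (Ap g x)) stk
        go hd stk              = foldl Ap hd stk  -- no redex left

    -- e.g. whnf (Ap (Ap (Ap S K) K) I)  ==>  I, since S K K = I.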

Why use SK combinators?  Because they provide a convenient formalism for
reasoning about optimizations, strictness, and partial evaluation.
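
As a small illustration (the constructor names here are mine, invented
for this sketch), classical bracket abstraction translates lambda terms
into S, K and I with three rules you can calculate with on paper:

    data Expr = Var String | Lam String Expr | App Expr Expr
              | CS | CK | CI
      deriving Show

    -- Remove every lambda, innermost first.
    compile :: Expr -> Expr
    compile (App f a) = App (compile f) (compile a)
    compile (Lam x e) = abstract x (compile e)
    compile e         = e

    -- abstract x e builds a lambda-free term f with f x == e.
    abstract :: String -> Expr -> Expr
    abstract x (Var y) | x == y = CI
    abstract x (App f a)        = App (App CS (abstract x f))
                                      (abstract x a)
    abstract _ e                = App CK e   -- x not free in e

For example, compile (Lam "x" (Lam "y" (Var "x"))) yields S (K K) I,
and from there optimization and strictness questions become equational
reasoning over the S, K and I rewrite rules.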

>This is reminiscent of the RISC vs. CISC arguments that are raging over
>in comp.arch.  I think Mark is making a case that with a high enough
>instruction bandwidth and more intelligent code
>generation/optimization, SK reduction performance could approach
>current supercombinator reduction performance.

I see the problem of implementing an SK machine as twofold: the
instructions themselves are reasonably "high-level", and yet each one
accomplishes relatively little of the source program's work.  For
instance, a numerically intensive program spends most of its time
copying argument pointers into the right place on the heap for
execution.
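
A short trace makes the cost visible.  Naive abstraction of even
\x -> plus x 1 yields S (S (K plus) I) (K 1), and applying that to 2
spends five reductions routing pointers before any arithmetic happens:

    S (S (K plus) I) (K 1) 2
      =>  S (K plus) I 2 (K 1 2)     -- S copies the argument pointer
      =>  K plus 2 (I 2) (K 1 2)     -- S again, more routing
      =>  plus (I 2) (K 1 2)         -- K discards a copy
      =>  plus 2 (K 1 2)             -- I
      =>  plus 2 1                   -- K
      =>  3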

I believe that by partially evaluating SK expressions, and by studying
the actions an INTERPRETER actually performs, we can eliminate most of
the silly and redundant pointer copying and heap allocation that plague
SK implementations.
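
Concretely, the sort of compile-time rewriting I mean looks like
Turner's optimization rules, applied as a little partial evaluator over
combinator terms (a sketch, reusing the Expr type above extended with
CB and CC for the B and C combinators):

    -- Each rule removes a run-time S step that would only
    -- have shuffled pointers.
    opt :: Expr -> Expr
    opt (App (App CS (App CK f)) (App CK g)) = App CK (opt (App f g))
    opt (App (App CS (App CK f)) CI)         = opt f
    opt (App (App CS (App CK f)) g)          = App (App CB (opt f)) (opt g)
    opt (App (App CS f) (App CK g))          = App (App CC (opt f)) (opt g)
    opt (App f a)                            = App (opt f) (opt a)
    opt e                                    = e

On the trace above, opt turns S (S (K plus) I) (K 1) into C plus 1,
which delivers its argument in one reduction instead of five.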

>I doubt it, especially with SK, but you might with a larger set of
>fixed combinators, such as Turner's expanded set.  I think you will
>also need hardware support to really approach/improve on
>supercombinator performance.  Some fixed combinator machines have been
>built, but I haven't heard of any that are close performance-wise to
>the current breed of supercombinator implementations.

I should have been clearer: when I say SK combinators, I mean the
expanded set.
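
For reference, the extra combinators in the expanded set record where
an argument is actually needed, so their reduction rules avoid S's
blind copying:

    B  f g x    =>  f (g x)        -- argument needed only on the right
    C  f g x    =>  f x g          -- argument needed only on the left
    S' c f g x  =>  c (f x) (g x)
    B' c f g x  =>  c f (g x)
    C' c f g x  =>  c (f x) g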

>  In short, a number of people have looked into this already, and most
>are opting in favor of supercombinators.  With the performance of
>general-purpose architectures climbing steadily, the trend seems to be
>moving away from building special purpose machines.  For the
>foreseeable future, fixed combinator implementations will be slower,
>given the advanced state of supercombinator compilation techniques.

I agree in part: smart compilation will beat special hardware MOST of
the time.  But I don't believe that the state of the art in compiler
technology has yet been applied to compiling combinators.  Also, the
target of my compilation is NOT a fixed set of combinators.

Mark VandeWettering