
*To*: Scheme@mc.lcs.mit.edu
*Subject*: Scheme Digest #13
*From*: bard@theory.lcs.mit.edu
*Date*: Mon, 21 Nov 88 19:37:29 EDT
*In-reply-to*: Automatic Scheme Digestifier's message of 20 Nov 88 00:02:32 EST <8811212057.AA00314@theory.LCS.MIT.EDU>

> Worst-case lambda-expressions for this
> translation have deeply nested subexpressions and functions of many arguments
> - the nesting has to be proportional to the size of the program to get
> the quadratic effects, e.g. (backslash = lambda):
>
> \x1.\x2.\x3.\x4.\x5.(x5 ((x2 (x1 x4)) x3))
>
> Does this ever happen?

In more theoretical settings, at least, LET x=m IN n is identical to
(\x.n)m.  If the typical program structure is LISP-like, it is a long
sequence of short function declarations followed by a body:

    LET x1 = m1 IN
    LET x2 = m2 IN
    ...
    LET xk = mk IN
    n

which is indeed a deeply nested term, although not quite of the form
above.  All this proves is that you should do something in a way other
than the theoretician's straightforward translation of LET.  What
methods are actually used?

> > In any event, the SK compiler has a lot of work to do before it can match
> > even a fairly stupid supercombinator compiler, simply because it can be
> > forced to produce incredibly long code.
>
> Is this a particular SK compiler you're talking about?  Has someone actually
> implemented the naive translation into just S and K?  I'm not surprised it's
> no good.

The naive one.  As I said before, I'm a theoretician (interested in
denotational semantics of lambda calculus) and woefully ignorant about
compiler technology.

-- Bard Bloom
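[The code-size blowup under discussion is easy to see concretely. Below is a minimal Python sketch, written for this archive page rather than taken from any compiler mentioned in the thread, of the fully naive Curry-style translation into just S and K ([x]x = S K K, [x]y = K y, [x](M N) = S [x]M [x]N), run on the worst-case term quoted above. All names in it are illustrative.]

```python
# Naive bracket abstraction into just S and K:
#   [x]x     = S K K          (identity, since plain I is not allowed)
#   [x]y     = K y            (y a variable or combinator other than x)
#   [x](M N) = S ([x]M) ([x]N)
# Terms are strings (variables/combinators) or ('app', f, a) pairs;
# lambdas are ('lam', var, body).

def abstract(x, t):
    """[x]t under the naive S/K rules."""
    if t == x:
        return ('app', ('app', 'S', 'K'), 'K')    # I = S K K
    if isinstance(t, str):
        return ('app', 'K', t)
    _, f, a = t
    return ('app', ('app', 'S', abstract(x, f)), abstract(x, a))

def translate(t):
    """Compile a lambda term to S/K combinators, innermost lambdas first."""
    if isinstance(t, str):
        return t
    if t[0] == 'app':
        return ('app', translate(t[1]), translate(t[2]))
    _, x, body = t
    return abstract(x, translate(body))

def size(t):
    """Number of combinator/variable occurrences (leaves) in a term."""
    return 1 if isinstance(t, str) else size(t[1]) + size(t[2])

def app(*ts):
    """Left-associated application: app(a, b, c) = ((a b) c)."""
    r = ts[0]
    for t in ts[1:]:
        r = ('app', r, t)
    return r

# The worst-case example quoted above:
#   \x1.\x2.\x3.\x4.\x5.(x5 ((x2 (x1 x4)) x3))
body = app('x5', app(app('x2', app('x1', 'x4')), 'x3'))
term = body
for v in ['x5', 'x4', 'x3', 'x2', 'x1']:
    term = ('lam', v, term)

sk = translate(term)
# Each variable occurs once, so every abstraction step exactly triples
# the term: 5 leaves in the body become 5 * 3^5 = 1215 combinators.
print(size(body), '->', size(sk))   # 5 -> 1215
```

[With each abstraction multiplying the term by a constant factor here, nesting depth proportional to program size is exactly what makes the naive translation blow up, which is the point of the worst-case example.]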

**Follow-Ups**:
- **Re: Scheme Digest #13** *From:* Krulwich-Bruce@yale-zoo.arpa (Bruce Krulwich)
