
Re: tail call optimization in CLISP?



Jörg-Cyril Höhle <hoehle@zeus.gmd.de> asks:
> 
> would it cost a lot of work to perform tail call optimization in
> simple cases (no unwind-protect, no dynamic bindings etc.)?

One part of it would be easy: As you know, CLISP's bytecode has an
APPLY&SKIP&RET instruction. You would have to generate similar
CALL&SKIP&RET instructions.

Then you would have to modify the bytecode interpreter so that it
looks at the function to be called and checks whether it is a bytecode
function. If so, it moves the arguments down within the stack, to keep
the stack from growing, and jumps to the start of the bytecode
interpreter. But on many platforms, an auxiliary stack (of fixed size)
is allocated within the bytecode interpreter using C alloca(). If you
just jump to the start of the bytecode interpreter, you will not
reclaim this alloca()ed space, and thus not achieve the desired tail
call elimination. Working around this problem is hairy.

> Without that, CPS becomes unusable because all final calls pile up on
> the stack as a series of FUNCALLs instead of being JMP'ed to.
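For what it's worth, the portable workaround for CPS on a system
without tail call elimination is a trampoline: each step returns a
description of the next step instead of calling it, and a driver loop
does the "jumping". Here is an illustrative C sketch (my own example,
not CLISP internals), using an accumulating factorial:

```c
/* Illustrative trampoline: the "tail call" is encoded as a returned
   state record, and a driver loop replaces the jump, so the C stack
   stays at constant depth no matter how many steps run. */
struct state { long acc; long n; int done; };

/* one CPS-style step of an accumulating factorial */
static struct state fact_step(struct state s) {
    if (s.n <= 1) {
        s.done = 1;
    } else {
        s.acc *= s.n;
        s.n -= 1;
    }
    return s;                  /* hand back the continuation's state */
}

static long fact(long n) {
    struct state s = { 1, n, 0 };
    while (!s.done)
        s = fact_step(s);      /* driver loop instead of a tail call */
    return s.acc;
}
```

The price is that the continuation's state lives in an explicit
record rather than in implicit call frames.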

You should not rely on the compiler to do tail call elimination. This
is an optimization which works only under very special circumstances.
Merely changing a variable from "lexical" to "dynamic", introducing an
UNWIND-PROTECT, or using TRACE will disable the optimization. Relying
on tail call elimination guarantees that whoever has to maintain your
code will face surprises and headaches in the future.

Systems which predictably support tail call elimination (e.g. many Scheme
systems) pay a big price for it: They have to allocate many entities on
the heap which could otherwise be stored on the stack. And heap allocation
is normally(+) more expensive than stack allocation.

                         Bruno


(+) SML/NJ claims that its heap allocation is as fast as stack
allocation.