
Re: Optimizers and macros



Users are always wanting to override system functions.  If you decide to
change the meaning of some function, macro, or symbol that names a
special form, then ideally you should be able to do that; and if there
is an optimizer sitting around that "knows" the old meaning of the
symbol, then that optimizer had better be flushed when the meaning changes.

The system can't tell the difference between a DEFUN (or DEFMACRO, et
al.) that is changing the meaning of something and one that is merely a
new version with a bug fix, and so it can't tell whether to wipe out the
optimizers.  Schemes based on remembering what file things were defined
in are very prone to confusing effects (what if you type in a new definition
at Lisp top level?  What if you have a patch file that is only making bug
fixes?).  RMS's suggestion seems to me like the best way to solve
this problem: have the definition of the function (or macro) include,
textually, the names of optimizers to use, and have those be
free-standing functions.  This has all the right properties.
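
To make this concrete, here is the sort of thing I imagine (the
OPTIMIZERS declaration and the one-argument calling convention for
optimizer functions are invented for illustration; this is only a
sketch, not a worked-out proposal):

    (defun double (x)
      (declare (optimizers double-of-constant)) ; names the optimizers textually
      (+ x x))

    ;; An optimizer is just an ordinary, free-standing function from a
    ;; call form to an equivalent form; returning the form unchanged
    ;; means "decline".
    (defun double-of-constant (form)
      (let ((arg (second form)))
        (if (numberp arg)
            (+ arg arg)        ; fold (DOUBLE <constant>) into a constant
            form)))

Since the optimizer names appear right in the text of the DEFUN,
redefining DOUBLE naturally says which optimizers go with the new
definition, and stale ones simply aren't named.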

There is no problem with EQ checking to figure out whether an optimizer
did something, since, as RMS pointed out, there is no point in making
displacing optimizers; they only happen at compile time anyway.  In fact,
the Lisp Machine compiler already uses EQ checking to figure out when to
stop macro expanding, and so, as it happens, does not allow old-style
displacing macros.
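
For what it's worth, the expansion loop is just this sort of thing (a
sketch, not the actual Lisp Machine compiler code):

    ;; Keep expanding until a pass hands back the very same object.
    ;; A displacing (destructive) expander would defeat the EQ test,
    ;; which is why such macros aren't allowed.
    (defun expand-all-the-way (form)
      (loop
        (let ((new (macroexpand-1 form)))
          (when (eq new form)     ; nothing happened this pass; stop
            (return form))
          (setq form new))))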

GLS's and GSB's suggestions about DEFTRANSFORM are good, but if we
do things as RMS suggested (using local declarations) then there
is no need for DEFTRANSFORM.

I actually think the name OPTIMIZER is good.  It stresses the fact that,
as RMS said, it is critically important that the optimizer does not
change the functionality of the form it is transforming AT ALL.  These
things are NOT like macros; they must not do anything semantically
interesting, but merely produce a different (and presumably more
efficient) way to do exactly the same thing.  This is because they only
happen in the compiler; it must not make any difference to the semantics
of the program if they do not get run.  Only the efficiency may be
affected.  Talk about making these things work in the interpreter is
misguided: we already have source-to-source transformations that also
work in the interpreter, and they are called macros.  Macros are the
right thing for doing actually interesting transformations; optimizers are
for when you know something will run faster compiled if it is
re-expressed in a certain way.
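
To give an invented example of the kind of thing I mean: an optimizer
for EXPT might notice that the exponent is the constant 2 and rewrite
the call, since the rewritten form means exactly the same thing and
merely compiles better (the one-argument calling convention is again
just an assumption for illustration):

    ;; Rewrite (EXPT <base> 2) into code that multiplies the base by
    ;; itself; return the form unchanged to decline anything else.
    (defun expt-of-square (form)
      (if (and (eql (third form) 2)
               (null (cdddr form)))
          (let ((base (gensym)))
            `(let ((,base ,(second form)))  ; evaluate the base exactly once
               (* ,base ,base)))
          form))

Note that it has to bind the base to a temporary rather than just
writing (* <base> <base>), precisely because it is not allowed to
change the semantics by evaluating the base twice.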