

I would prefer to see a rule-based approach, at least as the normal
way of declaring optimizations.  A more programmable approach would
be used only when the rule processor is inadequate.

E.g., I find it much easier to read rules of the form:

 ((<arg-type>*) <result-type> <rewrite-pattern>)
 ((<arg-type>*) (<result-type>*) <rewrite-pattern>)

as in:

 '(((fixnum fixnum) fixnum (fixnum-+ arg-0 arg-1))
   ((float float)   float  (float-+  arg-0 arg-1)))
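To make the shape concrete, here is a minimal sketch (in Python, with
hypothetical names throughout; a real version would live inside the
compiler) of how a table of such rules might drive rewriting:

```python
# Each rule is (arg-types, result-type, rewrite-pattern), mirroring
# the Lisp rules above.  Hypothetical sketch, not a real optimizer.
RULES = {
    '+': [
        (('fixnum', 'fixnum'), 'fixnum', ('fixnum-+', 'arg-0', 'arg-1')),
        (('float', 'float'), 'float', ('float-+', 'arg-0', 'arg-1')),
    ],
}

def rewrite(op, arg_types, args):
    """Return (result-type, rewritten form) from the first rule whose
    declared argument types match exactly, or None if none applies."""
    for declared, result, pattern in RULES.get(op, []):
        if declared == tuple(arg_types):
            # Substitute arg-0, arg-1, ... into the rewrite pattern.
            subst = {'arg-%d' % i: a for i, a in enumerate(args)}
            return result, tuple(subst.get(s, s) for s in pattern)
    return None

print(rewrite('+', ('fixnum', 'fixnum'), ('x', 'y')))
# -> ('fixnum', ('fixnum-+', 'x', 'y'))
```

Adding a new optimization is then just appending an entry to the
table, with no need to advise an existing optimizer function.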

More importantly, a rule-based approach makes it easier to augment a
previous set of rules by avoiding the need to advise old optimizers.

The drawback (and this may apply to the programmatic approach as well)
is that some sets of rules are best expressed as sequential attempts.
A simple bad case to consider is the pair of rules:

   ((fixnum integer) integer (foo arg-0 arg-1))
   ((integer fixnum) integer (baz arg-0 arg-1))

Depending upon which rule is entered first, the result is
unpredictable: for a call whose arguments are both fixnums, either
rule applies.  You can partially resolve the issue with more cumbersome
rules, e.g.:

   ((fixnum (and integer (not fixnum))) integer (foo arg-0 arg-1))
   ((integer fixnum) integer (baz arg-0 arg-1))
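The hazard is easy to demonstrate.  In the sketch below (Python,
hypothetical names; fixnum is treated as a subtype of integer, so a
fixnum argument satisfies either declaration), a naive first-match
strategy rewrites a two-fixnum call differently depending on entry
order:

```python
# Hypothetical subtype relation: each type maps to its supertypes.
SUPERTYPES = {'fixnum': {'fixnum', 'integer'}, 'integer': {'integer'}}

def matches(declared, actual):
    """An argument of type `actual` satisfies a declared type if the
    declared type is among its supertypes."""
    return declared in SUPERTYPES.get(actual, {actual})

def first_match(rules, arg_types):
    """Naive sequential strategy: head of the first applicable rewrite."""
    for declared, _result, pattern in rules:
        if all(matches(d, a) for d, a in zip(declared, arg_types)):
            return pattern[0]
    return None

foo = (('fixnum', 'integer'), 'integer', ('foo', 'arg-0', 'arg-1'))
baz = (('integer', 'fixnum'), 'integer', ('baz', 'arg-0', 'arg-1'))

# Both rules apply to (fixnum fixnum), so entry order decides:
print(first_match([foo, baz], ('fixnum', 'fixnum')))  # -> foo
print(first_match([baz, foo], ('fixnum', 'fixnum')))  # -> baz
```

Extending `matches` to handle compound declarations like
(and integer (not fixnum)) removes the overlap, at the price of the
more cumbersome rules shown above.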

But even this doesn't solve the problem when you're adding both
types and rules, since the old rules won't discriminate wrt the new
types.

All of these issues apply to the programmatic approach as well,
since adding one new case or type could easily require rewriting the
old optimizer (or adding advice that amounts to the same).

I guess it's obvious that I think one should be able to incrementally
extend the optimizations, modulo probably inevitable pathological
cases.


P.S.  My preferences may be biased by having gotten used to adding
      rules to Lucid's compiler, which works something like what
      I'm suggesting.