
Questions about CommonLoops



I've been waiting to hear the official news from the IJCAI meeting,
which I couldn't attend, but I have finally become impatient to put in
my two cents' worth.  Here goes...

I am admittedly a beginner at this game, but of the object-oriented
systems I've read about, I like the general approach of CommonLoops the
best.  Unlike flavors, it does not inflict a tremendous load of
complexity on the user who wants to do something simple.  CommonLoops is
a bit more powerful and extensible than the H-P proposal, without too
much more user-visible complexity.  As for Object Lisp, it has a certain
elegance, but to me its way of treating an object as a set of dynamic
function and variable bindings is very non-intuitive.  Perhaps I would
learn to like Object Lisp if I had a chance to play with it, and I hope
that a portable version is made available for this purpose, but I am not
drawn toward what I see in the Object Lisp manual.  The function-calling
syntax and the operator specialization of CommonLoops look good to me,
and I think that the basic "class" level provides just about the right
amount of functionality for most of the things I would want to do.

Having said that, I must also say that there are a lot of unclear parts
in the CommonLoops proposal, and some apparent problems that I can't see
how to resolve.  Perhaps these are misunderstandings on my part.  As a
general comment, let me say that the CommonLoops document suffers from a
critical lack of examples, both small and large.  Each of the proposed
mechanisms should be accompanied by a little example.  It is very hard
to see exactly what is being proposed from text alone.  I know that the
authors of the document had to rush to get something out before IJCAI,
but now we badly need an expanded version of the proposal with lots of
examples.  We also need a more detailed sketch of how all this would be
implemented, if not the code itself, so that we can see clearly what all
of this costs in performance.

Perhaps the easiest way to proceed is for me to say what I think the
proposal says about certain issues and how certain things might be
implemented, and then people can tell me where I'm confused.

As I understand it, there is a new function-like object called a
discriminator that binds together a family of methods that all share the
same selector.  This discriminator does everything that a regular
function object does -- you can pass it around, funcall it or apply it
to arguments, lexically bind it to some selector symbol, globally bind
it by stuffing it into the selector symbol's function cell, and so on.
Even if this discriminator was defined by a bunch of (defmethod foo ...)
calls, it has no permanent tie to the symbol FOO.  You could move all of
the behaviors to BAR by (setf (symbol-function 'bar) (symbol-function
'foo)).  An anonymous discriminator can be created by
MAKE-DISCRIMINATOR, and additional methods can be added to the bundle by
SPECIALIZE.  A method can be removed from the bundle by REMOVE-METHOD.
There probably also should be a way to examine the constituent methods
of a discriminator -- this does not seem to be included in the proposal.
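
To check my understanding, here is the sort of thing I would expect to
work.  (The class WOMBAT and the method bodies are placeholders of my
own; only DEFMETHOD and the SETF idiom come from the proposal.)

    (defmethod foo ((x wombat)) (feed x))   ; creates FOO's discriminator
    (defmethod foo ((x number)) (+ x 1))    ; adds a second method to it

    ;; The discriminator is an ordinary function object:
    (funcall #'foo 3)                       ; => 4
    (apply #'foo '(3))                      ; => 4

    ;; Nothing ties it permanently to FOO:
    (setf (symbol-function 'bar) (symbol-function 'foo))
    (bar 3)                                 ; => 4, same discriminator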

There seems to be some confusion in the description of MLET and MLABELS.
It appears that these are meant to lexically bind a new or modified
discriminator to some selector symbol.  If so, the manual should not
speak of "binding the function cell", since the function cell is used
only to hold the global function definition; a different mechanism is
used by FLET and LABELS.  It is not clear whether MLET and MLABELS are
meant to create a new discriminator that is a copy of the old one for
the selector in question, plus the new method, or whether the new
method is to be temporarily inserted into the existing discriminator.  I
think I would favor the former semantics.  This does not matter as long
as the discriminator is only referenced by way of the selector in
question, but someone else may have hold of the old discriminator.
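
A tiny example of why the difference matters.  (I am guessing at MLET's
syntax by analogy with FLET; the body and arguments are placeholders.)

    (setq saved #'foo)                  ; someone captures the discriminator
    (mlet ((foo ((x wombat)) (frob x)))
      ;; Copy semantics: SAVED still behaves exactly as before.
      ;; Insertion semantics: the temporary wombat method shows up in
      ;; SAVED as well, which seems wrong.
      (funcall saved some-wombat))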

Now, if the discriminator can indeed be used in all the ways that a
function can be, it seems obvious to implement it as a function
(probably a closure).  In its simplest (non-compiled) form, this
function would go down a pre-ordered list of type-restricted methods
(the "method precedence list" referred to in the paper), looking for one
whose type restrictions are satisfied.  The first winner gets funcalled
with the arguments that were passed in to the discriminator.  Presumably
the method bodies in a given discriminator may be a mixture of compiled
and uncompiled functions.
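
Concretely, something like this is what I am imagining for the simple
case.  (MAKE-SIMPLE-DISCRIMINATOR and the method representation here
are my own invention, purely to make the idea concrete.)

    (defun make-simple-discriminator (methods)
      ;; METHODS is the method precedence list: each element is a cons
      ;; of a list of type specifiers and the method function, with the
      ;; most specific method first.
      #'(lambda (&rest args)
          (dolist (method methods (error "No applicable method."))
            ;; EVERY stops at the end of the shorter list, so arguments
            ;; beyond the restricted ones are not type-checked.
            (when (every #'typep args (car method))
              (return (apply (cdr method) args))))))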

Obviously it is going to be pretty slow to do all that type-testing for
every call, since each test involves chasing multiple paths up the type
hierarchy.  I guess that this is where the caching comes in.  Once a
particular combination of a selector and some argument types has been
called, this combination can be hashed for future use.  However, it
would be very tricky to keep the cache consistent if discriminators are
not permanently tied to selectors, or if they don't have to have
selectors at all.  I guess the alternative is to hash not on the
selector symbol but on the discriminator itself.  One would still have
to flush all the hashtable entries for a discriminator every time a new
method is added, and changing the type hierarchy would invalidate large
parts of the cache, but in the usual case we could get a call down to N
get-type calls (one for each arg), one hashtable lookup, and an extra
funcall.
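
Here is the sort of cache I have in mind, keyed on the discriminator
object itself rather than on the selector symbol.  (All the names here
are mine; SLOW-LOOKUP stands for the full precedence-list walk.)

    (defvar *method-cache* (make-hash-table :test #'equal))

    (defun fast-call (discriminator &rest args)
      (let ((key (cons discriminator (mapcar #'type-of args)))) ; N get-type calls
        (apply (or (gethash key *method-cache*)                 ; one lookup
                   (setf (gethash key *method-cache*)
                         (slow-lookup discriminator args)))     ; full search, cached
               args)))                                          ; the extra funcall

    ;; SPECIALIZE and REMOVE-METHOD would have to flush the entries for
    ;; the discriminator they touch:
    (defun flush-cache (discriminator)
      (maphash #'(lambda (key value)
                   (declare (ignore value))
                   (when (eq (car key) discriminator)
                     (remhash key *method-cache*)))
               *method-cache*))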

That's good enough for system development, maybe, but for production use
there has to be some way to say "I'm done with changing the type
hierarchy and with creating new methods, so now compile everything as
tightly as possible".  Certainly you should be able to avoid runtime
lookup if you call (foo (the wombat bar)), where foo has some method
that is applicable to wombats.  This will be especially important for
methods that are really slot-variable references, since even the cost of
a function-call would be unacceptable here.  There has to be some way to
give the compiler enough compile-time information to make it possible to
turn a slot reference into a simple inline SVREF.
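
For instance, given full compile-time information, I would hope the
compiler could perform a transformation like this (GET-X and the slot
index 2 are of course made up):

    ;; Source code, with the argument's class declared:
    (get-x (the wombat w))

    ;; If GET-X's wombat method is nothing but a slot reference, and if
    ;; slot X lives at index 2 of a wombat's storage vector, this ought
    ;; to compile into the equivalent of:
    (svref w 2)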

A critical issue, then, is exactly what can be changed when.  If a new
method is added to the system, is it necessary to go back and recompile
every function that calls the selector function in question?  That is
clearly impractical, so the alternative is to always do the type
dispatch (or at least the cache lookup), even in compiled code.  Or can
we perhaps treat selector functions as something like macros or inline
functions -- if you change them, previously compiled functions retain
the old definition?  Or is there some way to get reasonable efficiency
without having to take this step?  In flavors the set of things that
might have to be recompiled is relatively localized; here things are
scattered all over known space.
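
For comparison, inline functions already behave this way in Common Lisp
(SIDE here is just a placeholder accessor):

    (proclaim '(inline area))
    (defun area (s) (* (side s) (side s)))

    (defun f (s) (area s))    ; the compiler may wire AREA's definition in

    ;; Redefining AREA now has no effect on the already-compiled F; F
    ;; must itself be recompiled to see the change.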

This issue is closely related to the question of whether it is permitted
to create type-restricted methods for the built-in functions in Common
Lisp.  The CommonLoops proposal dances around this issue and does not
seem to take a clear stand.  Obviously, this cannot be allowed if these
specializations are to have a retroactive effect.  In that case, we
would never again be able to open-code a CAR or SVREF because someone
might someday come along and specialize it.  Even if such restrictions
were not retroactive, a user should probably be prevented from
specializing certain essential functions such as CAR, since this would
slow down all subsequent calls to that function by a huge factor.  It is
probably best just to forbid this, either for all built-in Common Lisp
functions or for a list of the commonly used ones that might be
open-coded.
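
To be concrete about the danger (the LAZY-LIST class and its accessors
below are hypothetical):

    ;; The sort of definition that would have to be forbidden:
    (defmethod car ((x lazy-list)) (force (lazy-first x)))

    ;; If this took retroactive effect, no compiler could ever again
    ;; open-code CAR, since any call site might now receive a lazy-list.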

Some other issues:

The whole meta-class business sounds like a win, but without a much more
detailed description of what is controlled by the meta-class and how
this is specified, it is impossible to tell whether this is going to
fly.  I can't even formulate the proper questions in this area until the
system is spelled out a bit more.

It is unclear to me how multiple meta-classes can coexist peacefully.
What is the proper specificity ranking between two methods with selector
FOO, when one has its first argument restricted to instances of class
BAR and the other has its first argument restricted to instances of
flavor BLETCH?  (I assume that there cannot be both a class BAR and a
flavor BAR.)  Can a method have one argument that is a class-instance
and another that is a flavor-instance?  Which meta-class controls what
in such a case?
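
In code, the two cases I am worried about (BAR, BLETCH, and the method
bodies are placeholders):

    (defmethod foo ((x bar)) 'via-the-class)      ; BAR is a class
    (defmethod foo ((x bletch)) 'via-the-flavor)  ; BLETCH is a flavor
    ;; How are these two ranked against each other?

    (defmethod mumble ((x bar) (y bletch)) (list x y))
    ;; One class-instance argument, one flavor-instance argument --
    ;; which meta-class controls the discrimination for MUMBLE?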

In the list of built-in Common Lisp types that are disallowed as method
type specifiers one finds INTEGER, yet INTEGER is used in several of the
examples.  Perhaps it is only INTEGER with additional range arguments
that is disallowed?
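
That is, presumably the intended distinction is something like this:

    (defmethod foo ((x integer)) (- x))          ; the form used in the examples
    (defmethod foo ((x (integer 0 255))) (- x))  ; the form that is disallowed?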

The extension listed in II.A, individual specifiers, seems like a good
one.  However, I wonder if it is really advisable to twist up the
usual left-to-right specificity order in this one case.  Are individuals
really so much more specific that they warrant this special treatment?
Some motivation for this odd exception would be useful.
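
For example (I am inventing the quote syntax for the individual
specifier here, and supposing PYRAMID is a subclass of BLOCK):

    ;; One method is more specific in its FIRST argument:
    (defmethod move ((x pyramid) (y table)) 'class-method)

    ;; The other names an individual in its SECOND argument only:
    (defmethod move ((x block) (y '*the-red-table*)) 'individual-method)

    ;; If individuals always win, the individual match on Y outranks the
    ;; more specific class match on X, inverting left-to-right order.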

In section II.B, I suppose that the variable-reference format is
necessary for emulating systems like flavors, but I wonder if the hairy
long form is really necessary.  If you're in a case this complex,
shouldn't you revert to the unambiguous function-call syntax rather than
resorting to a macro that hacks print names?

I don't understand the "annotated values" business, though I see roughly
what you are trying to do.  Some examples are critically needed here.
The scheme for changing the class of an existing instance likewise needs
some elaboration and an example.

For the method-slots business, I don't even see what you are trying to
do.  Totally mysterious.  There is an example there, but it's awfully
hard to follow.

Well, that's my first round of confusions.  I hope there's nothing fatal
here.

-- Scott