- To: firstname.lastname@example.org
- Subject: Re: a-to-c
- From: Gregor J. Kiczales <email@example.com>
- Date: Mon, 11 Jun 90 09:45:44 PDT
- Cc: MOP.pa@Xerox.COM
- In-reply-to: firstname.lastname@example.org's message of Thu, 7 Jun 90 09:22:35 +0100 <9006070822.AA11973@uk.ac.cam.cl.aldham>
- Line-fold: NO
- Sender: email@example.com
> Date: Thu, 7 Jun 90 09:22:35 +0100
> From Richard Barber, Procyon Research Ltd, Cambridge, UK
Thanks for getting your comments back so soon. Along with some of the
other members of this list, I have just been at an X3J13 meeting, which
is why I haven't responded sooner.
I have questions about both of your comments. But, in order to phrase
my questions more concisely, let me first talk about an important kind
of optimization in implementing the MOP.
For the purpose of this message, let the term "standard-metaobject" mean
those metaobjects which are instances of one of the standard metaobject
classes: that is, a class which is an instance of standard-class, a
method which is an instance of standard-method, and so on. Note that
these are not instances of subclasses of the standard classes; they are
instances of the standard classes themselves.
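As a sketch of this distinction (the predicate name and the exact list
of classes are my invention, not part of the protocol), such a test would
compare the class of a metaobject directly rather than using typep, so
that instances of subclasses are excluded:

;; Hypothetical sketch: true only for direct instances of the
;; standard metaobject classes, not for instances of subclasses.
(defun standard-metaobject-p (metaobject)
  (member (class-of metaobject)
          (list (find-class 'standard-class)
                (find-class 'standard-generic-function)
                (find-class 'standard-method))))  ; etc.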
For any standard-metaobject, the implementation of the metaobject
protocol can be heavily optimized. For example, a standard generic
function metaobject never needs to call
compute-discriminating-function, because the rules of user
specialization prevent there being any user-defined applicable methods.
Formally, this optimization can be phrased as follows: the
implementation is permitted to elide a specified call to a specified
generic function if no portable methods would be applicable to the
arguments.
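Concretely, such an elision might look like the sketch below. This is
my illustration, not part of the specification; builtin-discriminating-function
is an assumed internal fast path, while set-funcallable-instance-function
and compute-discriminating-function are the protocol's own operators:

;; Hypothetical sketch: when a generic function is a direct instance
;; of standard-generic-function, no portable method on
;; compute-discriminating-function can be applicable, so the
;; implementation may install its built-in dispatcher directly
;; instead of making the full protocol call.
(defun install-discriminating-function (gf)
  (set-funcallable-instance-function
   gf
   (if (eq (class-of gf) (find-class 'standard-generic-function))
       (builtin-discriminating-function gf)    ; assumed fast path
       (compute-discriminating-function gf)))) ; full protocol call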
With this optimization in mind, let me ask a few questions about your
comments.
> The new specification of compute-discriminating-function is extremely
> constraining on the way implementations manage method dispatch in
> generic functions. In particular it forces the implementation to cache
> lists of applicable methods. We at Procyon have experimented with doing
> this but found it makes for an intolerable space overhead on small
The description of the method lookup protocol was specifically designed
to avoid requiring that lists of applicable methods be cached. Can you
explain how it requires this? Or perhaps you could say something about
your alternative technique, which would demonstrate the additional
leeway you would like.
> The second area which I am unhappy with (again for space reasons) is
> that of effective-slot-definition creation. (I mentioned this in a
> message to this group about 2 weeks ago.) The new specification of
> compute-effective-slot-definition states that on each call it creates
> a new effective slot definition (using make-instance). However, Procyon's
> current implementation of CLOS shares these objects between a class and
> those inheriting from it whenever the objects would be equal. Following
> the new specification and not doing this sharing results in a ***20%***
> increase in the size of our complete system image!!
> With this in mind, compute-effective-slot-definition should be permitted
> to return an existing effective slot definition object where an identical
> one exists. A minor consequence is that slot-definition-location
> should take an extra argument, the class in which the slot definition
As far as I can tell, there are three ways to address this concern:
1) Find a way to allow compute-effective-slot-definition to return an
object not created by a call to make-instance. For standard class
metaobjects, the optimization mentioned above can apply, but it is a
little weird in light of the protocol. That is, the protocol
strongly suggests that effective slot definition metaobjects will be
unique. It would take a somewhat different protocol to make this seem
natural, and I think that protocol would be much more difficult for
users to understand and extend.
2) Keep (almost) the same protocol, but on an implementation-specific
basis make the effective slot definition metaobjects much more
space-efficient. If, for example, compute-effective-slot-definition gets
the class as an argument, then one way to implement effective slot
definitions would be with just two slots: the class and the slot name.
Each reader of the effective slot definition would just recompute its
value by going down the class precedence list, etc. This should provide
most of the space savings you mention.
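A sketch of this two-slot representation might look as follows. The
class name compact-effective-slot-definition is my invention; the
reader shown recomputes one property (the initform) on demand by
walking the class precedence list, with the simplifying assumption
that the most specific direct slot definition supplies the value:

;; Hypothetical sketch: an effective slot definition holding only
;; the class and the slot name.
(defclass compact-effective-slot-definition ()
  ((class :initarg :class :reader esd-class)
   (name  :initarg :name  :reader slot-definition-name)))

;; Readers recompute inherited properties on demand instead of
;; storing them.  (Simplified: takes the initform of the most
;; specific direct slot definition with this name.)
(defmethod slot-definition-initform
    ((esd compact-effective-slot-definition))
  (dolist (class (class-precedence-list (esd-class esd)))
    (let ((direct (find (slot-definition-name esd)
                        (class-direct-slots class)
                        :key #'slot-definition-name)))
      (when direct
        (return (slot-definition-initform direct))))))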
Another way would be to use a technique similar to the one you
describe, having effective slot definitions share common values. In
such a scheme, each effective slot definition might have only one slot.
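That variant could be sketched as below. The table and class names are
my invention; the idea is simply that the computed property list is
interned so that equal lists are shared, and each effective slot
definition holds only a pointer to the shared entry:

;; Hypothetical sketch: equal property lists are interned in a
;; table so that classes with identical slot properties share
;; storage.
(defvar *shared-slot-properties* (make-hash-table :test #'equal))

(defclass shared-effective-slot-definition ()
  ((properties :initarg :properties :reader esd-properties)))

(defun intern-slot-properties (plist)
  ;; Return a canonical, shared copy of an EQUAL property list.
  (or (gethash plist *shared-slot-properties*)
      (setf (gethash plist *shared-slot-properties*) plist)))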