Re: Compilation of methods per class.
- To: lange@CS.UCLA.EDU
- Subject: Re: Compilation of methods per class.
- From: Gregor Kiczales <firstname.lastname@example.org>
- Date: Tue, 2 Oct 1990 18:46:43 PDT
- Cc: email@example.com
- Fake-sender: firstname.lastname@example.org
- In-reply-to: "Trent Lange's message of Tue, 2 Oct 1990 05:05:16 PDT <email@example.com>"
- Sender: <firstname.lastname@example.org>
Date: Tue, 2 Oct 1990 05:05:16 PDT
From: Trent Lange <lange@CS.UCLA.EDU>
Once I switched it to that (so that the cache is filled first with i2),
my run on a Sun-4 in Lucid 3.0.1 showed the same problem I found earlier.
The first two timings for 10,000 calls took a best of 0.13 seconds (like
yours), but the last two took a best of 0.17 seconds.
So unless there's something screwy with my May Day PCL, the run time
in Lucid definitely seems to be dependent upon the order in which the
caches are filled (and with generic functions having more than two
specializations, sometimes quite adversely affected by that order).
However, they don't seem to be dependent on cache-filling order in Franz.
Now, we may be getting somewhere. There is no doubt that in the Lucid
port of PCL, cache operations are going to run slower than in the Franz
port. So in your case, where filling the caches in the `other order'
may affect the cache layout, and consequently affect the number of
probes required, one might expect Lucid performance to suffer more than
Franz. Similarly, in the case with methods on a number of classes,
where more cache probes may also be required, we would expect Lucid
performance to degrade faster than Franz.
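The fill-order effect described above can be illustrated with a toy model of a probe-based dispatch cache. This is only a sketch of the general technique, not PCL's actual cache code: with linear probing, whichever key is inserted first claims its home slot, and a colliding key inserted later is displaced, so the number of probes a given lookup needs depends on the order in which the cache was filled.

```python
# Toy linear-probing dispatch cache (illustrative only; NOT PCL's code).
# Two keys that hash to the same line cost different numbers of probes
# depending on which was inserted first.

CACHE_SIZE = 8

def home(key):
    # Hash a key to its preferred cache line.
    return key % CACHE_SIZE

def fill(keys):
    # Insert keys in order; a collision pushes the later key forward.
    cache = [None] * CACHE_SIZE
    for k in keys:
        i = home(k)
        while cache[i] is not None:
            i = (i + 1) % CACHE_SIZE
        cache[i] = k
    return cache

def probes(cache, key):
    # Count how many slots must be examined to find the key.
    i = home(key)
    n = 1
    while cache[i] != key:
        i = (i + 1) % CACHE_SIZE
        n += 1
    return n

# Keys 3 and 11 both hash to line 3, so they collide.
for order in ([3, 11], [11, 3]):
    cache = fill(order)
    print(order, [probes(cache, k) for k in (3, 11)])
```

Filling in one order makes key 3 a one-probe hit and key 11 a two-probe hit; the other order reverses them. If the program's hot loop dispatches on the displaced key, its generic function call overhead goes up even though nothing about the program changed.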
Let's look a little more carefully at what this means. The benchmarks
in question measure the time it takes to do generic function call
overhead. That is all they measure. That is why, when you produce a
case where the overhead is greater (more cache probes are required) the
numbers go up by what seem to be alarming percentages.
But what these numbers don't measure is the percentage of the total
runtime of a real program which is going to generic function call
overhead. That is, when you generate a case where the generic function
call overhead goes up by 50%, we certainly don't expect the time your
entire real program takes to run to go up by 50%. In fact, a 50%
increase in generic function overhead is likely to have a very
small effect on total system performance. I don't have any hard data on
this, but I believe JonL has some which he could share with us.
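The arithmetic behind this point is worth making explicit. The 5% figure below is purely an illustrative assumption (no measured dispatch fraction appears in this message): if only a small fraction of a program's time goes to generic function dispatch, even a large relative increase in dispatch overhead produces a small change in total runtime.

```python
# Back-of-the-envelope: how a change in generic-function call overhead
# translates into whole-program slowdown. The 5% dispatch fraction is an
# illustrative assumption, not a measured figure.

def total_slowdown(dispatch_fraction, overhead_increase):
    """Fractional increase in total runtime when only the dispatch
    portion of the program slows down."""
    return dispatch_fraction * overhead_increase

# A 50% increase in dispatch overhead, in a program spending 5% of its
# time in dispatch, slows the whole program by only 2.5%.
print(total_slowdown(0.05, 0.50))  # 0.025
```

Conversely, the same reasoning shows why the benchmark itself swings so wildly: it spends essentially all of its time in dispatch, so its dispatch fraction is close to 1 and the overhead change passes through almost undiluted.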
Now, there are two important caveats to what I am saying:
First, if generic function call overhead goes up by a really large amount
(say, by more than a factor of 3), it may start to become a serious issue
in the performance of real programs.
Second, there are certainly pathological programs, of which this
benchmark is one, where 50% changes in the generic function call
overhead will have close to a 50% effect on the performance of the total
program.
Finally, it is important to point out that this difference in the
performance of PCL in Lucid and Franz Lisp has nothing to do with the
two Lisps. It shouldn't be taken as a comment on the quality of either
Lisp, or as an indication of the quality of each vendor's future CLOS
products. For a variety of reasons, the Franz port of PCL has simply
had more work done to it. In particular, the Franz port of PCL has a
custom LAP code assembler.