[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
PCL benchmark
- To: larus@paris.berkeley.edu
- Subject: PCL benchmark
- From: Rob Pettengill <rcp@sw.MCC.COM>
- Date: Wed, 12 Oct 88 14:54:54 CDT
- Cc: kanderso@PEBBLES.BBN.COM, burdorf@RAND-UNIX.ARPA, CommonLoops.PA@Xerox.COM
- In-reply-to: kanderso@PEBBLES.BBN.COM's message of Wed, 12 Oct 88 13:38:25 -0400 <881012-104844-1868@Xerox>
- Redistributed: CommonLoops.PA
From: James Larus <larus@paris.berkeley.edu>
To: Chris Burdorf <burdorf@rand-unix.arpa>
Cc: CommonLoops.PA@xerox.com
Subject: Re: PCL benchmark
In-Reply-To: Your message of Mon, 10 Oct 88 15:07:58 PDT.
<8810102208.AA01749@rand.org>
Reply-To: larus@ginger.berkeley.edu
Date: Tue, 11 Oct 88 09:47:27 PDT
...
The first bug is that the caches for the discriminator functions
have 32 entries. While a fixed-size cache works for some generic
functions, it fails miserably for generic functions with more
than 32 methods (20% of the time was spent in one discriminator
function). Compounding this problem is the slow speed of the
cache-miss code (20-30% of the time).
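To make the failure mode concrete, here is a minimal Python
simulation (a sketch, not PCL's actual code) of a fixed-size,
direct-mapped method cache: once more distinct classes than
cache entries collide on the same slots, every call evicts the
entry the next call needs, and the miss rate goes to 100%.

```python
# Minimal simulation of a fixed-size, direct-mapped method cache
# (an illustrative sketch, not PCL's implementation).
CACHE_SIZE = 32

def run(num_classes, num_calls):
    cache = [None] * CACHE_SIZE    # slot -> class id ("wrapper")
    misses = 0
    for i in range(num_calls):
        cls = i % num_classes      # calls cycle over the classes
        slot = cls % CACHE_SIZE
        if cache[slot] != cls:     # miss: slow lookup, then refill
            misses += 1
            cache[slot] = cls
    return misses / num_calls

# 32 classes: only the 32 cold misses, then every call hits.
print(run(32, 3200))   # -> 0.01
# 64 classes: classes n and n+32 evict each other on every call.
print(run(64, 3200))   # -> 1.0
```

The same experiment run with the cache recompiled larger (as the
message suggests) brings the 64-class miss rate back down to the
cold-miss floor.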
...
I believe that the performance of the cache has been greatly
improved recently. Previously the cache key was taken from the
low-order bits of the byte address of the word-aligned
class-wrapper; because the low bits of a word-aligned address
are always zero, the cache could never be more than 25% full,
which made thrashing very likely. Gregor's new scheme allows
full use of the cache, with three layers of possible cache
hits: the key location, the folded key location, and any
location in the cache. This should dramatically improve the
previous cache performance. It would be nice to have
dynamically expandable caches in the future. For now, it is
also easy to recompile PCL with a larger cache.
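The two keying schemes can be sketched as follows in Python
(assumed details for illustration; the folding function and slot
arithmetic are hypothetical, not Gregor's actual code). The old
key wastes slots because word alignment zeroes the low two bits
of the address; the new probe gets three chances to hit before
declaring a miss.

```python
# Sketch of the old and new cache-keying schemes (assumptions,
# not PCL's actual implementation).
CACHE_SIZE = 32

def old_slot(wrapper_addr):
    # Old scheme: low-order bits of the word-aligned byte
    # address. With 4-byte words the low 2 bits are always 0,
    # so only every 4th slot (8 of 32) is ever reachable:
    # the cache is at most 25% full.
    return wrapper_addr % CACHE_SIZE

def new_probe(cache, key):
    # New scheme: three layers of possible hits.
    slot = key % CACHE_SIZE
    if cache[slot] == key:
        return slot                    # 1: the key location
    folded = (key ^ (key >> 5)) % CACHE_SIZE
    if cache[folded] == key:
        return folded                  # 2: the folded key location
    for i, k in enumerate(cache):
        if k == key:
            return i                   # 3: any location in the cache
    return None                        # genuine miss

# Word-aligned addresses (multiples of 4) reach only 8 of the
# 32 old-style slots:
addrs = [base * 4 for base in range(100)]
print(len({old_slot(a) for a in addrs}))   # -> 8
```

Any hashing of the full wrapper identity (not just the aligned
address's low bits) removes the 25% ceiling; the fold step and
final scan are what let collisions land anywhere in the cache.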
;rob
Robert C. Pettengill, MCC Software Technology Program
P. O. Box 200195, Austin, Texas 78720
ARPA: rcp@mcc.com PHONE: (512) 338-3533
UUCP: {ihnp4,seismo,harvard,gatech,pyramid}!ut-sally!im4u!milano!rcp