
Re: Avoid method cache locking



    So, I'd like to propose that the current caching mechanism be
    replaced with one that builds the complete cache and never changes
    it unless a method is redefined.  This may make the caches bigger
    (though each implementation could trade time for space as it saw
    fit), but it would produce faster discriminators, and perhaps
    smaller discriminators as well.
This suggestion has two nonfeatures.  First, many caches would have to be
updated whenever a new class is defined, since to be complete, a cache must
mention each applicable class (not just the class where the method is
defined).  For this reason, for any generic function that has a method on
standard-object, the size of the "complete" discriminator cache is the
total number of classes ever defined.

Here is an alternative scheme that requires locking only on update.
Method lookup can proceed without locking (or checking a lock), trapping
out to a cache-update procedure if no appropriate entry is found.
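
To make the reading side concrete, here is a rough sketch in C of what
such a lock-free lookup might look like.  The cache layout, the names
(method_cache_t, cache_miss_trap), and the per-generic-function sizing
are assumptions made up for illustration, not taken from any particular
implementation.

/* Hypothetical layout: one small cache per generic function, mapping a
 * class key to a method entry point.  All names are illustrative only. */
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>

typedef void (*method_fn)(void);            /* a method entry point        */

#define CACHE_SIZE 64                       /* power of two, per function  */

typedef struct {
    _Atomic(uintptr_t) keys[CACHE_SIZE];    /* class (wrapper) identifiers */
    _Atomic(method_fn) fns[CACHE_SIZE];     /* method addresses            */
} method_cache_t;

void cache_miss_trap(void);                 /* takes the lock and refills  */

/* Lookup: no lock is taken and no lock is checked.  A single word is
 * read; it is either a valid method address or the trap routine.       */
static method_fn lookup(method_cache_t *c, uintptr_t class_key)
{
    size_t i = (size_t)(class_key >> 3) & (CACHE_SIZE - 1);
    uintptr_t k = atomic_load_explicit(&c->keys[i], memory_order_acquire);

    if (k == class_key) {
        method_fn fn = atomic_load_explicit(&c->fns[i], memory_order_acquire);
        if (fn != NULL)             /* NIL check, for machines whose cache */
            return fn;              /* cannot hold a trap address          */
    }
    return cache_miss_trap;         /* no appropriate entry: trap out      */
}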

After taking the lock, the update code determines which entries it will be
modifying.  It smashes the method address in each of those entries,
replacing it with the address of a trap routine.  Since smashing a single
location is (should be?) atomic, any method-lookup process will either get
the correct method address, or trap out and wait to update the cache.  For
a machine whose cache does not contain an address to jump to, the entry can
be set to NIL instead, and the lookup code can check for NIL before using
the value it finds.
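
And a matching sketch of the update side, again with invented names and
a pthread mutex standing in for whatever lock the implementation
actually uses.  The type declarations are repeated from the lookup
sketch so the fragment stands on its own.

/* Hypothetical update path.  Types repeated from the lookup sketch.    */
#include <stdatomic.h>
#include <stdint.h>
#include <stddef.h>
#include <pthread.h>

typedef void (*method_fn)(void);

#define CACHE_SIZE 64

typedef struct {
    _Atomic(uintptr_t) keys[CACHE_SIZE];
    _Atomic(method_fn) fns[CACHE_SIZE];
} method_cache_t;

void cache_miss_trap(void);                 /* trapped lookups wait here  */

static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

static void install_entry(method_cache_t *c, size_t i,
                          uintptr_t new_key, method_fn new_fn)
{
    pthread_mutex_lock(&cache_lock);

    /* 1. Smash the method address to the trap routine.  Storing a single
     *    word is atomic, so a racing lookup sees either the old, valid
     *    address or the trap address, never a half-written value.        */
    atomic_store_explicit(&c->fns[i], cache_miss_trap, memory_order_release);
    /* (On a machine whose cache cannot hold a jump address, store NULL
     *  here instead; the lookup code checks for NIL before using it.)    */

    /* 2. With lookups diverted to the trap, rewrite the key, then publish
     *    the new method address last.                                     */
    atomic_store_explicit(&c->keys[i], new_key, memory_order_release);
    atomic_store_explicit(&c->fns[i],  new_fn,  memory_order_release);

    pthread_mutex_unlock(&cache_lock);
}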