Avoid method cache locking
- To: Bobrow.pa@Xerox.COM
- Subject: Avoid method cache locking
- From: Jon L White <firstname.lastname@example.org>
- Date: Wed, 5 Apr 89 22:57:38 PDT
- Cc: kanderso@DINO.BBN.COM, CommonLoops.pa@Xerox.COM, aboulanger@DINO.BBN.COM
- In-reply-to: Danny Bobrow's message of 3 Apr 89 13:37 PDT <890403-134856-4209@Xerox>
- Redistributed: CommonLoops.pa
I think your alternative scheme is flawed in a subtle way that goes under
the rubric of "the multiple-readers/single-writer problem". The basic
problem is that the "read" step cannot be atomic (unless explicitly made
so by the offensive interrupt-inhibition construct) -- you typically have
to read several words from the cache to verify that the keys match, and
then you read the result word (i.e. the "method address"). [Some Lisp
implementations redefine the granularity level of atomicity to be
essentially the function entry, rather than a single machine instruction,
or single (shared) memory fetch -- they would provide locking without the
offensive behaviour that Ken is referring to, but they have so many other
problems that I don't think they are in the majority.]
To make a long story short, the "multiple-readers/single-writer" problem seems
to require that the readers also acquire the lock. Ask Ron Goldman
(arg@Lucid.com), who is doing our parallel LISP (QLISP) implementation,
for references and more details.
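For concreteness, the discipline in question might look like the following sketch of a hypothetical method cache; the class and slot names are mine, for illustration only, not from any actual implementation:

```python
import threading

class LockedMethodCache:
    """Baseline discipline: readers acquire the same lock as the writer,
    so the multi-word lookup is atomic -- at exactly the cost Ken objects
    to.  Names here are illustrative only."""

    def __init__(self):
        self._lock = threading.Lock()
        self._table = {}            # key -> method address

    def update(self, key, method):
        with self._lock:            # single writer at a time
            self._table[key] = method

    def lookup(self, key):
        with self._lock:            # readers must lock too
            return self._table.get(key)
```

Every lookup pays the lock-acquisition cost even when no update is in flight, which is precisely the overhead worth designing away.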
Yet specific problems can avoid the reader lock by recoding in
appropriate ways. A recent issue of SIGPLAN Notices, Vol 24, No. 4,
contains papers from a workshop on Object-based Concurrent Programming;
the position paper by Herlihy of CMU, "Taking Concurrency Seriously",
spurred the idea for a non-locking cache in Lucid's implementation of
CLOS. Although he merely suggests "wait-free" programming, I translated
that into a relatively simple idea, and have volunteered to help Gregor
put it into PCL if he so wants.
The "simple idea" is that of version numbers -- where each update
increments the version. Readers simply check for EQLness between
the start of access and the finish thereof. Only two restrictions apply:
(1) the version number range be big enough so as not to "cycle"
back to the same number (assuming modular arithmetic) during
any small number of scheduling quanta;
(2) the updating process leaves the cache in a state such that if
an interrupted reader has performed some, but not all, of its
atomic steps, then the subsequent atomic steps of the reader
will not cause memory access violations.
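A minimal executable sketch of the "simple idea", with (1) and (2) noted in the comments. The names are mine, not from Lucid's or PCL's sources; and since user code here cannot inhibit the scheduler, an atomic whole-state swap stands in for running the update with other processes inhibited:

```python
import threading

class VersionedMethodCache:
    """Version-number cache: each update increments the version; a reader
    compares the version it saw at the start of its access with the one
    at the finish, and retries if they differ.  Illustrative names only."""

    SIZE = 8                                   # small table for the sketch

    def __init__(self):
        self.version = 0                       # (1): Python ints never cycle,
                                               # so fixnum wraparound does not
                                               # arise in this sketch
        self._state = ([None] * self.SIZE,     # keys
                       [None] * self.SIZE)     # method addresses
        self._writer = threading.Lock()        # one updater at a time

    def update(self, key, method):
        with self._writer:
            keys, methods = self._state
            keys, methods = list(keys), list(methods)
            i = hash(key) % self.SIZE
            keys[i], methods[i] = key, method
            # (2): publish a fully-built state with one atomic assignment,
            # so an interrupted reader only ever sees valid slots and can
            # never take a memory access violation.
            self._state = (keys, methods)
            self.version += 1

    def lookup(self, key):
        while True:
            v_start = self.version             # version at start of access
            keys, methods = self._state
            i = hash(key) % self.SIZE
            k = keys[i]                        # first fetch
            m = methods[i]                     # second fetch: the read as a
                                               # whole is not atomic
            if self.version == v_start:        # EQLness check at the finish
                return m if k == key else None
            # an update intervened; the fetches may straddle it, so retry
```

A collision simply evicts the older entry, as a cache is entitled to do; a reader that loses the race just repeats its handful of fetches.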
It isn't hard to see how a reasonable fixnum range satisfies (1), and
how a little care will satisfy (2), especially if the update process
is allowed to inhibit other processes from running while it is doing
its operations. Of course, on a true parallel machine, the task of
"inhibiting other processes/processors from running" is more serious
than a simple multiple-reader lock; so we wouldn't do it this way
in QLISP. But in fact a very minor variation of this "simple idea"
will work for parallel processors -- primarily, I believe, because
of the simple nature of a "cache" as a database.
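One plausible rendering of such a variation -- my guess at the shape of it, not the QLISP code -- has the updater make the version odd while an update is in progress and even again afterward, so readers detect an in-flight update themselves instead of having to be inhibited. (On real parallel hardware the fetches would also need memory-ordering barriers; in this Python sketch the interpreter's own serialization stands in for them.)

```python
import threading

class ParallelVersionedCache:
    """Variation for true parallelism: the writer need not stop readers.
    Odd version = update in progress; readers retry on an odd version or
    on any version change across their fetches.  Illustrative names."""

    SIZE = 8

    def __init__(self):
        self.version = 0                    # even: no update in progress
        self.keys = [None] * self.SIZE
        self.methods = [None] * self.SIZE
        self._writer = threading.Lock()     # one updater at a time

    def update(self, key, method):
        with self._writer:
            self.version += 1               # now odd: warn readers off
            i = hash(key) % self.SIZE
            self.keys[i] = key              # in-place update is safe now:
            self.methods[i] = method        # readers discard these fetches
            self.version += 1               # even again: update complete

    def lookup(self, key):
        while True:
            v = self.version
            if v % 2:                       # update in flight; try again
                continue
            i = hash(key) % self.SIZE
            k = self.keys[i]
            m = self.methods[i]
            if self.version == v:           # nothing changed underneath us
                return m if k == key else None
```

The writer pays two version bumps instead of one, and in exchange never has to inhibit any other processor.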
Of course, I have no doubt that many other people could work out
reasonable variants of this "simple idea" by themselves too. Once
you think to do this rather than to do locking, the problem becomes easy.
-- JonL --