Paging Performance
Date: Wed, 12 Aug 87 21:39 EDT
From: Brad Miller <miller@DOUGHNUT.CS.ROCHESTER.EDU>
As I sit here, editing something because my machine is too sluggish running a
compile and a dynamic GC to do anything else... I wonder:
I'm running on a 3620: the Maxtor disks are notorious for being slow. I
recall reading something in the internals doc under the scheduler that page
faults do not generate sequence breaks. That is, when GC, for example, asks
for a page from disk, the processor does not go to another runnable process
(e.g. the compiler, the editor): it just waits for the page to come in. <my
understanding of the doc, please correct me if I'm wrong>.
I wonder if some wizard could enlighten me: isn't that a big waste of
processor resource, esp. with GCs going on? I'd think those cycles could be
put to better use... it certainly seems like *I* could be using them right
now! I understand <part of> the trade-off here: certain parts of the system
might have to be moved to memory that cannot be paged, e.g. the scheduler.
Since the minimum system is 8mb, I'd think there would be space, and the
overall performance improvement would be worth it. I know this may not come up
if you always tend to do *one* thing at a time: but the lispm environment
makes saving state so easy I tend to do lots of different things at once -
giving my poor machine's <paging> disks a workout!!
As you deduced, the constraint is that the process-switching code must
never itself take a recursive page fault. However, it's not just the
scheduler which can
take a page fault. On a shallow-binding architecture, a
stack-group switch requires that all variables bound by the outgoing
process be unbound, and all variables bound by the process to be run be
rebound. So in
order for other processes to run during a page fault, all binding stacks
and all memory locations which are bound by any stack group must be
wired in physical memory.
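The swap described above can be sketched in a few lines. This is
illustrative Python with invented names (`Cell`, `BindingStack`), not
Symbolics code: the point is that a shallow-binding switch must write
every value cell the process has bound, and each of those writes could
fault unless the cell and the binding stack are wired.

```python
class Cell:
    """One global value cell per special variable (shallow binding)."""
    def __init__(self, value):
        self.value = value

class BindingStack:
    """Per-stack-group record of (cell, shadowed value) pairs."""
    def __init__(self):
        self.entries = []

    def bind(self, cell, new_value):
        # Save the outer value, then clobber the one global cell.
        self.entries.append([cell, cell.value])
        cell.value = new_value

    def swap(self, order):
        # Used for both switch-out (unwind, innermost first) and
        # switch-in (rewind, outermost first): exchange each cell's
        # current value with the value saved in the stack entry.
        for entry in order(self.entries):
            entry[1], entry[0].value = entry[0].value, entry[1]

    def unwind(self):            # switching this stack group out
        self.swap(reversed)

    def rewind(self):            # switching this stack group back in
        self.swap(list)

x = Cell("global")
a = BindingStack()
a.bind(x, "A's value")           # process A runs with its own x
a.unwind()                       # switch A out: outer value restored
assert x.value == "global"
a.rewind()                       # switch A back in
assert x.value == "A's value"
```

Every `entry[0].value` store in `swap` touches an arbitrary memory
location, which is why all binding stacks and all bound cells would have
to be wired before a switch could be guaranteed fault-free.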
Of course, there are ways around this problem, one of which is switching
to a deep-binding architecture. Another one is trying to
process-switch, and just punting if you happen to take a page fault.
The list of potential unimplemented kludges goes on and on.
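For contrast, here is a hedged sketch of the deep-binding alternative
mentioned above (again illustrative Python with invented names, not
Symbolics code): each process carries its own list of bindings and every
variable reference searches it, so a process switch is just a pointer
change that touches no per-variable cells.

```python
# Deep binding: no global value cells are swapped at switch time;
# the cost moves to every variable reference instead.
GLOBALS = {"x": "global"}

class Process:
    def __init__(self):
        self.bindings = []          # innermost binding first

    def bind(self, name, value):
        self.bindings.insert(0, (name, value))

    def lookup(self, name):
        # Deep search on every reference: walk this process's own
        # bindings, then fall back to the global value.
        for n, v in self.bindings:
            if n == name:
                return v
        return GLOBALS[name]

p, q = Process(), Process()
p.bind("x", "p's value")
assert p.lookup("x") == "p's value"   # p sees its own binding
assert q.lookup("x") == "global"      # q is unaffected; no swap needed
```

A "process switch" here is merely choosing which `Process` object is
current, so nothing extra needs to be wired for the switch itself.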
Then there is an argument which says that allowing other processes to
run during a page fault will actually degrade paging performance, because
page faults from two competing processes will be interleaved rather than
localized. As far as I know, this is just a theory, and has not been
tested.
Symbolics is currently testing an improved scheduler in-house. Its
inclusion in 7.2 depends on QA testing; I rather doubt it will be
included. Another thing we are working on constantly is decreasing the
paging overhead of common system operations, such as the compiler and
the DGC.
By the way, during a page fault all is not wasted. Practically anything
which can be computed by the wired system is: the clock is updated, the
mouse is tracked, and (most importantly) the paging system queues are
searched and updated to optimally select pages for replacement in
future page faults.
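The idea in that last paragraph can be sketched as a polling loop: while
the disk read is outstanding, the wired fault handler runs housekeeping
chores instead of idling. This is a rough, hypothetical model in Python,
not the actual microcode; `wait_for_page` and both arguments are invented
for illustration.

```python
import time

def wait_for_page(disk_done, housekeeping_tasks):
    """Spin until the page arrives, running wired-memory chores meanwhile."""
    while not disk_done():
        for task in housekeeping_tasks:
            task()      # e.g. update the clock, track the mouse,
                        # age the page-replacement queues

# Simulate a fault that takes ~10 ms to satisfy.
chores_run = []
deadline = time.monotonic() + 0.01
wait_for_page(lambda: time.monotonic() > deadline,
              [lambda: chores_run.append("tick")])
assert chores_run       # housekeeping ran during the "fault"
```

The key property is that every chore must live entirely in wired memory,
since it runs while the paging system is busy servicing the fault.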