
VM hooks



Guys,
here is what I have come up with.  Let me know if you have any
suggestions of other things that might be useful to your tuning work.
This is all new stuff for both of us, so don't be afraid to scream
out loud.
The good news is that I have a gut feeling that it is mostly using
a lot of text pages and does not use up many data pages.  The bad
news is that you probably knew this already....
sandro-
-----------------------------------------------------------------------------
1- Quick fixes
On a 16 meg machine you can recover 1.6 meg by doing this once on a
new kernel image, as root (bufpages sizes the buffer cache, nmbclusters
the pool of network mbuf clusters):
	adb -w /vmunix
	bufpages/W 0t200
	nmbclusters/W 100
	^D
and reboot.
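To check that the patch took, after the reboot you can do something
like this (a sketch, assuming your adb supports -k for kernel memory;
the D format prints a 4-byte decimal):
	adb -k /vmunix /dev/kmem
	bufpages/D
	nmbclusters/D
	^D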
These limits can probably be imposed on all machines, but I want to
be sure that a machine can tolerate them under a heavier and more
realistic load than what I can put on my box alone.
[Btw, X11 takes two meg, Motif 1 meg, and starting up xeyes makes X
 eat 20 pages.  Yes, you can have fun with this toy]

2- task_info
I have implemented something that was defined but not implemented in
the Mach interface.  The program /../countach/usr/af/bugs/task_info
(on the new kernel, see later) will give you the following information,
either continuously or as a one-shot:

	task virtual size: 1110016 bytes (271 pages)
	task resident memory: 36864 bytes (9 pages)
	number of page faults: 39
	number of zero fill pages: 2
	number of reactivated pages: 6
	number of actual pageins: 7
	number of copy-on-write faults: 4
	number of messages sent: 0
	number of messages received: 0
	suspend count for task: 0
	base scheduling priority: 12
	user run time (terminated threads): 0.000 seconds
	system run time (terminated threads): 0.000 seconds

The source is in the same place; basically, this is the per-process
counterpart of vm_stat().
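For the record, here is a minimal sketch of how a client calls it.
I am assuming the flavor and field names from the standard Mach
task_info headers (TASK_BASIC_INFO, TASK_EVENTS_INFO); check them
against <mach/task_info.h> on this kernel before trusting it:

	/* sketch only: flavors and fields assumed from <mach/task_info.h> */
	#include <stdio.h>
	#include <mach.h>

	main()
	{
		task_basic_info_data_t	basic;
		task_events_info_data_t	events;
		unsigned int		count;

		count = TASK_BASIC_INFO_COUNT;
		if (task_info(task_self(), TASK_BASIC_INFO,
			      (task_info_t)&basic, &count) != KERN_SUCCESS)
			exit(1);
		count = TASK_EVENTS_INFO_COUNT;
		if (task_info(task_self(), TASK_EVENTS_INFO,
			      (task_info_t)&events, &count) != KERN_SUCCESS)
			exit(1);

		printf("task virtual size: %d bytes\n", basic.virtual_size);
		printf("task resident memory: %d bytes\n", basic.resident_size);
		printf("number of page faults: %d\n", events.faults);
		printf("number of actual pageins: %d\n", events.pageins);
		exit(0);
	}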

3- vmmprint
In the same directory there is another program that you can run 
like
	vmmprint -p<NNNN> -e -l
which will give you a snapshot of the virtual address space of the
process.  The -l switch goes as far as telling you which pages are
there and which are not.  This can be a huge printout for lisp.
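If you want to roll your own snapshot, the obvious way (and presumably
what vmmprint does, though I have not checked) is to walk the address
space with the standard vm_region() call.  A sketch, assuming the
Mach 2.x signature, where "task" is the target task's port:

	#include <stdio.h>
	#include <mach.h>

	void
	print_regions(task)
		task_t	task;
	{
		vm_address_t	addr = 0;
		vm_size_t	size;
		vm_prot_t	prot, maxprot;
		vm_inherit_t	inherit;
		boolean_t	shared;
		port_t		name;
		vm_offset_t	offset;

		/* each call returns the next region at or above addr */
		while (vm_region(task, &addr, &size, &prot, &maxprot,
				 &inherit, &shared, &name, &offset)
				== KERN_SUCCESS) {
			printf("region at %x, size %x, prot %x\n",
			       addr, size, prot);
			addr += size;	/* step past this region */
		}
	}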

4- task_traced
The metered kernel also has the capability of printing each and every
fault that the process takes, the exact address where it faulted,
and whether it was a read or write fault.  It would look like
	[R@402020][W@402020]..
(this is a read fault followed by a write to the same location)
You get this going by
	- drop into kdb
	- type "$l" and note the task number (say 0x80081818)
	  of the lisp process
	- type "task_traced/W 80081818"
	- get out of kdb with a ":c"
The printouts go to the user terminal, so be ready with your emacs buffer.
To stop this:
	- drop into kdb
	- type "task_traced/W 0"

[Beware that faulting here is not the same as paging in; the page
 might well be around, but the machdep layer does not know where.]
This is an extreme facility that will show you exactly where, and in
what sequence, your lisp touches memory.  I would expect that by
filtering the info that comes out of these 'traces' you might
end up with a better understanding of why the working set is so
horrendously big [I followed Will's suggestion to recompile setq.lisp
and it used 10 meg of paging space beyond the 9 meg of core!!]
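As a starting point for that filtering, here is a hypothetical little
filter for the trace format above: it buckets the faults by page and
prints a per-page histogram.  The [R@addr][W@addr] records are as
shown earlier, but the 4K page size is an assumption; compile it and
run it over the saved emacs buffer.

	#include <stdio.h>

	#define	PGSHIFT	12	/* assumed: 4K pages */
	#define	MAXPG	4096	/* distinct pages we can track */

	struct pgcnt {
		unsigned long	page;
		int		reads, writes;
	} pg[MAXPG];
	int	npg;

	main()
	{
		char		type;
		unsigned long	addr, page;
		int		i;

		/* each record looks like [R@402020] or [W@402020];
		   the loop stops at the first non-record text */
		while (scanf(" [%c@%lx]", &type, &addr) == 2) {
			page = addr >> PGSHIFT;
			for (i = 0; i < npg; i++)
				if (pg[i].page == page)
					break;
			if (i == npg) {
				if (npg == MAXPG)
					continue;	/* table full */
				pg[npg++].page = page;
			}
			if (type == 'W')
				pg[i].writes++;
			else
				pg[i].reads++;
		}
		for (i = 0; i < npg; i++)
			printf("page %lx: %d reads, %d writes\n",
			       pg[i].page << PGSHIFT,
			       pg[i].reads, pg[i].writes);
		exit(0);
	}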

5- kernel
It's been running on my machine since yesterday; you can pick it up
from
	/../countach/usr/af/latest/STD+ANY+EXP/vmunix
This is NOT a kernel that can go into production mode yet.
I want you guys to use it, give me feedback, and then I'll see
what can be made generally available out of this furious hacking
session.