LISPM Execution times
- To: RG at MIT-AI, GJS at MIT-AI, INFO-LISPM at MIT-AI
- Subject: LISPM Execution times
- From: RG at MIT-AI (Richard Greenblatt)
- Date: Mon, 27 Aug 79 07:19:00 EDT
- Cc: nil at MIT-MC
- Sent-by: RG0 at MIT-AI
CWH and I timed the examples reported by RJF for the VAX and KA-10
on the LISPM. The results highlight an interesting point: about
half the real-time delay as seen by the user was disk swap delay,
i.e., time spent swapping stuff out of physical memory to make room
for freshly CONSed data. (The LISPM creates freshly consed pages
directly in physical core without swapping them in.) To show this,
the examples were run both "regular" and immediately after doing
a (si:page-out-area working-storage-area). Page-out-area
updates the disk copies of pages, thus creating a reservoir
of physical pages which can be immediately purged without writing
them out.
All times reported for the LISPM are net elapsed wall-clock time
and, as such, include the full time of all disk transfers. Loops
executing problem 1 100. times and problem 2 10. times were measured;
the times in the table below are seconds per execution.
With the 256K machine immediately after a page-out-area,
there appeared to be very little swapping going on, judging from the run-bar.
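For concreteness, here is a minimal sketch of how an "after page
out" trial might be driven (EXPAND-TEST is a hypothetical stand-in
for the (x+y)^12 or (x+y+z)^20 expansion; only SI:PAGE-OUT-AREA and
WORKING-STORAGE-AREA are real):

    ;; Sketch only -- not the actual test harness.
    (defun run-trial (expand-test n)
      ;; Bring the disk copies of consed pages up to date, leaving a
      ;; reservoir of physical pages purgeable without a disk write.
      (si:page-out-area working-storage-area)
      ;; Elapsed wall-clock time was taken around this loop; N was
      ;; 100. for problem 1 and 10. for problem 2.
      (dotimes (i n)
        (funcall expand-test)))

    ;; e.g. (run-trial #'expand-1 100.)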
machine (all compiled)       (x+y)^12    (x+y+z)^20
KA-10 (RJF)                    .116         1.54
VAX Franz (8/16, RJF)          .20          2.54
128K LISPM                     .19-.24      2.37
256K LISPM                     .11          1.6-1.8
128K L. after page out         .12          .9-1.1
256K L. after page out         .09          .7
The most interesting figure is the last one, which most clearly
illustrates the point mentioned earlier.
Note that this problem would not be significantly alleviated by
arbitrarily large amounts of physical memory. However, increasing
the page size (or at least the quantum transferred in a disk op)
would help.
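(To make that concrete with made-up numbers: if a disk op costs,
say, 30 ms. of seek and rotational delay plus 1 ms. per page
transferred, then moving 1 page per op costs 31 ms. per page, while
moving 8 pages per op costs 38/8, or about 4.75 ms. per page; the
fixed per-op overhead is amortized over the larger quantum.)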
During the run on the
128K LISPM after a page-out, there was little or no swapping
for the first half or so of the run; then swapping
started, evidently as the pool of flushable physical pages
was exhausted. Comparing the times, it appears that
almost all the difference between the 128K and 256K
LISPMs was caused by this effect. That is, presumably
the only reason the 256K machine was faster in the "regular"
trial was that when a page was selected for swapout, there
was a greater chance of selecting a data page as opposed to a
program page, simply due to the greater number of data pages
present in physical memory. In this example, the data page
would probably not be needed again, while the program page
probably would be.
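(Concretely, from the table: on problem 1 the gap between the 128K
and 256K machines is .19-.24 vs. .11 in the regular trials, roughly
.08-.13 sec. per execution, but only .12 vs. .09, about .03 sec.,
after a page-out. Most of the regular-trial gap thus disappears
once the swap-out cost is removed.)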
This effect probably also goes some distance toward explaining
the long-standing mystery as to why the LISPM appeared to be 2-3 times
KA-10 speed on WOODS, but about equal on MACSYMA. Computationally,
it really is about 2 times as fast on the MACSYMA example as well, but
that one happens to be CONS-intensive. In the future,
then, one should be aware of whether one is selecting CONS-intensive
examples, and make appropriate allowances.
To summarize: in the past the cost of a CONS has been considered to
consist of two terms, the CONS time itself and the GC time eventually
required to reclaim it. (GC time was not included in any of the
foregoing.) In the case where one is considering elapsed time
and a large virtual memory, there is a third cost, namely, the
time taken to swap stuff out of physical memory so as to make room
in which to CONS. This latter cost can be quite significant.
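Schematically (the term names here are just for exposition):

    total cost of a CONS = T-cons + T-gc + T-swapout

where T-swapout is the elapsed time spent writing old pages out to
disk to make a physical page available for the new data.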
Once one is aware of the problem, there are various schemes one
can imagine to ameliorate it, but that is a subject for
another day.