lispm market share; slow LISPM I/O
Date: Tue, 16 Jan 90 18:11 EST
From: Leslie A. Walko <firstname.lastname@example.org>
I was under the impression that file I/O is slow because there is no
overlapped I/O implemented in Genera --the process waits while the disk
seeks.
This is true for paging, but not for other I/O. See the function on
p.69 of "Internals, Processes, and Storage Management" for an example of
a program that does quite a bit of asynchronous I/O to the disk. In the
case of LMFS, the process that is reading a file does indeed block until
the data is read from disk, but other processes can still run.
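That distinction can be sketched with Python threads (Genera processes are not Python threads, and this is only a conceptual analogue of the behavior described, not Genera code): the reading "process" blocks on the disk while another keeps running.

```python
# Conceptual sketch: one thread blocks on a (simulated) disk read
# while a second thread keeps computing, as described for LMFS.
import threading
import time

def read_file(results):
    time.sleep(0.05)                # stand-in for a blocking disk read
    results.append("file data")

def other_work(counter):
    for _ in range(1000):           # another "process" keeps running
        counter[0] += 1

results, counter = [], [0]
reader = threading.Thread(target=read_file, args=(results,))
worker = threading.Thread(target=other_work, args=(counter,))
reader.start(); worker.start()
reader.join(); worker.join()
```

Only the reader waits on the disk; the worker's loop completes regardless.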
Page faults, however, are processed synchronously and block the whole
machine. I believe this was a design decision, to simplify the system.
For one thing, allowing process scheduling during page faults means that
the process scheduler must not take any page faults. I understand that
some experiments with changing this were done during the scheduler
rewrite that was done for Ivory, and it didn't give enough of a
performance boost to justify including it. However, in applications
where the page access order can be predicted, there are asynchronous
interfaces to prefetch pages (see the SYS:PAGE-IN-xxx functions, and
look at the progress area during an APROPOS in a large package to see
this in action).
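The prefetch idea translates roughly as follows (a Python analogue, not the real SYS:PAGE-IN-xxx interface): when the access order is predictable, issue all the reads up front so the consumer rarely has to wait.

```python
# Sketch of prefetching under a predictable access order: queue every
# read ahead of time; by the time the consumer asks for a page, the
# fetch has usually already completed in the background.
from concurrent.futures import ThreadPoolExecutor

def fetch_page(n):
    return f"page-{n}"              # stand-in for reading a page from disk

def scan(page_numbers, workers=4):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(fetch_page, n) for n in page_numbers]
        return [f.result() for f in futures]   # mostly already done

pages = scan(range(3))
```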
Multiple processes using LMFS simultaneously also tend to block at
higher levels. If you look at a busy LMFS server, you'll often see
processes in "LMFS Lock" state. There's a global process queue
controlling access to LMFS, so some overlap is prevented at this level.
This presumably simplifies the implementation of LMFS's buffer
management, since it doesn't have to do any locking of the lower-level
data structures, but it prevents some multiprogramming while waiting for
the disk to seek.
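The effect of that single queue can be sketched in Python (plain threads and a lock, not LMFS internals): with one global lock, two requests serialize completely, even through the simulated seek where they could have overlapped.

```python
# Sketch of the "LMFS Lock" bottleneck: a single global lock means
# each file-system request runs start-to-end before the next begins.
import threading
import time

lmfs_lock = threading.Lock()        # one lock guards the whole file system
timeline = []

def lmfs_request(name):
    with lmfs_lock:                 # waiters sit here, like "LMFS Lock" state
        timeline.append((name, "start"))
        time.sleep(0.01)            # stand-in for seek + transfer
        timeline.append((name, "end"))

threads = [threading.Thread(target=lmfs_request, args=(n,)) for n in "AB"]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Whichever request gets the lock first finishes entirely before the other starts; the timeline never interleaves.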
Finally, the Lispm has only one disk controller, so it can't take
advantage of multiple disks as well as many other systems can; systems
with multiple controllers can keep several disks transferring
concurrently.
The official justification for this is that the user waits anyway since
these are single-user workstations. The technical reasons I heard were
that since I/O is implemented in microcode --there is no dedicated
channel processor hardware-- the I/O task keeps the processor quite busy.
However, the 36xx has a multitasking microengine, so overlap at this
level is possible (I don't know what the situation with Ivory is).
The blocking and de-blocking of the data coming off the disk is rather
CPU intensive also, so the net result (controlling a rather unintelligent
ESDI disk + blocking + error checking + some LMFS to raw file mapping
overhead) is high CPU utilization.
One of my colleagues (Peter Clitherow) has done some LMFS data transfer
rate measurements, and the numbers are rather horrible. This leads me
to believe that a dedicated channel controller would be a good thing.
It would allow for an efficient implementation of overlapped I/O because
it would take over all the CPU load currently associated with I/O. The
buffer pool under the channel controller's supervision could also do
extensive pre-fetching, pre-reading of pages, thus greatly improving
sequential file read/write performance. Curiously, the dedicated
processor would not do much for paging, but a big buffer pool sure
would help.
So how about some lower memory board prices!!!!
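The read-ahead part of that proposal can be sketched in plain Python (a hypothetical design, not a channel-controller implementation): a buffer pool that pre-reads a run of blocks on each miss turns a sequential scan's many stalls into a few.

```python
# Sketch of read-ahead in a buffer pool: on a cache miss, fetch a run
# of blocks, so a sequential scan stalls once per run instead of once
# per block.
class BufferPool:
    def __init__(self, disk, readahead=4):
        self.disk = disk            # list standing in for raw disk blocks
        self.readahead = readahead  # blocks pre-read per miss
        self.cache = {}
        self.misses = 0             # times the caller had to wait on the "disk"

    def read(self, block):
        if block not in self.cache:
            self.misses += 1        # caller stalls; pre-read a whole run
            end = min(block + self.readahead, len(self.disk))
            for b in range(block, end):
                self.cache[b] = self.disk[b]
        return self.cache[block]

disk = [f"block-{i}" for i in range(8)]
pool = BufferPool(disk)
data = [pool.read(i) for i in range(8)]   # sequential scan: 2 stalls, not 8
```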