
Re: LISPM I/O performance



    Date: Thu, 25 Jan 90 14:21:29 -0500
    From: kanderso@DINO.BBN.COM

  
      Here's the file I'm going to edit.  It was the biggest one I had readily
      available.
  
	    arppro13.zuu.1   53 235644(8)    !   11/10/89 12:54:56 (11/10/89)   Kosma
  
	..
  
      Evaluation of (ZWEI:FIND-FILE #P"KARL:>foo.foo") took 16.414146 seconds
      of elapsed time ...
  
      This works out to around 14K bytes/second, an abominable transfer rate,
      if you don't factor in overhead.  I don't know what zwei:com-read-file
      is doing internally, and I don't really care--it doesn't really matter.
      What DOES matter is that reading this file into ZMACS takes 16 to 17
      seconds, while reading the same file into emacs on my amiga takes 1 to 2
      seconds, and on our sparcstation loading it into gmacs, it's basically
    instantaneous (even I was surprised).
	...  

I would hesitate to call that a transfer rate.  I just copied a 620KB
local LMFS text file to a new file, and it took 13 seconds, which works
out to around 100K/second (13 sec for 1.24MB, assuming reads and writes
take the same time, which won't quite be the case since the heads have
to move between reading a buffer and writing it out).  That means at
most 2.4 seconds of your 16.4 seconds were spent transferring data from
the disk (assuming your disk is approximately the same speed as mine).
So most of your time was probably spent in ZMACS processing the data,
or in paging.
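As a back-of-envelope check, the arithmetic above can be redone explicitly (a sketch in Python for illustration; the file sizes and timings come from the messages, and the equal-disk-speed assumption is the one stated above):

```python
# Back-of-envelope I/O arithmetic from the figures above.
# Assumption (as stated in the message): reads and writes take about
# the same time, and both disks run at roughly the same speed.

copy_bytes = 620 * 1024 * 2      # 620KB copied = read + write = 1.24MB moved
copy_seconds = 13.0
throughput = copy_bytes / copy_seconds   # bytes/second through the disk

file_bytes = 235644              # size of arppro13.zuu from the quoted listing
raw_io_seconds = file_bytes / throughput

total_seconds = 16.4
overhead_seconds = total_seconds - raw_io_seconds

print(round(throughput / 1024))     # ~95 KB/s effective disk throughput
print(round(raw_io_seconds, 1))     # ~2.4 s of raw disk transfer
print(round(overhead_seconds, 1))   # ~14 s left for editor work or paging
```

The point of the calculation: even with a generous reading of the disk numbers, raw transfer accounts for only a small fraction of the 16.4 seconds.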

I don't really know what structures the other machines' editors build
up, but ZMACS does do a good bit of work.  (Do either of those other
editors allow you to have multiple fonts in a file and display them?)
In any case, it is still quite slow.  (And beware bumping the mouse into
the scroll bar when you have a big file!)

    I've suddenly found myself having to read in several large files
    repeatedly.  For example:

    -rw-rw-rw-  1 kanderso  5735077 Jan 23 00:57 rt.det.shrink

	...

    My first version used READ.  My second version used SI:XR-READ-THING,
    an internal read function.  This took 55 minutes to read in
    [:element-type 'string-char].  I then wrote my own version of
    parse-integer [because the standard one conses when building
    double-floats and bignums], and functions to parse floats and symbols.
    Using these took 42 minutes, about 30% less.

As has been stated, it depends upon what you mean by "read in".
Yesterday I had to "read in" a 5MB mail file (text) and it took
considerably less time than you are talking about.
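The quoted message's trick of replacing READ with hand-rolled numeric parsers can be sketched roughly as follows.  This is an illustrative Python analogue, not the poster's actual Lisp Machine code: the idea is to scan the buffer character by character and accumulate the value directly, rather than extracting an intermediate substring (or, in the Lisp case, consing boxed bignums and double-floats) for each number.

```python
# Illustrative sketch (not the poster's code): parse an unsigned integer
# in place from a text buffer, accumulating the value digit by digit
# instead of building a substring first.

def parse_uint(buf: str, start: int = 0):
    """Return (value, next_index), or (None, start) if no digits found."""
    i = start
    n = len(buf)
    value = 0
    while i < n and buf[i].isdigit():
        value = value * 10 + (ord(buf[i]) - ord('0'))
        i += 1
    if i == start:
        return None, start
    return value, i

# Example: pull successive integers out of a whitespace-separated line.
line = "42 7 1990"
pos = 0
numbers = []
while pos < len(line):
    if line[pos].isspace():
        pos += 1
        continue
    val, pos = parse_uint(line, pos)
    numbers.append(val)
print(numbers)   # [42, 7, 1990]
```

The same accumulate-as-you-scan approach extends to floats and symbols, which is presumably what the quoted message's parse functions did.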


As an aside, everyone has been berating the speed of Symbolics's I/O.
On the other side of the coin, at our site, NFS on the Suns is causing
us extreme grief.  For instance, we will get a notification that we have
new mail, but when we look, it hasn't arrived in our mailbox yet.  (In
other words, one host says it arrived, but either its cache wasn't
written to disk yet or our cache wasn't updated from disk yet.)  The
time between the notification and the point at which the file actually
has been updated may be minutes.  Needless to say, this hampers working
on two systems at once (e.g., coordinating between any two computers).
We can't edit our files on the Symbolics, write them out to the Sun file
server (with NFS), and then run LaTeX on them from a Sun, because we
don't know when the real file will actually show up.  It's rather
irritating to modify a file, print it out, and then find you actually
printed the old file.

We also have a report (from another site) that the file system has
suffered such problems as their /etc/passwd file getting overwritten by
a file queued for the printer.  (Luckily, one host at their site still
saw the original file, so they were able to recover.  I don't know
whether the file was overwritten on the disk or just overwritten in
everyone's caches.)  Other files have reportedly been mysteriously
filled with nulls.