LISPM I/O performance hysteria



Received: from KARL.kahuna.decnet.lockheed.com by ALAN.kahuna.decnet.lockheed.com via CHAOS with CHAOS-MAIL id 18137; 24 Jan 90 13:34:21 PST
Date: Wed, 24 Jan 90 13:34 PST
From: Montgomery Kosma <kosma@ALAN.kahuna.decnet.lockheed.com>
Subject: LISPM I/O performance hysteria
To: slug@ALAN.kahuna.decnet.lockheed.com
In-Reply-To: <19900123064834.5.RWK@HIGASHIYAMA.ILA.Dialnet.Symbolics.COM>
Message-ID: <19900124213412.2.KOSMA@KARL.kahuna.decnet.lockheed.com>

    From the tone of your message, I infer that something in
    my message has rubbed you the wrong way and led you to
    want to entrench yourself.  I apologize if that's so.
    Let me try to make my intent more clear.

not really...I'm mostly rubbed the wrong way by having to spend what
seems to be an inordinate amount of time waiting on file I/O.  Nothing
to do with you or your message :-).

    Although you seem to treat it as all one issue, you've
    really raised TWO sets of issues.

Well, I suppose that if one were really to look at optimizing the disk
routines, you would definitely have to separate the physical IO time
from the processing overhead of things like READ.  However, I don't use
READ, although it is not clear that the stuff I'm using is the most
efficient in the world either.  So, what is the most efficient way to
read in a text file?  Obviously, in my codes at least, I can reduce that
overhead by being smart about the way I do things (book 5 could be
better...).  On the other hand, things as basic as reading a file into
ZMACS take FAR too long (and I'm sure that the people who wrote ZMACS
knew lots more about optimizing file IO than I do).
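
One concrete way to attack that question is to measure the candidates.
A minimal sketch, assuming a Symbolics system (scl:time appears later in
this message; the pathname is hypothetical): time a plain line-at-a-time
read, which gives a baseline that has nothing to do with READ.

(scl:time
  (with-open-file (stream #P"karl:>kosma>sample.text")
    ;; Count lines so the loop does a little real work per line.
    (loop for line = (read-line stream nil nil)
          while line
          count line)))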

	Date: Fri, 19 Jan 90 18:44:51 CST
	From: "kosma%ALAN.kahuna.decnet.lockheed.com %ALAN.kahuna.DECNET.LOCKHEED.COM"@warbucks.ai.sri.com
	    Date: Wed, 17 Jan 90 15:06 PST
	    From: sobeck@RUSSIAN.SPA.Symbolics.COM
	    If you are willing to work with local FEP files, you can achieve performance of
	    about 250K Bytes/sec (reading or writing), not counting the time required to construct
		  ^^^^^^^^^^^^^^  !!!
	    the data structures.

	This is **exactly** the kind of benchmark I've been talking about!!
	My amiga (total system cost of about $3000) does disk i/o that peaks
	out higher than that!! (with a stupid seagate drive, even).  I've
	seen Micropolis drives peak out at over 600 KBytes/sec.

    The performance issue you're really complaining about has
    nothing whatsoever to do with this.  It's not the peak
    transfer rate that you're running into, it's the incredible
    amount of overhead you run into if you read your data via
    READ.

sorry, calling it "peak" is sort of a misnomer.  What I meant was not so
much an instantaneous transfer value as the maximum sustained value
(i.e. the file is large enough that startup times and the like are
relatively insignificant).

    Agreed, 250K is slow; the disk driver's overhead is rather
    excessive, and the 3600 series' design predates such niceties
    as VLSI disk controller chips.  The disks are slower than
    more modern ones.  The list goes on.  On the average,
    throughout the system, I'd say Symbolics IO is about 2.5x as
    slow as it should be, for the hardware.  And in some cases,
    the hardware is slower than it ought to be for the era in
    which it was designed.

    That's averaged over everything from the disk driver to
    the highest-level IO; the 2.5 figure varies widely depending
    on just which part of the IO system you look at.

    But let's keep some perspective here:  Your program spent
    maybe 15 minutes doing IO.  It spent maybe 30 minutes parsing
    with READ.  And it spent MANY HOURS doing SOMETHING ELSE.

Okay, first of all, I'm not sure what you mean by "my" program.  Perhaps
you have me confused with somebody else?  I'm not trying to pin down a
specific file IO example; I'm talking about everything from codes that
I've written to reading a text file into ZMACS.  

For example, one particular code which I've got running here does
approximately 15 to 30 minutes of file IO (NOT using READ, incidentally)
for approximately 2 minutes of compute time on the Connection Machine.


    So, sure, IO is a problem, but it is not >>YOUR<< main problem.
    At least, not yet.

IO *IS* a problem and is a main problem for me.  In fact, the two
primary factors behind my increasingly likely switch to Sun are
execution speed and disk IO speed.

	And that's reading/writing TEXT (HUMAN READABLE) FILES!!!

It SHOULDN'T have anything to do with anything, but when people propose
solutions like "binary files are much faster" or "using the FEP IO
system" or "using dump-forms-to-file", I simply cannot accept those as
solutions to my problem!  Text files *shouldn't* have to be so much
less efficient.  And I know that it has to do with processing overhead,
not transfer speeds (bytes is bytes), but my point is that OVERALL disk
IO, for whatever type of file, should be more efficient than it is.

    This has nothing to do with anything.  "TEXT" is just bytes.
    We're all talking about bytes when we're talking about
    transfer speeds.

Yes, but I'm thinking about more than just raw transfer speeds.  Maybe
I'm being too vague in talking about the "feel" of the system reading a
file.

    Is your point that you WANT humans to be able to read the
    data with text tools?  If so, you can just say so; it would
    be a legitimate point.  Obviously, you'll pay a high price
    for this convenience in performance and disk space; my
    personal experience is that it's usually not worth it for
    large amounts of data.  There's something inherently NOT
    human readable about that much data...

good point, but the main reason is to facilitate data interchange,
although human-readability is necessary at times for checking numbers in
a file (not typically the whole file, but maybe the first few entries,
or looking for specific data in the file).

	When I'm dealing with large amounts of numerical data, the last thing I
	want to do is to use some funky binary format.  Typically I get geometry
	files off of a UNIX system or an IBM mainframe, process them on VMS to
	get volume descriptions, then load them into the symbolics and crunch on
	the connection machine.  The only way to do file interchange between
	different pieces of code on different systems is to use ASCII files

    What's funky about 8-bit bytes?  It happens to be the

there's nothing funky about 8-bit bytes.  You miss my point, I think.
When I want to write floating point numbers on a VAX and read them in
on the symbolics, as far as I'm concerned my only reasonable option is
to write the data into a TEXT file and to read it in as a TEXT file.
Whether this is 8 bits or 7 bits or whatever doesn't matter.  The
FUNKINESS I'm talking about is in the binary representation of
floating-point values, which I want to stay away from.
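
The write side of that text interchange is straightforward; a minimal
sketch (WRITE-FLOATS-AS-TEXT and the pathname are mine, not from this
thread), writing one number per line so a VAX, an IBM mainframe, or a
Lispm can all parse it back:

(defun write-floats-as-text (floats pathname)
  ;; ~F prints a float in plain decimal notation, with nothing
  ;; Lisp-specific in the output file.
  (with-open-file (stream pathname :direction :output
                                   :if-exists :supersede)
    (dolist (x floats)
      (format stream "~F~%" x))))

e.g. (write-floats-as-text '(1.5 -2.25 30000.0) #P"karl:>kosma>data.text")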

    industry's most reliable, most portable, most standard, and
    most EFFICIENT format.  (Unless you're using network mail as
    a transfer medium...)  Especially when you get IBM mainframes
    in there; they like to talk EBCDIC, you know.  ASCII is
    certainly *NOT* the *ONLY* way to get data between systems or
    applications, and it's certainly *NOT* what I would recommend
    to you, if you have control over the applications in
    question.  Obviously, if you don't, then my recommendation is
    moot, so just say so and we can ignore it.

The only thing I have control over is my own applications which run on
the Symbolics/Connection Machine.  I have to be compatible with the rest
of the world in terms of formats of data files which I read (and to a
lesser extent, write).  I thought that I made this reasonably clear.

    [BTW, I consider the only truly portable file type between
    Common Lisps is :ELEMENT-TYPE '(UNSIGNED-BYTE 7).
    :ELEMENT-TYPE 'STANDARD-CHAR comes in second place, with the
    aid of file translation software as needed.]
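
A minimal illustration of that recommendation (the pathname is
hypothetical):

(with-open-file (stream #P"karl:>kosma>raw.data"
                        :element-type '(unsigned-byte 7))
  ;; READ-BYTE returns an integer in 0-127; both ends of a transfer
  ;; can agree on that regardless of their native character set.
  (read-byte stream nil nil))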

	which **SHOULD** run at least one or two hundred K/sec.  

    I'll buy those figures, for reasonably coded applications.

    Calling READ, however, does not qualify.  I don't care if
    you're talking about Symbolics, or Franz on a Sun 4, I would
    NEVER EVER want to load 5 Mbytes of data into any Lisp via
    READ if I could avoid it.

I don't call READ.  What I've been doing is reading in everything as
strings and doing the parsing myself.

	I think this is totally reasonable and that the Symbolics I/O
	times are incredibly poor.  I couldn't believe somebody (in
	another slug message) talking about 40 minutes to read a 5 MB
	file like it was acceptable!!!!  Pure garbage!!

    Sure, if you take the slowest technique with the most
    overhead, you can spend 40 minutes.  You missed my point.  It
    does NOT take 40 minutes to read a 5MB file.  YOU CAN WASTE
    30 MINUTES by doing all sorts of silly things, like checking
    for lists, and symbols, and rationals, and arrays, and
    read-time evals, readmacros, readtable syntax, *READ-BASE*,
    and Symbolics extended characters, and Japanese and ...

    And it still doesn't add up to the times you originally reported.
    THAT was my point, not that 40 minutes was wonderful.

I don't recall reporting any times.  Once again, I think you have me
confused with somebody else.  I was just commenting on your response.  

Okay, okay.  Here are some numbers; enough of this qualitative stuff:

Here's the file I'm going to edit.  It was the biggest one I had readily
available.

      arppro13.zuu.1   53 235644(8)    !   11/10/89 12:54:56 (11/10/89)   Kosma

     (scl:time (zwei:find-file #P"blaise:>kosma>dl>arppro13.zuu"))
Evaluation of (ZWEI:FIND-FILE #P"BLAISE:>kosma>dl>arppro13.zuu") took 17.556589 seconds of elapsed time including:
  0.755 seconds processing sequence breaks,
  2.358 seconds in the storage system (including 0.032 seconds waiting for pages):
    0.365 seconds processing 1657 page faults including 1 fetches,
    1.994 seconds in creating and destroying pages, and
    0.000 seconds in miscellaneous storage system tasks.
9,083 list, 505 structure words consed in WORKING-STORAGE-AREA.
102,689 structure words consed in EDITOR-LINE-AREA.
45 list, 32 structure words consed in *ZMACS-BUFFER-AREA*.
NIL
#<ZWEI:FILE-BUFFER "arppro13.zuu >kosma>dl BLAISE:" 43600407>
---> 


Then I remembered that I had just done that over the ethernet (TCP), so
I tried again, first copying the file to my local machine; there was
about a second difference overall:

Evaluation of (ZWEI:FIND-FILE #P"KARL:>foo.foo") took 16.414146 seconds
of elapsed time ...

This works out to around 14K bytes/second, an abominable transfer rate
if you don't factor out the overhead.  I don't know what zwei:find-file
is doing internally, and I don't really care--it doesn't really matter.
What DOES matter is that reading this file into ZMACS takes 16 to 17
seconds, while reading the same file into emacs on my amiga takes 1 to 2
seconds, and on our sparcstation loading it into gmacs is basically
instantaneous (even I was surprised).
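
(For the record, the arithmetic: 235644 bytes / 16.4 seconds is about
14.4K bytes/second, so the 14K figure checks out against the directory
listing above.)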

    Look, in *ANY* Lisp, if you want to input data reasonably efficiently,
    *STAY AWAY FROM THE READ FUNCTION*.

    And if your language implementation provides you with a way to do
    input without consing (Common Lisp doesn't, Symbolics *DOES*),
    avoid doing character-at-a-time IO.  This holds true whether you
    are programming in Lisp, C, Pascal, or TECO.

    Actually, a lot of systems don't really provide any way to do
    character-at-a-time IO; you have to do a block at a time, and
    pull the characters out yourself.  There's good reason for
    this: in many operating systems the overhead for system calls
    makes character-at-a-time IO prohibitively expensive.
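
The Symbolics version of that block-at-a-time pattern is the buffered
stream interface (:READ-INPUT-BUFFER comes up again below).  A hedged
sketch from memory; the exact message names and contracts should be
checked against book 5:

(defun count-newlines (stream)
  ;; Borrow the stream's own input buffer and scan it in place, so no
  ;; per-character message send ever happens.  :READ-INPUT-BUFFER is
  ;; assumed to return NIL at EOF, else the buffer plus start and end
  ;; indices; :ADVANCE-INPUT-BUFFER tells the stream the buffer has
  ;; been consumed.
  (let ((count 0))
    (loop
      (multiple-value-bind (buffer start end)
          (send stream :read-input-buffer)
        (when (null buffer)
          (return count))
        (loop for i from start below end
              when (char= (aref buffer i) #\Newline)
                do (incf count))
        (send stream :advance-input-buffer)))))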

    I don't mind complaint sessions about Symbolics' IO; there's
    plenty of grounds for rattling their cage about IO.  But
    let's try to keep it real, OK?  Separate the issues of coding
    style from IO.  You can complain about READ being slower than
    READ on some other system, but you haven't done that.  You
    haven't presented any data for such a conclusion.  I don't
    know if it is or it isn't.  I didn't investigate that far,
    and you didn't either.  Nobody else in this discussion has
    yet measured READ exclusive of IO on different systems, either.

    You originally reported that it took many hours to read the data.
not me!
    I pointed out that it was really more like 40 minutes, even if you
    use the same poor techniques I argued against using.  I did not say
    40 minutes was great.  I only said it was better than what YOU reported.

okay, so what's the BEST it could be???  Give me a piece of code which
can read integers and floats from a text file FAST and I'll gladly use
it!
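
For concreteness, here is one shape such code could take: a portable
sketch, not a tuned implementation.  PARSE-FLOAT-FIELD is an invented
helper (Common Lisp has no standard float parser) and handles only plain
decimals like -12.34, with no exponent syntax; READ is never called.  A
tuned Symbolics version would read lines into a reused buffer with
:STRING-LINE-IN, per the suggestion below.

(defun parse-float-field (string &key (start 0) (end (length string)))
  ;; Parse a simple decimal number like "-12.34" out of STRING without
  ;; calling READ.  Skips leading blanks; returns a single-float.
  (let ((pos start) (sign 1) (int 0) (frac 0) (scale 1))
    (loop while (and (< pos end) (char= (char string pos) #\Space))
          do (incf pos))
    (when (and (< pos end) (member (char string pos) '(#\+ #\-)))
      (when (char= (char string pos) #\-)
        (setq sign -1))
      (incf pos))
    (loop while (and (< pos end) (digit-char-p (char string pos)))
          do (setq int (+ (* int 10) (digit-char-p (char string pos))))
             (incf pos))
    (when (and (< pos end) (char= (char string pos) #\.))
      (incf pos)
      (loop while (and (< pos end) (digit-char-p (char string pos)))
            do (setq frac (+ (* frac 10) (digit-char-p (char string pos))))
               (setq scale (* scale 10))
               (incf pos)))
    (* sign (+ int (/ (float frac) (float scale))))))

(defun read-numbers-from-file (pathname)
  ;; One READ-LINE per line, then scan each line in place for
  ;; whitespace-separated fields.  Integers could go through
  ;; PARSE-INTEGER the same way.
  (with-open-file (stream pathname)
    (let ((numbers '()))
      (loop for line = (read-line stream nil nil)
            while line
            do (let ((pos 0)
                     (len (length line)))
                 (loop
                   ;; Skip whitespace between fields.
                   (loop while (and (< pos len)
                                    (member (char line pos)
                                            '(#\Space #\Tab)))
                         do (incf pos))
                   (when (>= pos len)
                     (return))
                   (let ((field-end (or (position-if
                                          #'(lambda (c)
                                              (member c '(#\Space #\Tab)))
                                          line :start pos)
                                        len)))
                     (push (parse-float-field line :start pos :end field-end)
                           numbers)
                     (setq pos field-end)))))
      (nreverse numbers))))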

    Look, all I'm trying to do here is introduce a bit of rigor
    into this discussion, and a bit of basic software engineering
    and efficient programming.

    I'm trying to separate things out:

    On one side I want to gather all the LEGITIMATE complaints
    about Symbolics IO, built on a LEGITIMATE understanding of
    how Symbolics compares with the rest of the industry.  If
    Symbolics is really only 5X slower than our consensus
    reasonable value R, I don't want to waste time arguing
    about why it's 100X slower.

    On the other side, I'm perfectly happy to discuss why somebody's
    application runs 100X slower than they expected.  If a factor of
    2 comes from Symbolics IO, a factor of 5 comes from READ being
    slower than it should be, and another factor of 10 comes from
    using READ on a :ELEMENT-TYPE 'CHARACTER stream when you should
    be doing something else, then let's track down each factor
    separately.

sounds good to me.

    What I'm NOT happy with is the current discussion.  After trying
    to separate out the various issues, you seem to be trying to put
    them back together into a fairly non-productive "But Symbolics IO
    is SLOW!", to maximize how much complaint you can lay at Symbolics'
    doorstep.  I think that wastes our time, and is likely to alienate
    Symbolics, as well.

    Here's my synopsis of the discussion so far:

    NOBODY has argued that Symbolics IO is not slow.

    We've argued that it's not as slow as you originally complained.

I think you definitely need to check who you are calling "you"--there
have been several people in this discussion.

    We've pointed out that you've made it even slower by poor
    implementation choices.

Excuse me, but I'm not so sure my implementation is so poorly chosen.  I
don't know that I've even described my IO problems or the constraints
within which I must work.  Again, this may be the "you" problem.

    We've measured how slow it is.

    We've discussed WHY it's slow.

    We've heard from Symbolics about how they're making it less slow.

    We've speculated as to why your timings are so much worse than
    even our measurements of your techniques.  (I'm curious to know
    if these speculations prove helpful; I hope they do).

    Anyway, I'll be very interested after you make your program
    either use a binary file or, if you feel you can't do that,
    use :STRING-LINE-IN and PARSE-INTEGER, or :READ-INPUT-BUFFER
    with some care about buffer boundaries and PARSE-INTEGER, and
    then come back and tell us:  "It takes X seconds, and that's
    too slow".  That will be good and useful, and I hope you do it.

where does something like READ-LINE fit into all this? I've used that in
some places.

    Or you may come back and say that it takes too long to transfer
    your data over the net.

not at all, the net performance (as seen in my ZMACS example above) has
very little to do with it.

    More likely, you'll do both!

    It might be useful if we could come to some consensus as to what
    IO performance level we think Symbolics should have.  Ideally, that
    should be referenced to other existing hardware, languages, and
    implementations, and prioritized by how important we (slug) think
    the different areas are.  READ's performance vs character-at-a-time
    vs local buffered string-character vs network vs ...

    Also, if you want to complain that what you had to do was too
    hard, and you'd like tool X to make it easier, that might be
    interesting, too.

I admit that I haven't looked carefully at file IO for some time, but a
couple of months ago, when I was in the midst of my file IO problems, I
carefully read book 5 and tried to do it the best I could with what
information I had.  Probably our IO code can be made better, but I have
no idea by how much...I just looked at one piece of code, which is
actually using read-line, not read; we then go ahead and pull out
fields as reals or ints based on our knowledge of the data file's
format.  On the other hand, ZMACS reading a file is probably as
efficient as it's going to get, and so that may be a better basis for
comparison (at least semi-quantitative comparison).
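
To make that pull-fields-out step concrete, it amounts to something like
the following; the record layout is invented for illustration, and
PARSE-FLOAT-FIELD is the made-up helper sketched earlier in this
message:

(defun parse-geometry-record (line)
  ;; Hypothetical fixed-column layout: an integer id in columns 0-7,
  ;; then two fixed-width real fields.  PARSE-INTEGER skips the pad
  ;; blanks itself; no READ anywhere.
  (values (parse-integer line :start 0 :end 8)
          (parse-float-field line :start 8 :end 24)
          (parse-float-field line :start 24 :end 40)))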

    Anyway, sorry for the length of this missive.  I've spent too much
    time trimming out unnecessary flamage; I don't have time left to
    trim out unnecessary verbiage!

Well, when you're going to flame, please be careful to attribute
information to the correct sources.  I don't appreciate being pinned
with a bunch of claims which I never made (or with "poorly implemented
code" when that remains to be seen).

by the way, the same goes for me...sorry for the length and for any
unnecessary and undeserved flaming I've done.  


And I hope that some of this was constructive.


Monty Kosma
Lockheed Advanced Research
kosma@alan.decnet.lockheed.com