
Pipelining streams



    Date: Fri, 23 Oct 87 09:46 EDT
    From: Daniel L. Weinreb <DLW@ALDERAAN.SCRC.Symbolics.COM>


    I don't agree with your statement that a "true" pipe would expand the
    buffer "as necessary", i.e. keep growing it indefinitely as input
    arrives.

True, a pipe shouldn't grow indefinitely.  But suppose that the
programmer can't know in advance how big the pipe should be.  In the
current implementation, he must create the largest possible
(reasonable) buffer for use in the transaction.  If a transaction uses
much less than this, a lot of consing will have been done for naught.
I think that the buffer should start out at a minimal reasonable size,
and then expand to a maximal reasonable size, much like the growth of a
stack during recursive execution of a routine.
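
Something along these lines, in ordinary Common Lisp (the names
PIPE-BUFFER and PIPE-PUSH are mine, for illustration only, not any
Symbolics interface):

(defstruct pipe-buffer
  (data (make-array 64 :element-type 'character
                       :adjustable t :fill-pointer 0))
  (maximal-size 8192))

(defun pipe-push (buffer char)
  "Append CHAR to BUFFER, doubling the storage (up to MAXIMAL-SIZE)
only when it is actually full, so small transactions cons little."
  (let ((data (pipe-buffer-data buffer)))
    (when (= (fill-pointer data) (array-dimension data 0))
      (when (>= (array-dimension data 0)
                (pipe-buffer-maximal-size buffer))
        (error "Pipe buffer reached its maximal size."))
      ;; Double the buffer, but never past the maximal size.
      (setf data (adjust-array data
                               (min (* 2 (array-dimension data 0))
                                    (pipe-buffer-maximal-size buffer)))
            (pipe-buffer-data buffer) data))
    (vector-push char data)))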

    [...]

    [buffering] is known as "flow control", and is present in every real-world
    network protocol in order to provide a reasonable limit on the amount of
    memory that must be allocated for buffering.  In my opinion, that's what
    a "true" pipe is.  While it might be nice for the buffer to be able to
    expand dynamically up to a limit, I don't think it is desirable for it
    to keep expanding until you run out of virtual memory.

Right.  I don't think the buffer should eat up all the virtual memory,
either; that's foolish.  However, it would be nice to specify a
"critical size" for the dynamically expanding buffer.  Once the buffer
reaches this size, an error should be signalled, much like the error
one gets when the stack gets too big.  One of this error's proceed
options should cause the generating process to wait until the buffer
is cleared.  Of course, this whole business of expanding the
inter/intra-process buffer should be an option to the pipelining
routines.  Not every application requires this sort of flexibility,
but some do.
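
In Common Lisp terms, the proceed option might look like a restart
wrapped around the push.  WAIT-UNTIL-DRAINED is a hypothetical
blocking call, standing in for whatever the pipelining routines would
actually provide:

(defun pipe-push-checked (buffer char)
  "Push CHAR, offering a restart that waits for the consumer when
the buffer has hit its critical size."
  (loop
    (restart-case
        (return (pipe-push buffer char))
      (wait-for-reader ()
        :report "Wait until the consuming process clears the buffer."
        ;; Hypothetical blocking wait; afterwards the LOOP retries.
        (wait-until-drained buffer)))))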

    In what way does the limit on the size of the buffer cause your program
    to fail to work?

The buffer size doesn't cause my program to fail; it's just that I'd
prefer not to commit to a fixed buffer size.  Here's why.  I'm writing
a remote print server that requires the following pipeline:
  read the file and translate into the remote printer's language
      --pipe-->
  translate from Symbolics characters to remote-printer characters
  and transmit

This pipeline connection, which takes a character stream on the
Symbolics and ships it across the ethernet in a translated form, is
subject to network lossage.  Sometimes the buffer of this pipe might be
very small, while other times it might be quite large; a dynamically
expanding buffer would be quite useful, but not necessary.
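
The two stages might look something like this; in the real server each
loop runs in its own process, connected by the pipe.  The helpers
TRANSLATE-TO-PRINTER-LANGUAGE, TRANSLATE-CHARACTER-SET, and PIPE-POP
are hypothetical names, again just for illustration:

(defun producer-stage (pathname pipe)
  "Read the file and push printer-language characters into the pipe."
  (with-open-file (in pathname)
    (loop for char = (read-char in nil)
          while char
          do (pipe-push-checked pipe
                                (translate-to-printer-language char)))))

(defun consumer-stage (pipe network-stream)
  "Drain the pipe, translate the character set, and transmit."
  (loop for char = (pipe-pop pipe)         ; hypothetical blocking pop
        while char
        do (write-char (translate-character-set char) network-stream)))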