Pipelining streams
Date: Thu, 22 Oct 87 17:42 EDT
From: J. Scott Penberthy <JSP@IBM.COM>
Dan Weinreb's recent message pointed to some little-known routines that
can be found in the basic source; I'm now using them. However, Dan's
routines are limited by a shared buffer used by the input and output
stream functions. Once this buffer is full, I imagine that the input
and/or output functions will stop until the buffer is cleared. A "true"
pipe would dynamically expand this buffer as necessary, shrinking it
down to some reasonable size when all is finished. Since I don't have
the source, it's rather hard to enhance the existing code to more fully
simulate a pipe.
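
The behavior being asked for here, a buffer that simply grows so the
writer never has to wait, can be sketched with an unbounded queue.
This is only an illustration (the class and method names below are
made up, and it is ordinary Python, not the Genera stream functions):

    import queue

    # Illustrative sketch only: a pipe whose buffer expands as needed.
    # queue.Queue with maxsize=0 has no size limit, so write() never
    # blocks, no matter how far the writer gets ahead of the reader.
    class UnboundedPipe:
        _EOF = object()                  # sentinel marking end of stream

        def __init__(self):
            self._q = queue.Queue(maxsize=0)   # 0 means "no limit"

        def write(self, item):
            self._q.put(item)            # never waits; buffer keeps growing

        def close(self):
            self._q.put(self._EOF)

        def read(self):
            item = self._q.get()         # waits only when buffer is empty
            return None if item is self._EOF else item

The cost is that nothing bounds how much memory such a buffer can
consume, which is the point taken up below.
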
I don't agree with your statement that a "true" pipe would expand the
buffer "as necessary", i.e. keep growing it indefinitely as input
arrives. The idea of a pipe is that there are two processes, a
producer and a consumer, which are expected to run concurrently. The
buffering in the pipe allows the producer to get somewhat ahead of the
consumer. However, if the producer is consistently producing faster
than the consumer is consuming, the limit on the size of the buffer
causes the producer process to wait so that the consumer process can
catch up.
This is known as "flow control", and is present in every real-world
network protocol in order to provide a reasonable limit on the amount of
memory that must be allocated for buffering. In my opinion, that's what
a "true" pipe is. While it might be nice for the buffer to be able to
expand dynamically up to a limit, I don't think it is desirable for it
to keep expanding until you run out of virtual memory.
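
What flow control means here can be sketched the same way: give the
buffer a limit, and a writer that gets too far ahead simply waits
until the reader catches up. Again, this is just an illustration in
Python with made-up names and sizes, not the actual stream code:

    import queue
    import threading
    import time

    buf = queue.Queue(maxsize=8)    # the pipe's buffer, limited to 8 items

    def producer():
        for i in range(100):
            buf.put(i)              # blocks whenever the buffer is full,
                                    # i.e. whenever we are 8 items ahead
        buf.put(None)               # sentinel: end of stream

    def consumer():
        while True:
            item = buf.get()
            if item is None:
                break
            time.sleep(0.01)        # a deliberately slow consumer

    threading.Thread(target=producer).start()
    consumer()

However fast the producer runs, the memory tied up in the buffer never
exceeds the eight-item limit; dropping the maxsize argument gives the
ever-growing behavior discussed above.
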
In what way does the limit on the size of the buffer cause your program
to fail to work?