- To: kmp@SCRC-STONY-BROOK.ARPA
- Subject: Issue PEEK-CHAR-READ-CHAR-ECHO
- From: Kim A. Barrett <IIM%ECLA@ECLC.USC.EDU>
- Date: Sun 12 Mar 89 16:09:23-PST
- Cc: cl-cleanup@SAIL.STANFORD.EDU, iim%ECLA@ECLC.USC.EDU
The whole discussion of operating-system considerations in the proposal just
confuses the issue and isn't relevant to what the actual proposal says, so
that's a problem with the proposal. It may have led some people to vote for
it because they were confused into thinking that the operating-system
considerations were relevant. Note that I may have fallen into this trap
myself. When I wrote the example, I was thinking in terms of a reader that
used read-char/unread-char rather than peek-char. Looking at the proposal
again, I note that it does mention the reader.
However, I think the discussion of the cost of this proposal is pretty weak.
First, an implication of the proposal is that peek-char is no longer
equivalent to read-char followed by unread-char. This means that all code
which uses read-char/unread-char needs to be re-examined, and probably
modified. That's a potentially large amount of code. Add to that a
performance effect that is not mentioned at all in the proposal: when
parsing input, it is frequently the case that you get stretches where all
the characters are going to be used.
An example might be a reader's subroutine for reading an extended token. If
the two forms of 'peeking' are equivalent, then the token reader can iterate
on read-char until it finds a terminator, unread it, and proceed. Under
this proposal, it has to iterate on peek-char, decide whether it likes the
character, and if so call read-char to really get it. For such important
subroutines in the reader as the token reader, the string reader, the
whitespace scanner, and similar functions, this could mean something on the
order of a factor-of-2 performance hit. Slowing down important parts of the
reader by a factor of 2 is not likely to make anyone smile (except those C
lovers out there :-).
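The two loop shapes can be sketched as follows. This is Python rather than
Lisp, purely to make the pattern concrete; the function names are hypothetical,
and seek/tell on an in-memory stream stand in for unread-char and peek-char:

```python
import io

def read_token_unread(stream, terminators):
    """Token reader in the read-char/unread-char style:
    one stream operation per character, pushing back the terminator."""
    token = []
    while True:
        ch = stream.read(1)
        if ch == "" or ch in terminators:
            if ch != "":
                stream.seek(stream.tell() - 1)  # "unread" the terminator
            return "".join(token)
        token.append(ch)

def read_token_peek(stream, terminators):
    """Token reader in the peek-then-read style the proposal forces:
    two stream operations per character actually consumed."""
    token = []
    while True:
        pos = stream.tell()
        ch = stream.read(1)   # "peek" = read ...
        stream.seek(pos)      # ... then restore the position
        if ch == "" or ch in terminators:
            return "".join(token)
        stream.read(1)        # really consume the character this time
        token.append(ch)

s1 = io.StringIO("foobar baz")
s2 = io.StringIO("foobar baz")
print(read_token_unread(s1, " "))  # foobar
print(read_token_peek(s2, " "))    # foobar
```

Both readers leave the stream positioned at the terminator, but the second
touches the stream roughly twice per character, which is where the factor
of 2 comes from.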
We're not advocating that it be left vague. I should have taken the time to
present our counter-proposal, but I was in a hurry, and this note has been
sitting on my desk for over a month now waiting for me to do something about
this issue, so I didn't.
Our position is that LAST-READ-CHAR is the proper behavior, with an additional
restriction that it is an error to do output to a stream between the calls to
read and unread. As a hint, here is how we've implemented this behavior.
1. Define two operations on streams, ECHO and UNECHO.
2. echo-streams, when reading a character, apply echo to the output stream and
   the character. unread on echo-streams calls unecho on the output stream
   and the character, in addition to passing the unread along to the input
   stream.
3. Other meta-streams simply pass these operations along to their output side.
4. data-streams have two choices, depending on whether they have the
   capability to 'back out' output. If they can back it out, then echo is
   equivalent to write-char, and unecho backs it out. If they can't, then
   they record the echo in a slot, first writing out any already-pending
   echo. unecho clears the pending-echo slot. All normal output operations
   first write any pending echo, and a normal close also forces out pending
   echo.
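The pending-echo case (step 4, for a sink that cannot back out output) can be
sketched as below. Again this is Python rather than Lisp, just to show the
state machine; the class and slot names are invented for the illustration:

```python
class EchoOutput:
    """Output side of an echo-stream over a data-stream that cannot
    back out output: echoes sit in a one-slot buffer until something
    forces them out, so an unecho can still retract them."""
    def __init__(self):
        self.written = []     # stand-in for the real data sink
        self.pending = None   # the pending-echo slot

    def _flush_pending(self):
        if self.pending is not None:
            self.written.append(self.pending)
            self.pending = None

    def echo(self, ch):
        # A new echo first forces out any earlier pending echo,
        # then becomes the new pending character.
        self._flush_pending()
        self.pending = ch

    def unecho(self, ch):
        # unread on the echo-stream: drop the pending echo.
        assert self.pending == ch, "unecho of a character never echoed"
        self.pending = None

    def write(self, ch):
        # Normal output first writes any pending echo -- which is why
        # output between the read and the unread must be 'is an error'.
        self._flush_pending()
        self.written.append(ch)

    def close(self):
        self._flush_pending()

out = EchoOutput()
out.echo("a")      # read-char echoes 'a' (held pending)
out.unecho("a")    # unread-char retracts it
out.echo("a")      # the character is read again
out.close()
print(out.written)  # ['a'] -- echoed exactly once despite read/unread/read
```

A read/unread/read sequence thus echoes the character exactly once, which is
the LAST-READ-CHAR behavior we're arguing for.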
There is potentially more hair involved, intended to either support or
complain about improper usage: calling unread after peek, doing output
between the read and the unread, etc. Note that this depends on the
single-unread restriction in order to work right in all cases.