Proposal someday soon?
- To: cl-error-handling@SU-AI.ARPA
- Subject: Proposal someday soon?
- From: Steven <Handerson@CMU-CS-C.ARPA>
- Date: Sat, 17 Nov 1984 21:17 EST
- In-reply-to: Msg of 30 Oct 1984 22:11-EST from Steven <Handerson>
I've had a few additional thoughts (starting with EXCEPTION HANDLING) since my
last series of posts, and they seem to be leading up to a proposal. The
important ones are below. Any questions or comments would be appreciated.
When I'm defining a term, I use italics (just like textbooks). I quote
"cutesy" terms. This also contains one or two humorous analogies.
Is there any reason to have an explicit condition object to pass around?
If you can hide this from the user, you need not fear reusing storage
(of course, that presupposes a uniform representation...).
I'm presuming a lot by taking these to be the current consensus of the group.
The error system should probably be as powerful as we can reasonably make it.
It has been suggested that all you really need might be something like binding
a special; if so, you can easily do this yourself. The condition system is for
hairier cases. Various forms of language extensions (such as making objects
that masquerade as a new type) are also better dealt with using some other
mechanism, to prevent interactions with "normal" conditions.
We define our condition system to only handle synchronous events. Presumably
an errorful condition resulting from a (more abstract) event will be signalled
dynamically inside the call that led to it (which presumably knows how to handle
it). If not, perhaps a more complicated handler bound earlier can deduce
from the environment which event has occurred. It is the purpose of the
condition system to determine what abstract event has occurred and to invoke
the appropriate corrective code.
We want taxonomic error handling, as described by Moon. This is actually where
most of the complexity comes from. If all the conditions have arguments, then
we need some way of converting the arguments of a specific condition into those
of a more general one. This is the crux of my proposal - I think some sort of
object-system fits in well, but it should be simple and specific enough so that
nobody feels left out, or pained to implement something huge (my proposal
will be almost entirely portable code, except for the implementation-dependent
signalling of system errors).
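To make the taxonomy concrete, here is a sketch in HANDLER-BIND-style notation (all condition names are hypothetical, and none of this is the proposal's actual syntax): a specific condition inherits from a more general one, so a handler bound to the general condition also receives the specific one, using only the arguments that make sense at its own level.

```lisp
;; Hypothetical condition names; inheritance does the
;; specific-to-general argument conversion automatically.
(define-condition arithmetic-trouble (error)
  ((operation :initarg :operation :reader trouble-operation)))

(define-condition div-by-zero-trouble (arithmetic-trouble)
  ((dividend :initarg :dividend :reader trouble-dividend)))

(defun demo-taxonomy ()
  (handler-case
      (error 'div-by-zero-trouble :operation '/ :dividend 7)
    ;; Bound to the GENERAL condition; it still catches the
    ;; specific one, but only looks at the general argument.
    (arithmetic-trouble (c)
      (list :caught (trouble-operation c)))))
;; (demo-taxonomy) => (:CAUGHT /)
```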
The whole point of exception handling in most languages is to make the normal
cases go fast by simplifying the initial tests for exceptions. If some simple
test as part of the calculation can prove that there is no error, you get error
checking practically for free. Hence the idea that an errorful event is
detected via some errorful consequence. The majority of these initial tests
are things that concern the language's integrity, and so should be dealt with
anyway; things like taking the car of 3 [I think these should be called
@i(system errors)].
An event can often be viewed at several different levels of abstraction. The
lowest is at the system error that caused its detection. This can be
characterized in different ways, using the inclusion relation. However, if one
uses implication (which depends upon the code), you're dealing with different
levels of abstraction.
"It's condition SYS:DIVIDE-BY-ZERO."
"An arithmetic error."
"What does that mean?"
(looks at code) "Well, here it means the user's input is inconsistent."
Revelation! Handlers don't just handle errors; they actually determine (by
"looking at the code") whether a given (abstract) event has occurred. This is
probably why handlers want to abstain; they figure out that the abstract
condition they handle hasn't occurred. Defining errors inline would probably
be pretty clumsy; such things can be as complicated as the code. Instead, I
think that we should investigate what it means to have a handler signal another
(more abstract) condition, which can be done in existing systems (and probably
should be).
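The resignalling idea can be sketched as follows (again HANDLER-BIND-style notation, hypothetical names): a handler for the low-level condition "looks at the code", decides what the event really means here, and signals the more abstract condition in its place.

```lisp
(define-condition sys-divide-by-zero (error) ())
(define-condition inconsistent-user-input (error) ())

(defun demo-abstraction ()
  (handler-case
      (handler-bind
          ;; The "abstraction handler": it knows that, in this code,
          ;; a divide-by-zero means the user's input was inconsistent.
          ((sys-divide-by-zero
             (lambda (c)
               (declare (ignore c))
               (error 'inconsistent-user-input))))
        (error 'sys-divide-by-zero))
    ;; An outer handler that only understands the abstract event.
    (inconsistent-user-input () :ask-user-again)))
;; (demo-abstraction) => :ASK-USER-AGAIN
```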
Looked at this way, the purpose of the condition system is to determine the
appropriate abstraction of the event and @b(then) the action to be taken.
Hence it's reasonable to do things like (ERROR 'my-condition), because the
sequence of handlers can determine things about the error that might be useful
to the user. Still, I think we need to be careful with exactly what these
For example, I think it's quite possible that you'd want to signal a condition
IN CASE it's handled (at some level of abstraction). System errors of course
can't be @i(continued) in the CL manual sense, but this could be what ERROR is
for. Hence, @i(errors) (conditions signalled with ERROR) are proceedable, but
not continuable. Errors signalled with CERROR are continuable and proceedable.
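The ERROR/CERROR distinction shows up directly in code like this sketch (hypothetical function): CERROR establishes a way to continue past the signal form, so a handler, or the debugger user, can resume with the stated recovery.

```lisp
(defun careful-parse (x)
  (if (numberp x)
      x
      (progn
        ;; CERROR is continuable: returning from it means
        ;; "take the stated recovery and go on".
        (cerror "Use zero instead." "~S is not a number." x)
        0)))

(defun demo-continue ()
  ;; A handler standing in for a debugger user who says "continue".
  (handler-bind ((error #'continue))
    (careful-parse 'oops)))
;; (demo-continue) => 0
```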
The debugger is a special beast, because the user is a special beast. It
should be able to access all abstractions of the signal chain. I think how
this is to be done is pretty (error-system-) implementation dependent, but the
basic idea is that you can go down into the stack and say "what are the proceed
options for this level?" The portable code could have some hook for this.
The debugger complicates the notion of proceed types slightly. Ideally, it
should know which proceed types it can invoke (and ask the user rather than
insisting it be in the arguments). This changes with whether the handler was
invoked with ERROR, whether the proceed type returns from the signal form or
not, the signalled arguments, etc. These aren't really general problems, and
my proposal will suggest one way of dealing with them.
I've also considered something that might be analogous to Symbolics' @i(special
commands). These are like proceed types, but they're just designed to give the
debugger user more information about the error. The more of this that gets
standardized, the better, but some implementation might want to expand on them.
The worst that can happen is that they not be used in portable code. Again,
somebody might have better ideas of how to provide this kind of thing.
The next paragraph assumes you know about the Symbolics implementation of
handler binding. Basically there are several binding lists, which store pairs
of a handler function and the condition it's bound to. When an error is
signalled, the lists are cdr'ed down in order until a handler binding of the
signalled condition or a condition that includes it is encountered. Normal
bindings are consulted first, then default bindings (implementing "bind unless
condition already handled"), then "restart handlers" (self-descriptive), and
some fourth thing I forgot about.
I think there should be two kinds of handlers: the kind that handle and the
kind that deduce a more abstract event. Restart handlers and such would be
normal handlers of some abstract and serious condition, like
EDITOR:USER-INPUT-GARBAGED. This works fine if you change levels of
abstraction only after all handlers [all handlers that should, anyway] have a
crack at the current level. Thus, in addition to the normal and default
bindings, there should be an "abstraction handler" binding list which gets
looked at after the other two. Realize that "abstraction handlers" may be on
the normal binding list, for instance if a code segment doesn't want the
original error to escape to surrounding code. Basically, if a piece of code
expects an error, it should bind a handler as close to the source as possible
that, if nothing else, signals a condition that better describes the event.
I suggest that people not worry too much about reaching inside other people's
code; designing good interfaces and clean code is generally more of a win. If
you can make wine out of water, fine, but hacks only lead to more hacks (don't
give ME a bottle of water to get drunk on). No, I don't think handlers should
be examining the stack (except the debugger, but that's special).
The whole process of selecting a handler is marginally complex. A handler
selected for the originally signalled condition may be an "abstraction
handler", in which case it signals another condition. If this returns nil,
then it hasn't been handled, and the handler probably returns nil itself,
allowing some other abstraction a chance. Otherwise, it's up to the handler to
arrange that proceeding the new condition does the appropriate thing with the
previous condition object.
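In sketch form (hypothetical names): SIGNAL comes back, with NIL, only when no handler took control of the new condition, which is exactly the "hasn't been handled" case above, so the abstraction handler can decline by returning normally itself.

```lisp
(define-condition abstract-event (condition) ())

(defun demo-decline ()
  (let ((result :unhandled))
    (handler-bind
        ((arithmetic-error
           (lambda (c)
             (declare (ignore c))
             ;; If some outer handler for ABSTRACT-EVENT transferred
             ;; control, we would never get past this SIGNAL.  Since
             ;; it returns, nobody handled the abstraction, so this
             ;; handler declines by returning normally too.
             (signal 'abstract-event)
             (setf result :abstraction-declined))))
      (ignore-errors (/ 1 0)))
    result))
;; (demo-decline) => :ABSTRACTION-DECLINED
```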
If you were implementing an interpreter for some other (lispy) language in
Common Lisp, you'd want to map events in Common Lisp into those in the new
language. Handlers for system errors would push the interpretation level, and
signal some new-language condition, depending upon what was being interpreted
and the lisp system error signalled. If the interpreter functions were
quantized, there would be no reason to proceed a lisp error; the handlers could
just restart the current new-language primitive (perhaps by throwing to a catch
around all the primitive functions). In fact, the proceed types of the
new-language condition objects could do this directly, and the lisp handler
would just notice the abstraction. Unexpected errors would not get abstracted
(presuming you bound the abstraction handler close enough to the source), and
would cause the interpreter to bomb out.
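A sketch of that interpreter arrangement (all names hypothetical): a catch wrapped around each primitive, with an abstraction handler that maps the host Lisp error into a new-language condition and throws out to it.

```lisp
(define-condition new-lang-error (error)
  ((reason :initarg :reason :reader new-lang-reason)))

(defun run-primitive (thunk)
  ;; The catch wrapped around every new-language primitive.
  (catch 'primitive-restart
    (handler-bind
        ((error (lambda (c)
                  ;; Abstract the host Lisp error, then unwind so the
                  ;; interpreter can restart the current primitive.
                  (throw 'primitive-restart
                    (make-condition 'new-lang-error
                                    :reason (princ-to-string c))))))
      (funcall thunk))))

(defun demo-interp ()
  (let ((r (run-primitive (lambda () (car 3)))))  ; a system error
    (if (typep r 'new-lang-error) :restarted-primitive r)))
;; (demo-interp) => :RESTARTED-PRIMITIVE
```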