Subject: early error detection
Date: Thu, 16 Feb 89 11:33:32 PST
From: goldman@vaxa.isi.edu
Type checking also acts as a way to trap certain classes of errors early,
e.g. trying to do arithmetic on character strings. On stock hardware such
an error might never be detected; it could simply feed wrong results into
a computation. This feature strikes me as being entirely analogous
to parity on memories; it lets you know AT THE EARLIEST MOMENT that something
has gone wrong.
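To make the point concrete (a made-up fragment, not one of the elided
examples):

    ;; On tagged hardware the add instruction itself traps here,
    ;; at the earliest possible moment:
    (+ 3 "abc")        ; => error: "abc" is not a number

    ;; On stock hardware, compiled with checking off, the same call
    ;; might just add 3 to the string's address and return garbage.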
This early catching of errors is why I think the traditional answer to
slow type checking (providing a compilation mode that turns it off) is
a mistake; it is precisely when your program is supposedly in production
that you most need to know that it has encountered such a bug. Even if you
handle the error with a trap handler and continue running, at least you have
the ability to know that it happened, rather than just blithely ignoring it.
This sounds good -- I wish Symbolics took it even more seriously!
Look at the examples below.
:
:
Your examples, which I've elided for brevity, point out that existing
type declarations for defstruct slots aren't enforced in our
implementation. I believe those type declarations were originally
intended as a way for the compiler to generate the efficient/correct
instructions if you performed arithmetic operations on the contents of
those slots. Since our architecture does this for us without the
compiler's needing to generate different instructions, these type
declarations are ignored. I believe this could be considered either
a bug or a feature, depending upon your point of view.
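For concreteness, the unenforced behavior presumably looks something like
this (my own made-up structure, not the elided examples):

    (defstruct ship
      (speed 0 :type integer))   ; slot declared to hold an INTEGER

    (setf (ship-speed (make-ship)) "fast")
    ;; No complaint at the store: the :TYPE declaration is not enforced.
    ;; The bad value surfaces only later, when some arithmetic operation
    ;; touches the slot and the hardware sees a string's tag bits.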
It sounds to me as though you are arguing that store and fetch operations
should respect these type declarations; the only way I can imagine
implementing that efficiently would be to add even more tag bits to
represent the allowable types for a cell. Remember, unlike many other
Lisp environments, we have locatives: it is legal at runtime to pass a
locative to one of these cells (or, equivalently, to have another
structure that shares storage with the first but lacks an equivalent
type declaration) to some function compiled in an environment that knew
nothing about the restrictions on the use of this slot.
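A sketch of the aliasing in question, using the SHIP structure above
(LOCF is the locative-taking form; STORE-ANYTHING is a hypothetical
separately compiled function that knows nothing about SHIP):

    ;; The SPEED slot is declared INTEGER, but nothing stops us from
    ;; taking a locative to the cell and handing it to foreign code:
    (store-anything (locf (ship-speed (make-ship))))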
The only alternative is to generate type checking code around every
slot access, which sounds like an incredible performance hit.
BTW, neither fixing the defstruct accessors to generate this code nor
writing an equivalent macro that provides this checking sounds to me like
a hard or terribly time-consuming project.
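For what it's worth, a minimal sketch of such a macro in portable Common
Lisp (all the names here are invented for illustration):

    (defmacro define-checked-setter (struct slot type)
      ;; Defines SET-<STRUCT>-<SLOT>, which CHECK-TYPEs its value
      ;; before storing through the ordinary defstruct accessor.
      (let ((setter   (intern (format nil "SET-~A-~A" struct slot)))
            (accessor (intern (format nil "~A-~A" struct slot))))
        `(defun ,setter (object value)
           (check-type value ,type)
           (setf (,accessor object) value))))

    (define-checked-setter ship speed integer)
    (set-ship-speed (make-ship) "fast")
    ;; => signals a TYPE-ERROR at the store, not at some later use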
It appears to me that you are serious about catching as many errors as
possible in hardware (which is fine) and not serious at all about
catching the others.
My parity analogy is still apt here: if the parity is bad you KNOW the
memory word is wrong, but if it is good the value in memory might still
be semantically wrong, i.e. contain a value that is meaningless or wrong
according to some higher-level criterion, so you can't know that it is
good. Similarly, it is clearly semantically incorrect to add 1 to NIL,
so we detect that; but it is the software that must detect that 8 isn't
a legal octal digit.
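In standard Common Lisp terms:

    (digit-char-p #\7 8)   ; => 7    a legal octal digit
    (digit-char-p #\8 8)   ; => NIL  not one -- and no tag bit can say so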
I contend that most of my errors that are currently
caught by the hardware could be caught even earlier if the compiler
inserted the obvious checks.
Not in the face of dynamically created and compiled code, separately compiled
code, and arbitrary use of locatives and structure sharing. Since we want
all those features, we have to give up compiled type checking (in Lisp).
Of course, our Pascal, Ada and C products do as much type checking as
those languages require at compile time.
Personally, I would MUCH prefer having
compiler modes that let me develop code with all that extra checking
being performed than having my development and production modes be
identical, both catching ONLY the very limited set of type errors your
tagged hardware detects.
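Common Lisp already has a standard hook for exactly such modes, the
OPTIMIZE declaration; a sketch of how the two modes might be requested
(what checks an implementation actually inserts at high SAFETY is, of
course, up to the implementation):

    ;; Development: ask the compiler to insert the obvious checks.
    (proclaim '(optimize (safety 3) (speed 0)))

    ;; Production: let the compiler trust declarations and elide checks.
    (proclaim '(optimize (safety 0) (speed 3)))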
In summary, Common Lisp (even before CLOS) has provided a rich language
of TYPEs that extends type correctness beyond a small number of primitive
types (for which you have tag bits) and primitive operations (for which
your hardware checks those bits).
Again, as I understand it, the focus in Common Lisp was more toward allowing
implementations to generate special instructions in those cases, rather
than enforcing those types, but since I wasn't personally involved I could
be wrong. I wonder how many Common Lisp implementations would actually
catch the type mismatches you described, and how many would just generate
code assuming that the slots contained integers and compute wrong answers?
Programmer-defined operations
deal with both those types and, more commonly, with programmer-defined
types.
Compile-time (where possible) and run-time (where not) detection of violations
of these typed interfaces is a big win for a developer, just like catching
"illegal second argument to +" or "too many parameters passed to FOO".
I agree with this 100%; one of the reasons we use flavors a lot is for
this benefit. Particular flavor instances respond to particular
operations; if you try to perform an operation on an object of the wrong
type, the flavor system causes a trap saying the operation is illegal.
Just as with hardware type checking, this happens only when you try to
operate on the data, not when you store or fetch it, because, as I said
above, that seems hard to do efficiently.
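From memory, the flavors version of that protection looks roughly like
this (the flavor and method are invented for illustration):

    (defflavor vessel (speed) ()
      :initable-instance-variables)

    (defmethod (vessel :speed) ()
      speed)

    (send (make-instance 'vessel :speed 30) :speed)   ; => 30
    (send "not a vessel" :speed)
    ;; => traps: a string does not handle the :SPEED operation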