
[Re: early error detection]



    Maybe you can enlighten me as to the documentation I SHOULD be reading
    (which is so clear as to the purpose of declarations) -- apparently
    it is not the documentation on page 310 of Steele, nor that on page 153.

Apparently you missed some of my mail.  Steele states that type-checking
in such cases is a decision left up to the implementors.  Symbolics has
documentation too, by the way.  <Select> D Show Doc defstruct.

	My complaint was about a patently false claim
    submitted by a Symbolics employee -- namely, that the Symbolics "architecture"
    detects type errors at the earliest possible moment.

You are beating an imaginary dead horse.  As I said in other mail to
you, I don't believe anyone has made this patently silly claim.  The
hardware (or architecture) doesn't "do" anything by itself, it simply
provides the support for programs using it to do such things much more
efficiently than is possible using conventional hardware.

	It does not detect
    them at the earliest possible moment, and it does not necessarily detect them
    at all.  It detects a subset of all type errors -- namely, those involving
    a selected set of predefined types (e.g., number) used as operands to a
    selected set of predefined operations (e.g., +).

I would be interested to see an example of the system "not detecting"
such an error for reasons other than hardware breakage.  It's not at all
clear to me what you think happens in lieu of an arg type error - I've
certainly never observed this myself in several years of using Symbolics
systems.  In Unix-based lisps, on the other hand, it's typically quite
simple to obtain trash-your-lisp-environment-and-die behavior by passing
the wrong type argument to a function.
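
To make the distinction concrete, here is a small sketch in standard Common Lisp (the structure name SHIP is invented for illustration).  The first form involves a predefined operation on a predefined type, which every conforming Lisp traps; the second involves a declared structure slot type, whose checking Steele explicitly leaves to the implementor:

```lisp
;; Operand checking on a predefined operation: this is the class of
;; error the hardware traps, at essentially no cost.
(+ 'a 1)                                  ; signals an error: A is not a number

;; A declared slot type on a structure: per CLtL, whether this
;; declaration is enforced is up to the implementation, so the
;; assignment below may be silently accepted.
(defstruct ship
  (tonnage 0 :type number))

(setf (ship-tonnage (make-ship)) 'heavy)  ; may or may not signal an error
```

The point at issue is exactly the second case: the declaration is a promise from the programmer, not a request for a check.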

    I am still at a loss to understand how the following two reactions to my
    initial message can be self-consistent:
    a) doing more type checking than is done by the Symbolics architecture
       would be TOO EXPENSIVE

I haven't said this, nor have I noticed anyone else saying it.  I did
say in my previous response to you that I think Symbolics made the
correct choice in not adding the overhead of a function call to each
structure slot update just for the sake of type-checking.  Perhaps there
could be a software switch to control this, but frankly I don't think
it's an issue Symbolics should bother spending any time or effort on,
resource limits or no.
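
The kind of software switch suggested above might look like the following sketch, in portable Common Lisp.  The switch name *CHECK-STRUCTURE-TYPES* and the writer SET-SHIP-TONNAGE are invented here for illustration; nothing of the sort is being claimed to exist in the Symbolics system:

```lisp
;; Hypothetical switch: when true, slot writers verify declared slot types.
(defvar *check-structure-types* t)

(defstruct ship
  (tonnage 0 :type number))

;; A checking writer for one slot.  With the switch off, the only cost
;; is a special-variable test; with it on, each update pays for the
;; CHECK-TYPE, which is the per-update overhead at issue.
(defun set-ship-tonnage (ship new-value)
  (when *check-structure-types*
    (check-type new-value number))        ; signals an error on mismatch
  (setf (ship-tonnage ship) new-value))
```

Note that routing all updates through such a writer is precisely the function call per slot update whose cost is being debated.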

    b) if I want more type checking, I should code in terms of FLAVORS even where
       structures would be functionally adequate.
    I find it very hard to believe that adding software type checking for
    declarations involving structure types, as a way of catching more, though
    certainly not all, type errors, cannot be made considerably more efficient
    than (b).

Perhaps it can, though I doubt the difference would be that noticeable.
Is this really an issue?  Do you have some reason to think that this
approach is unacceptably slow?  I've never found this to be the case;
the only good reason I've ever seen for using structures in preference
to flavors is CLtL compatibility, not performance.  If you're really
trying to get the maximum speed, you shouldn't take either of these
approaches anyway, so arguing about them on performance grounds seems
pretty pointless.