

I am in favor of making the behavior of CL as well specified as
possible. I also think that users should be discouraged from taking
advantage of subtle details. I don't think the two are incompatible; one
is a matter of portability and more complete programming language
semantics, and the other is a matter of good programming practice.

For example, I think it might be reasonable for *compilers* to take
advantage of the explicitness of the language semantics, even if *users*
are encouraged to avoid doing so. It might well be useful to know that
(setf (getf a b) c) and (car x) are order-independent operations; this
can be deduced if (SETF (GETF ...) ...) is constrained to have only
CDR-class side-effects.
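As a sketch of the point (the variable names and values here are mine,
not from any proposal): if (SETF (GETF ...) ...) may only modify list
structure reachable through the plist place itself, it cannot disturb a
cons reachable only through some unrelated variable, so a compiler is
free to evaluate the two forms in either order.

```lisp
;; Hypothetical illustration: A is a property list, X an unrelated cons.
;; The SETF below side-effects only A's plist structure, so (CAR X)
;; yields the same result whether it is evaluated before or after.
(let ((a (list :size 1))
      (x (list 'head 'tail)))
  (setf (getf a :size) 2)   ; constrained side effect on A's plist only
  (car x))                  ; unaffected; order independent of the SETF
;; => HEAD
```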

The performance argument given is pretty weak. Perhaps the right
solution for DLA would be to have a SCL:NREVERSE which had the desired
(undefined) side-effect behavior? To be taken seriously, I'd like to
hear some actual statistics, e.g., is there any evidence that this would
make more than a 5% difference in total runtime in any implementation
for any benchmark?
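For contrast, standard NREVERSE already has exactly the "undefined"
side-effect behavior in question: it is permitted to destroy its
argument in unspecified ways, and only the returned value is defined.
A minimal sketch of the portable idiom this forces on users:

```lisp
;; NREVERSE may recycle the conses of its argument arbitrarily, so
;; portable code must capture the result rather than rely on the
;; original list afterward.
(let ((l (list 1 2 3)))
  (setq l (nreverse l))   ; always use the returned value
  l)
;; => (3 2 1)
```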

(I've been trying to come up with a better way of phrasing the
side-effect behavior. I think I'd be happier with an alternative to
"permitted to SETF" and "constrained"; maybe just "will only affect"
.... I know that several theses have been written on characterizing
side-effect behavior in Lisp programs, and it might be just as well for
a formal specification of CL to make reference to a more formal