
Issue BOGUS-FIXNUMS (Version 2)

I approve of BOGUS-FIXNUMS:TIGHTEN-DEFINITION for the most part, but
I have a few comments and doubts to offer.

    (3) Remove BIGNUM from the table of standard type specifier symbols on
    page 43 of CLtL.

I don't approve or disapprove of this part; I'm of two minds about it.  It
may be an unnecessary incompatible change.  If anyone opposes it, I will
go along with them.

    (4) State that the constants MOST-POSITIVE-FIXNUM and
    MOST-NEGATIVE-FIXNUM are allowed to have a value of NIL to indicate
    that the implementation does not have a particular FIXNUM
    representation distinct from other integers. 

I don't think that allowing these constants to be NIL enhances portability.
That means a lot of "gratuitous" checking for NIL would be required.  I
think a better idea is to require that these constants always have integer
values, and that if an implementation really cannot identify any efficient
range of integers, it should set these constants to arbitrary values
consistent with the requirement that (SIGNED-BYTE 16) is a subtype of
FIXNUM.  Think about a program that would use these constants to
parameterize an algorithm, as in the example taken from Macsyma that does
modular arithmetic using the largest prime modulus that fits in a FIXNUM.
What does such a program gain by allowing NIL here?
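To make the point concrete, here is a minimal sketch of how such a program
parameterizes itself.  This is my own illustration, not the actual Macsyma
code; PRIMEP, LARGEST-FIXNUM-PRIME, and MOD+ are names I invented, and a
real program would use a faster primality test than trial division.

    ;; Hypothetical sketch, not the actual Macsyma source.  Trial
    ;; division is impractically slow for large fixnums; it is shown
    ;; only to make the structure clear.
    (defun primep (n)
      (and (> n 1)
           (loop for d from 2 to (isqrt n)
                 never (zerop (mod n d)))))

    ;; The algorithm is parameterized directly by the constant.
    (defun largest-fixnum-prime ()
      (loop for candidate from most-positive-fixnum downto 2
            when (primep candidate) return candidate))

    ;; Modular addition with a fixnum modulus; the intermediate sum
    ;; stays small, which is the whole point of choosing the modulus
    ;; this way.
    (defun mod+ (a b modulus)
      (mod (+ a b) modulus))

If MOST-POSITIVE-FIXNUM could be NIL, LARGEST-FIXNUM-PRIME would need a
special case before it could even begin, and it is not at all clear what
it should do in that case.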

In fact I would think that an implementation with only one representation
for integers could define integers represented in a single bignum-digit to
be its fixnums; those are more efficient than larger integers, just not by
as large a factor as in some other implementations.
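For instance (my own illustration, assuming a 32-bit bignum digit; the
actual digit width is implementation-dependent), such an implementation
could simply declare:

    ;; Assumed 32-bit bignum digits.  These values satisfy the
    ;; requirement that (SIGNED-BYTE 16) be a subtype of FIXNUM.
    (defconstant most-positive-fixnum (1- (expt 2 31)))
    (defconstant most-negative-fixnum (- (expt 2 31)))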

    (5) Introduce a new constant, MAX-INTEGER-LENGTH.  This is the maximum
    number of bits appearing in any integer; therefore, it is an upper
    bound on the INTEGER-LENGTH function.  The value can be NIL if there
    are no limits short of memory availability.

Again I don't think allowing this constant to be NIL makes sense.  You
surely aren't saying that if this constant is non-NIL, the implementation
guarantees that there is enough memory to create at least one integer of
the specified length, let alone as many integers of that length as the
program might need.  Thus memory availability is always a limitation, and
implementations that truly have no representation limit on the number of
bits in an integer should set this constant to a value that is guaranteed
to be higher than the memory limit.
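Compare what a portable program would have to write under the two rules.
MAX-INTEGER-LENGTH here is the proposed constant, not an existing one,
and the function names are my own invention:

    ;; If NIL is allowed, every use needs a gratuitous check:
    (defun representable-p-with-nil-check (n)
      (or (null max-integer-length)
          (<= (integer-length n) max-integer-length)))

    ;; If an integer value is required, the test is direct:
    (defun representable-p (n)
      (<= (integer-length n) max-integer-length))

The first version is longer and tells the program nothing more than the
second would, if the constant were simply required to be a large integer.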

Possibly what I just said is an argument that this constant should not exist,
because there is no correct way to use it.  It tells a portable program
nothing about what it can or cannot do.

Or possibly it's an argument that under your definition, any implementation
with a non-NIL MAX-INTEGER-LENGTH would be in violation of the bottom of
CLtL page 13, which says there is no limit on the magnitude of an integer
other than storage.  By that reasoning the Symbolics 3600 would be in
violation, since its address space exceeds its bignum representation limit.
However, the bignum representation limit is large enough that numbers of
that size become impracticably slow to compute with (so memory isn't the
real limit either; the practical limit is the asymptotic speed of the
arithmetic algorithms), and I doubt that Symbolics would care to change
their bignum representation to allow larger bignums that no one could
actually use.

    Introducing a new constant to describe the maximum size of integers makes
    it possible to describe an implementation's exact limitations on the range
    of integers it supports.  This constant would also be useful for the
    description of subset implementations.

It's true that it's useful to describe these aspects of an implementation.
I'm not sure that that justifies putting the description into the Common
Lisp language, rather than English.  On the whole, I weakly oppose part 5.


    Many programmers already use FIXNUM to mean "small integer"; this
    proposal makes this usage portable, and allows programmers to use the
    FIXNUM type specifier in a way similar to how the "int" type is used
    in C. 

I.e. as an unending source of bugs and portability problems.  :-(
Maybe I'm just down on C today.