Re: issue BOGUS-FIXNUMS (initial draft)
- To: jpff%maths.bath.ac.uk@NSS.Cs.Ucl.AC.UK
- Subject: Re: issue BOGUS-FIXNUMS (initial draft)
- From: Rob.MacLachlan@WB1.CS.CMU.EDU
- Date: Tue, 12 Jul 88 14:49:53 EDT
- Cc: sandra <@cs.utah.edu:sandra@cdr>, email@example.com
- In-reply-to: Your message of Tue, 12 Jul 88 10:33:24 -0000.
Striking the FIXNUM type specifier doesn't mean fixnums cease to exist: it
just means this implementation detail is somewhat better hidden. Nothing
would prevent an application that currently uses FIXNUM from defining:
(deftype fixnum () '(signed-byte 16))
or whatever property the program was assuming FIXNUM had.
Any compiler that can't recognize this deftype as a subtype of its internal
fixnum type (if it in fact is) is broken (not to say that there aren't lots
of broken compilers).
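Whether an implementation actually recognizes such a user-defined subrange
as falling within its fixnums can be checked directly with SUBTYPEP; a small
sketch (the type name SMALL-INT is illustrative only):

```lisp
;; Sketch: a portable 16-bit integer type, and a check that the
;; implementation sees it as a subtype of its internal fixnum range.
(deftype small-int () '(signed-byte 16))

;; SUBTYPEP should return T in any implementation whose fixnums
;; cover at least 16 bits -- i.e., essentially all of them.
(print (subtypep 'small-int 'fixnum))
```

A compiler that answers T here but still fails to use its fixnum arithmetic
for SMALL-INT declarations is broken in exactly the sense described above.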
But I think that the issue isn't quite as clear-cut as the anti-fixnum camp
is making it. In many implementations, a fixnum type constraint results in a
>10x performance improvement. People tuning programs for these
implementations cannot ignore this reality, and need some kind of handle on
a "good" integer subrange.
Even if the FIXNUM type specifier were flushed, the constants delimiting
the implementation's fixnum range should remain. Of course, with these
constants, one can always:
(deftype fixnum () `(integer ,most-negative-fixnum ,most-positive-fixnum))
And of course, people will do this. And their programs will still run
fast, and will still run with no problem on the vast majority of
implementations with a reasonable fixnum size.
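As a sketch of how such a definition would be used in practice (the names
PORTABLE-FIXNUM and FAST-INCREMENT are hypothetical, chosen to avoid
shadowing the standard FIXNUM type):

```lisp
;; Sketch: rebuild the fixnum subrange from the standard constants,
;; then declare with it as one would with FIXNUM today.
(deftype portable-fixnum ()
  `(integer ,most-negative-fixnum ,most-positive-fixnum))

(defun fast-increment (x)
  (declare (type portable-fixnum x))
  ;; A compiler that recognizes the declared subrange can open-code
  ;; this addition with no bignum check on the argument.
  (the portable-fixnum (1+ x)))
```

Any compiler that handles FIXNUM declarations well today should compile
this identically, since the declared type is the same set of integers.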