
Re: small integers



    Date: Sat, 10 Oct 92 22:38:31 -0400
    From: Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU
    
        One idea I had for dealing with fixnums overflowing and either turning into
        bignums or signaling an exception was to use the concept of retry handlers;
        there could be a pair of retry handlers, one of which would convert the 
        result to a bignum and return, the other of which would just re-raise the
        condition. The programmer could select which handler he wanted active; the
        protocol in force should be determinable statically in most cases.
        
    I don't think this will work.  The fact that a function will return only a
    fixnum, if it returns anything at all, must be known at compile time if you
    are to generate optimal code.  Handlers are established dynamically, not
    lexically, so the compiler could not be sure what policy would be in effect
    when a function is called.

Right.  To explain further, an important part of fixnum compilation
is that the consumer of the operation's result is then allowed to
skip the check for a bignum.
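
Concretely, the sort of thing that buys you (a sketch in Common Lisp
terms; whether a compiler actually omits the check is
implementation-dependent):

    (defun add3 (a b c)
      ;; With the arguments declared fixnum and safety off, the compiler
      ;; may open-code + as plain machine adds, and whoever consumes the
      ;; result of the outer + never tests for a bignum.  That is exactly
      ;; the check that cannot be omitted if a dynamically established
      ;; handler might turn the result into a bignum at run time.
      (declare (fixnum a b c)
               (optimize (speed 3) (safety 0)))
      (the fixnum (+ (the fixnum (+ a b)) c)))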

However, extracting the kernel of the idea...

I could imagine a macro which compiles a block of code both ways,
and restarts the computation using the generic code if it gets
an overflow.  (Side effects outside the scope would have to be
restricted).
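
Something along these lines, say.  (A sketch only: the names
WITH-FIXNUM-ATTEMPT and FIX are made up, and it assumes an
implementation that checks THE assertions in safe code; the standard
merely leaves the consequences of a violated THE undefined.)

    (defmacro with-fixnum-attempt (&body body)
      ;; Compile BODY twice: once with FIX expanding into a checked
      ;; fixnum assertion, once with FIX expanding into nothing.  If the
      ;; fast copy signals a TYPE-ERROR -- some intermediate result was
      ;; not a fixnum -- rerun the whole block generically.  Since the
      ;; fast copy may have run partway, BODY must not have side effects
      ;; visible outside the block.  (A real version would use its own
      ;; condition type rather than TYPE-ERROR.)
      `(handler-case
           (macrolet ((fix (form) `(the fixnum ,form)))
             ,@body)
         (type-error ()
           (macrolet ((fix (form) form))
             ,@body))))

    ;; e.g.  (with-fixnum-attempt
    ;;         (fix (+ (fix (* x y)) z)))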

One other comment:  Bob Cassels says:
    If I'm willing
    to claim without any proof that the integer won't overflow, I should have a
    way to say that.  But that last claim shouldn't be so easy or natural to
    make, since if I'm wrong the user will suffer.

I think it is critical that this be easier to say than the more
usual thing, which is to do the calculation wrong and never
tell the user he's working with corrupt data.  In my theology
of programs, the lowest level of hell is occupied by programs
which appear to do useful work, but actually just lie to you,
especially unpredictably.

As bad as crashing & burning is from the user standpoint, it
is preferable to hidden failure.

Rather than pure declarations that remove safety in favor of
speed, I'd rather see type assertions that let the user say
"Don't let anything larger than 1024 past here", and which
the compiler can then use to make type inferences, but which
will signal an error if the assertion is violated.
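
In Common Lisp terms that might look like this (a sketch; how much
inference a compiler actually draws from the checked assertion varies):

    (defun scale (x)
      ;; "Don't let anything larger than 1024 past here."  CHECK-TYPE
      ;; signals a correctable error if X is out of range; past that
      ;; point a compiler is entitled to assume (INTEGER 0 1024) for X,
      ;; so it can know that (* 16 X) is at most 16384 -- a fixnum on
      ;; any implementation.
      (check-type x (integer 0 1024))
      (* 16 x))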

If you do your checking outside the loop, it is really very
rare that you can't afford to do range checking!
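
For instance (a sketch; the particular bounds are not the point, and
whether a compiler exploits the declarations is up to it):

    (defun checked-sum (samples)
      ;; All the range checking happens here, once, outside the loop ...
      (assert (<= (length samples) 100))
      (assert (every (lambda (s) (typep s '(integer 0 1024))) samples))
      ;; ... so the loop body itself needs no checks at all: the total
      ;; can never exceed 100 * 1024 = 102400, well inside fixnum range.
      (let ((total 0))
        (declare (type (integer 0 102400) total))
        (dolist (s samples total)
          (declare (type (integer 0 1024) s))
          (incf total s))))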

I really think that multiple compilation is a more viable
approach than most people think.  The amount of code involved
in this sort of issue is trivial except in special memory-limited
situations.  Again, it only makes sense to do this sort of
optimization on small pieces of your code.

It's usually better to get the speed improvement by changing
the algorithm, instead.  Provide tools that let the developer
see just where bignums are being used under different circumstances,
and let him write algorithms that don't need large numbers to
get the job done.
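
Such a tool could be as simple as a macro wrapped around the suspect
arithmetic.  (Hypothetical names: NOTE-BIGNUMS and *BIGNUM-SIGHTINGS*
are not part of anything existing.)

    (defvar *bignum-sightings* (make-hash-table :test #'equal))

    (defmacro note-bignums (label form)
      ;; Evaluate FORM and record, under LABEL, how often its result
      ;; turned out to be a bignum.  Sprinkled over the inner arithmetic
      ;; of an algorithm, it shows where the large numbers really arise.
      (let ((result (gensym "RESULT")))
        `(let ((,result ,form))
           (when (and (integerp ,result) (not (typep ,result 'fixnum)))
             (incf (gethash ,label *bignum-sightings* 0)))
           ,result)))

    ;; e.g.  (note-bignums 'running-total (+ total delta))
    ;; and then inspect *BIGNUM-SIGHTINGS* after a run.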

Another possibly useful technique is to provide 64-bit integers
for use in interior calculations, and provide ways to store
these.  This then boils down to the same set of issues as
unboxed floating-point.  I find it difficult to imagine an
algorithm "accidentally" overflowing 64 bits.