
Re: small integers



    Message-Id: <9210092040.AA20496@cambridge.apple.com>
    Date: Fri, 09 Oct 92 16:38:00 EDT
    From: "David A. Moon" <moon@cambridge.apple.com>
    To: Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU
    Subject: Re: small integers 

    
    > Date: Fri, 09 Oct 92 03:22:39 -0400
    > From: Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU
    
    To be successful, many implementations of Dylan are going to have to
    integrate well with the system environment in which they find
    themselves.  It is impossible to play in the Macintosh world without
    32-bit integers; [...] I imagine most other environments, such as most
    dialects of Unix, also depend on 32-bit integers.

This is exactly why CMU CL has special code generators for signed and
unsigned 32-bit arithmetic.  As long as you are doing word-integer
arithmetic, there is no reason to invoke the overhead of arbitrary
precision.
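
To make that concrete, here is a minimal Common Lisp sketch (assuming CMU
CL; the particular function and declarations are made up for illustration):

  ;; With the arguments declared and the result asserted to be a machine
  ;; word, the unsigned 32-bit code generators can open-code the add with
  ;; no arbitrary-precision overhead.
  (defun word-add (x y)
    (declare (type (unsigned-byte 32) x y)
             (optimize (speed 3) (safety 0)))
    (the (unsigned-byte 32) (+ x y)))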

I have given some thought to how word-integers might be supported in Dylan.
This is really a corner of the foreign interface problem.  Probably there
should be a fairly clear correspondence between C/C++ types and Dylan
classes; many new classes would need to be introduced as part of the
foreign interface.  Some of those might be <int>, <unsigned-int>, etc.
And of course, those classes should work with C semantics, which I believe
ignores overflow totally.
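
As a sketch of what I mean by C semantics (the class names above are only
suggestions), addition on such a class would simply wrap around rather than
signal anything; in Common Lisp terms, roughly:

  ;; Hypothetical illustration: a C-style unsigned add keeps only the low
  ;; 32 bits of the result, so overflow is discarded, never signalled.
  (defun c-unsigned-add (x y)
    (declare (type (unsigned-byte 32) x y))
    (ldb (byte 32 0) (+ x y)))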
    
    The combination of the <small-integer> class, the preferred Dylan
    programming style, and type inference in the compiler should eliminate
    the need to allow for bignum operands in most places where that is
    possible and the programmer cares about efficiency.

Well, that hasn't been our experience in Common Lisp.  Our programming
style types all function arguments and instance slots, and our type
inference is good, yet we still find the need to insert many FIXNUM output
type assertions in complex expressions.
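
A typical case looks schematically like this (the function itself is made
up, but the pattern is real):

  ;; Even with every argument declared, type inference cannot know that the
  ;; intermediate product and the final sum stay in fixnum range, so we end
  ;; up writing the output assertions by hand.
  (defun row-major-index (i j width)
    (declare (fixnum i j width))
    (the fixnum (+ (the fixnum (* i width)) j)))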

    It may still be necessary to check for overflow, but that can certainly
    be open-coded with small cost, especially when the overflow causes a
    type error rather than continuing the computation with bignums.  This
    is what Pascal (at least on the Macintosh) does, so it can't be too
    expensive.

Yes, I agree that the cost of detecting overflow and signalling an error is
small, which is why I propose that <small-integer> overflow should always
signal an error.  However, in an overflow-to-bignum scheme, the only way
that you could signal an error instead of proceeding is if there were an
output type assertion, which won't be true of intermediate results in
complex expressions.
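
For example (again schematic), the inner product below is an intermediate
result with no assertion of its own; under overflow-to-bignum it would
quietly leave fixnum range, and nothing could signal an error before the
final result is checked, if it is checked at all:

  (defun scaled-offset (a b c)
    (declare (fixnum a b c))
    ;; (* a b) has no output type assertion, so an overflow here just
    ;; conses a bignum instead of signalling.
    (the fixnum (+ (* a b) c)))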
    
    There are published trace-scheduling-like techniques that [...]
    doesn't CMU's Common Lisp compiler do this?

Nope.  Self-style loop splitting would also work, but we'd really rather
not waste compile time and code size dealing with overflows that never
happen.  We just put in declarations until the compiler tells us it's
happy.  This technology does work; I just think we can do better.
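
The workflow looks roughly like this (the code is illustrative, not taken
from our sources): compile, read the efficiency notes, and add declarations
until the generic arithmetic disappears:

  (defun sum-fixnum-vector (v)
    (declare (type (simple-array fixnum (*)) v)
             (optimize (speed 3)))
    (let ((sum 0))
      (declare (fixnum sum))
      ;; The output assertion on + is what finally silences the compiler's
      ;; note about generic addition.
      (dotimes (i (length v) sum)
        (setq sum (the fixnum (+ sum (aref v i)))))))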
    
    My question of whether it's feasible to eliminate the overflow checking
    through compiler optimization most of the time hasn't been answered
    yet.  I already know it's feasible to eliminate the type checking, so
    only the overflow checking remains.

Yes, the handling of overflow is all we are disagreeing about.  My answer is
that:
 -- yes, it is usually possible, but
 -- type inference fails often enough to be annoying, and
 -- declaring appropriate subranges imposes a burden not present in other
    languages. 

  Rob