Re: small integers
- To: "Robert A. Cassels" <cassels@CAMBRIDGE.APPLE.COM>
- Subject: Re: small integers
- From: Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU
- Date: Sat, 10 Oct 92 19:15:07 -0400
- Cc: info-dylan@CAMBRIDGE.APPLE.COM
- In-reply-to: Your message of Sat, 10 Oct 92 12:07:35 -0500. <9210101602.AA22614@cambridge.apple.com>
> So I personally would prefer a system where the default was to do
> arithmetic as mathematically as possible, because that's the way I'd like
> to think about the problem. [If I wanted everything mod 32, I'd say that.]
> Then the question is how to make it as easy as possible for me to write
> code that the compiler can turn into efficient machine instructions. I
> would like the compiler to figure out as much as it can about integer
> ranges. In cases where I know something about the range, I should have
> easy, natural ways to give the compiler that information. If I'm willing
> to claim without any proof that the integer won't overflow, I should have a
> way to say that. But that last claim shouldn't be so easy or natural to
> make, since if I'm wrong the user will suffer.
Well, I have to agree with most of that except the last sentence. If we
must choose between robustness (rolling over into bignums) and maximum
efficiency (no bignums, at the risk of an overflow) as the default for
programs written in the most obvious, naive way, then it's politically
correct to choose robustness as the default.
There is a question of degree, however. We're not talking about just a
little bit of efficiency here. Declaring everything in sight to be fixnum
is one of the most important ways to improve your code speed in Common
Lisp, and it often makes a very dramatic difference. And in this case I
think we are talking about just a little bit of added robustness.
Programmers often talk about the importance of robust code and then, when
the chips are down, go use whatever awful tool gives them the best
performance.
I've got no problem with making robustness the default, as long as I can
*easily* get C-like efficiency (no bignums, thank you very much) when
that's what is most important to me. If you make that hard in Dylan -- for
my own good, of course -- then I'm liable to go use C instead. (Well, OK,
*I* won't, but many programmers will...)
Speaking just for myself (Rob is off at the ASPLOS conference), I've come
around to the view that something like Jeff Piazza's proposal is the best
compromise in terms of meeting all the goals people have expressed. The
default action for arithmetic could be "correct", with the compiler doing
whatever is possible (usually not much) to make the code efficient, based
on the user's declarations. But there would also be a big switch -- a
declaration or whatever -- that would allow you to choose fixnum arithmetic
(with overflow checking) for some chunks of your program. And there would
be specialized fix+ and int+ operations that could be used by programmers
who need fine-grained control over the behavior of integer arithmetic.
This seems to meet most of the needs that have been expressed so far.
There is still only one integer whose value is 5; the fixnum/bignum split
is a matter of representation, not a user-visible type distinction. For
naive users, correct
arithmetic is the default. Discussion of the big switch and the
specialized arithmetic functions can be in a section of the manual reserved
for people who want to worry about "micro-tuning". On the other hand,
people who want maximum no-bignums efficiency can get it with a single
declaration -- no need to write a thousand arcane integer-range
declarations and THE statements. And people who really feel the need to
mix bignum arithmetic with a fixnum-only loop counter can get that by using
the more specialized functions.
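To make the distinction concrete (this is my own illustration in Python, not
Dylan; the 30-bit fixnum range is an assumption for the example), Python's
built-in integers already have the "correct" roll-over-into-bignum semantics,
and the big-switch behavior can be simulated as a checked fixnum add:

```python
# Sketch of two of the integer-addition behaviors under discussion.
# Python's int already gives "correct" maybe-bignum arithmetic; the
# fixnum variant is simulated here with an assumed 30-bit tagged range.
FIXNUM_MAX = 2**29 - 1
FIXNUM_MIN = -(2**29)

def correct_add(a, b):
    """Default: mathematically correct; the result may grow into a bignum."""
    return a + b

def fixnum_add(a, b):
    """The 'big switch' behavior: fixnum-only, with overflow checking."""
    result = a + b
    if result > FIXNUM_MAX or result < FIXNUM_MIN:
        raise OverflowError("fixnum arithmetic overflowed")
    return result

print(correct_add(FIXNUM_MAX, 1))   # -> 536870912, quietly a bignum
try:
    fixnum_add(FIXNUM_MAX, 1)       # same sum, but the check fires
except OverflowError as e:
    print("overflow detected:", e)
```

A hypothetical fix+ would be fixnum_add without even the overflow check
(wrapping silently), for programmers who want full control.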
A few details:
I don't think it will work to make + a macro. It needs to be a generic
function that dispatches on its argument types. It is the integer method
for binary+ that must somehow turn into different compiled code depending
on some declaration.
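To sketch the point in Python terms (a toy model, not Dylan's actual
dispatch machinery): a generic function chooses a method from its
arguments' classes at run time, which a macro cannot do, and it is that
per-class method the compiler would specialize:

```python
# Toy model of + as a generic function: binary_add picks a method from
# the class of its first argument at run time.
from functools import singledispatch

@singledispatch
def binary_add(a, b):
    raise TypeError(f"no binary+ method for {type(a).__name__}")

@binary_add.register(int)
def _(a, b):
    # The integer method: this is the code a compiler could turn into
    # fixnum-only or maybe-bignum instructions based on a declaration.
    return a + b

@binary_add.register(str)
def _(a, b):
    # A method on another class, showing why + must dispatch.
    return a + b

print(binary_add(2, 3))        # -> 5, via the integer method
print(binary_add("fix", "num"))  # -> 'fixnum', via the string method
```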
Before someone suggests it, I don't like the idea of making + use "correct"
maybe-bignum semantics in all situations, forcing the user to write fixnum+
to get the more efficient fixnum-only form. Some users will write a lot of
fixnum-only code, and we shouldn't force them to make their code ugly in
this way. So I think the big switch that affects the interpretation of +,
-, and friends is an important feature of this proposal.
I like the names <fixnum> and <bignum> for the two sub-classes of
<integer>, rather than <short-integer> and <long-integer>. Users would
hardly ever type <bignum>, but would often declare variables to be of type
<integer> or <fixnum>. I think that <short-integer> is too verbose, and it
seems a bit second-class. People might not use it when they should. But
this is just a preference, not a big issue.
Scott E. Fahlman
School of Computer Science
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213