
Re: Quotation concerning rational arithmetic



I stumbled across this familiar article again quite coincidentally,
looking for something else: "Computer Symbolic Math and Education:
A Radical Proposal" by David R. Stoutemyer (U. Hawaii).  Excerpts
follow.  All emphases (*'s indicate italics) are Stoutemyer's.

   "Now for the most damnatory indictments of commonly-taught languages:
	10. *The numbers and their arithmetic do not correspond
	    to those generally taught in schools!*
	11. *The numbers and their arithmetic do not correspond
	    to those used in everyday life!*
   "The limited-precision integer arithmetic of these languages is
bad enough in these regards, even without its usual overflow asymmetry
induced by 2's complement arithmetic.  For the floating-point
arithmetic of these languages, we can add the indictment:
	12. *Few other than the very best numerical analysts
	    fully understand the implications!*
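
As a concrete illustration of the overflow asymmetry he mentions: in a
32-bit 2's-complement word there is one more negative value than positive,
so negating the most negative integer wraps around.  A quick Python sketch
(the wrapping helper is my own illustration, not anything from the article):

    def to_int32(n):
        # Wrap an arbitrary Python integer to a signed 32-bit 2's-complement value.
        n &= 0xFFFFFFFF
        return n - 0x100000000 if n >= 0x80000000 else n

    INT32_MIN = -2**31              # -2147483648
    INT32_MAX = 2**31 - 1           #  2147483647: one fewer positive value than negative
    print(to_int32(-INT32_MIN))     # -2147483648 -- negating the minimum wraps back to itself
    print(to_int32(INT32_MAX + 1))  # -2147483648 -- one past the maximum wraps negative
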
   "In contrast, the arithmetic that students learn in elementary
school is:
	1. indefinite-precision rational arithmetic.
	2. rounded and exact indefinite-precision decimal-fraction
	   arithmetic.
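
Both kinds of schoolroom arithmetic are easy to demonstrate with Python's
standard fractions and decimal modules (my example, not the article's):

    from fractions import Fraction
    from decimal import Decimal, getcontext

    # Indefinite-precision rational arithmetic: exact, with no size limit.
    print(Fraction(1, 3) + Fraction(1, 6))    # 1/2, exactly
    print(Fraction(2, 3) ** 10)               # 1024/59049, exactly

    # Decimal-fraction arithmetic: exact where possible, rounded on request.
    print(Decimal("0.10") + Decimal("2.35"))  # 2.45, exactly
    getcontext().prec = 5
    print(Decimal(1) / Decimal(3))            # 0.33333, rounded to 5 digits
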
   "True, in high-school chemistry or physics students may learn
scientific notation, which could be regarded as indefinite-magnitude,
arbitrary-but-fixed-precision, rounded-decimal arithmetic.  In contrast,
the floating-point arithmetic of commonly-taught languages is finite
magnitude, with only 1 to 3 alternative precisions, usually chopped
nondecimal.  All of these internal differences from true scientific
notation have external manifestations which are baffling to most
people. ...
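
A familiar example of those baffling external manifestations, in Python
(again mine, not the article's): binary floating point cannot represent
0.1 exactly, so an identity every schoolchild expects simply fails.

    from decimal import Decimal
    from fractions import Fraction

    print(0.1 + 0.2 == 0.3)                   # False in binary floating point
    print(0.1 + 0.2)                          # 0.30000000000000004

    # The same sum behaves the way schoolwork suggests in decimal or rational arithmetic.
    print(Decimal("0.1") + Decimal("0.2"))    # 0.3
    print(Fraction(1, 10) + Fraction(2, 10))  # 3/10
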
   "Admittedly, extended sequences of indefinite-precision arithmetic
operations on experimental data suffer an unjustifiable growth in digits,
but to that one can respond:
	1. Render unto floating-point arithmetic that which one must.
	2. Render unto more rational arithmetic all that one can.
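
In other words, keep the bookkeeping exact and convert only at the
boundary.  A small sketch of that policy in Python (my illustration):

    from fractions import Fraction

    # Exact rational accumulation, with a single conversion at the final readout.
    tenths = sum(Fraction(1, 10) for _ in range(10))
    print(tenths, float(tenths))              # 1 1.0

    # The same sum carried out entirely in binary floating point drifts.
    drift = sum(0.1 for _ in range(10))
    print(drift == 1.0, drift)                # False 0.9999999999999999
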
   "Perhaps if floating-point computation becomes a choice rather than
an imposition, users will regard floating-point with more of the caution
it deserves.
   "Perhaps indefinite-precision rational and decimal-fraction arithmetic
are inevitably less efficient than their respective finite-precision
floating-point and integer counterparts.  However:
	1. The difference in efficiency would greatly decrease if the
	   indefinite-precision arithmetics were microcoded or hardwired
	   as are their finite-precision counterparts in most computers.
	2. ... [remark on "floating-slash arithmetic"]
	3. Even if the indefinite-precision arithmetic is substantially
	   slower, computing has become so inexpensive that for the
	   computational needs of most people, *the cost of indefinite-
	   precision computation is negligible compared to the labor of
	   assessing results done in an unnatural arithmetic*.
   "How negligible do computing costs have to become before software
and hardware designers abandon this historical obsession with efficiency?
[It'll never happen!  But anyway... --GLS]  If a certain computation
costs 10 times as much in rational arithmetic as in floating-point,
and the latter method was deemed worthwhile a few years ago when
computer costs were more than 10 times as much, is it not worthwhile now
to do the computation in a more humane arithmetic?
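
The 10-to-1 figure is his hypothetical, not a measurement, but anyone can
get a rough ratio for their own machine and workload along these lines
(a Python sketch; the harmonic sum is just an arbitrary digit-hungry example):

    import timeit
    from fractions import Fraction

    def float_sum(n=300):
        return sum(1.0 / k for k in range(1, n + 1))

    def rational_sum(n=300):
        # Exact harmonic sum; the denominators grow rapidly, which is the point.
        return sum(Fraction(1, k) for k in range(1, n + 1))

    t_float = timeit.timeit(float_sum, number=20)
    t_rational = timeit.timeit(rational_sum, number=20)
    print("rational / floating-point time ratio: %.1f" % (t_rational / t_float))
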
   "In the early days of computers, scientific computation was usually
done using binary fixed-point fractions having magnitudes restricted
to lie between 0 and 1.  The widespread acceptance of floating-point
brought substantially greater convenience for a large loss in efficiency.
For most work, computer costs have now decreased enough to justify
another such step in favor of human understanding.  Those who cling
to efficiency-worship should defend fixed-point fractions rather
than floating-point."
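
For readers who never met them, those binary fixed-point fractions were
just integers with an implied scale factor.  A minimal Python sketch
(the 31-bit scale and the helper names are illustrative, mine rather
than anything historical):

    BITS = 31
    SCALE = 1 << BITS            # a value x is stored as the integer int(x * SCALE)

    def to_fixed(x):
        # Encode a real number in [0, 1) as a count of 2**-31 units.
        return int(x * SCALE)

    def fixed_mul(a, b):
        # Multiply two fixed-point fractions; rescale the double-length product by shifting.
        return (a * b) >> BITS

    a, b = to_fixed(0.75), to_fixed(0.5)
    print(fixed_mul(a, b) / SCALE)   # 0.375 -- exact here, chopped in general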

So how about it, fixed-point fans??