floating point numbers in Lucid Lisp on the Apollo
- To: labrea!jpg%ALLEGHENY.SCRC.Symbolics.COM@labrea.stanford.edu
- Subject: floating point numbers in Lucid Lisp on the Apollo
- From: Jon L White <email@example.com>
- Date: Tue, 29 Sep 87 13:50:58 PDT
- Cc: firstname.lastname@example.org
- In-reply-to: Jeffrey P. Golden's message of Sat, 26 Sep 87 19:57 EDT <870926195737.9.JPG@SPOONBILL.SCRC.Symbolics.COM>
    Date: Sat, 26 Sep 87 19:57 EDT
    From: Jeffrey P. Golden <jpg@ALLEGHENY.SCRC.Symbolics.COM>

    The DOMAIN/CommonLISP Reference Manual p.12-6 says:
    "Common Lisp designates four floating-point number formats:
    short-float, single-float, double-float, and long-float. ...
    DOMAIN/CommonLISP represents all four types of floating-point
    numbers in the single-float format."  I had previously
    discovered this for myself.

    I expect some Apollo machines have double-floats. True?
    So, I guess this just means that Lucid has decided not to give
    its users access to Apollo double-floats at this time?
    Can anyone clarify this?
The reference in the manual is a bit misleading; there are four
floating-point type names as specified by CLtL -- not necessarily
four floating-point number formats. CLtL pages 18 & 19 indicate how
these four type names should map onto whatever implementational types
are actually supported. However, note CLtL, page 17, where it says
"The precise definition of these categories is implementation-dependent."
Lucid's implementation will provide "dependencies" which permit the
fastest executable code given the hardware available, even though
the resulting code may operate somewhat differently when run on another
machine with different hardware, or when compiled under differing
optimization settings.
The 68020-based Apollos (i.e., all current models), as well as the SUN
3/160's and 3/260's, have MC68881 floating-point co-processors attached.
Lucid's "pdlnum" compiler generates code that uses raw floating-point
operations in the co-processor where appropriate, and thus many, if not
most, open-coded floating-point operations will be carried out in the
co-processor's "extended" 80-bit format (64 bits of mantissa, 15 bits
of exponent). This doesn't match any of the
"usual" four formats that you mention; and furthermore, this may be a
source of interpreter/compiler differences. That is, the particular points
in an arithmetic expression where "rounding" to 32 or 64 bits occurs
will generally be dictated by the need to store intermediate results in
main memory; not only will these points not be obvious to the programmer,
but they may vary from one speed/safety setting of the compiler to
another. This is the classic price you pay for an optimizing compiler.
If explicit rounding points are more important to you, then you may have to
add code to your program to control them (or else not invoke the
optimizing compiler).
It is true that the *current* release does not support a stored format
larger than 32-bits. That will change; in particular, we're developing
support for the 64-bit stored format. We have done a study in which we
found *no* numerically-intensive application that wanted a LONG-FLOAT
format larger than the typical 64-bit IEEE format (52 bits of stored
mantissa, 11 bits of exponent). So we are not likely to go out of the
way to provide a stored format larger than that until the need arises.
Packed arrays of floats are a different matter. We will support various
packing factors, including at least 32- and 64-bit formats. [This is
somewhat analogous to packed arrays of integers, except that more
processing is needed to pack and unpack].
-- JonL --