
precision of long floats

Bruno Haible wrote (in impnotes.txt):
"(SETF (LONG-FLOAT-DIGITS) 3322) sets the default precision of long
 floats to 1000 decimal digits."

How does he compute this? What I need is a function

(defun precision (n)
   ... )

to set the precision to n decimal digits. Any ideas?
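A sketch of one possible answer, assuming the figure comes from the usual bits-per-decimal-digit conversion: n decimal digits need about n * log2(10) ≈ n * 3.3219 bits, and 1000 * log2(10) ≈ 3321.93, which rounds up to 3322. Under that assumption (and assuming LONG-FLOAT-DIGITS takes the precision in bits, as the impnotes quote suggests), the function could look like:

;; Hypothetical sketch, not tested against CLISP itself.
;; n decimal digits require about (* n (log 10 2)) bits of mantissa.
(defun precision (n)
  "Set the default long-float precision to about N decimal digits."
  (setf (long-float-digits) (ceiling (* n (log 10 2)))))

;; Example: (precision 1000) would set the precision to 3322 bits,
;; since 1000 * log2(10) = 3321.93..., rounded up with CEILING.

Note that (log 10 2) returns a single-float approximation of log2(10); for very large n one might prefer a more precise constant, but for realistic digit counts the CEILING absorbs the rounding error.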

Greetings, Joerg

Joerg Baus | baus@sbusol.rz.uni-sb.de