
precision of long floats



Joerg Baus writes:
 > 
 > Bruno Haible wrote (in impnotes.txt):
 > "(SETF (LONG-FLOAT-DIGITS) 3322) sets the default precision of long
 >  floats to 1000 decimal digits."
 > 
 > How does he compute this? What I need is a function
 > 
 > (defun precision (n)
 >    ... )
 > 
 > to set the precision to n decimal digits. Any ideas?

In Common Lisp, (log n m) is the logarithm of n in base m.  It gives the
factor by which the number of digits grows when you go from a
representation in base n (for example 10) to a representation in base m
(for example 2): d decimal digits need about (* d (log 10 2)), i.e.
roughly 3.32*d, binary digits.
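
For example, this reproduces the 3322 in the impnotes quote above, for
1000 decimal digits:

(log 10 2)                      ; => 3.321928...
(ceiling (* 1000 (log 10 2)))   ; => 3322 binary digits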

How about

(defun precision (n)
  "Set the default long-float precision to N decimal digits."
  (setf (long-float-digits) (ceiling (* n (log 10 2))))
  ;; return the binary precision actually set
  (long-float-digits))

?
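
A quick usage sketch (assuming a CLISP where LONG-FLOAT-DIGITS is
accessible as in the PRECISION definition above):

(precision 1000)      ; => 3322, the binary precision now in effect
(long-float-digits)   ; => 3322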

 	Joerg.
hoehle@inf-wiss.ivp.uni-konstanz.de