precision of long floats
- To: Joerg Baus <baus@rz.uni-sb.de>
- Subject: precision of long floats
- From: hoehle@post.inf-wiss.ivp.uni-konstanz.de (Joerg-Cyril Hoehle)
- Date: Fri, 4 Jun 93 17:10:39 +0200
- Cc: clisp-list@[129.13.115.2]
- In-reply-to: <9306041442.AA17742@sbusol.rz.uni-sb.de>
Joerg Baus writes:
>
> Bruno Haible wrote (in impnotes.txt):
> "(SETF (LONG-FLOAT-DIGITS) 3322) sets the default precision of long
> floats to 1000 decimal digits."
>
> How does he compute this? What I need is a function
>
> (defun precision (n)
> ... )
>
> to set the precision to n decimal digits. Any ideas?
In Common Lisp, (log n m) is the base-m logarithm of n. It gives the
factor by which a number's digit count grows when you go from a
representation in base n (for example 10) to one in base m (for
example 2): a number with d decimal digits needs about (* d (log 10 2))
binary digits.
How about
(defun precision (n)
  "Sets the default long-float precision to N decimal digits."
  (setf (long-float-digits) (ceiling (* n (log 10 2))))
  (long-float-digits))   ; return the binary precision actually set
?
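As a cross-check against the impnotes.txt figure quoted above (plain
arithmetic, nothing CLISP-specific): 1000 decimal digits indeed come
out to 3322 bits.

```lisp
;; log_2(10) is about 3.3219, so 1000 decimal digits
;; need ceiling(1000 * 3.3219...) = 3322 binary digits.
(ceiling (* 1000 (log 10 2)))
;; => 3322
```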
Joerg.
hoehle@inf-wiss.ivp.uni-konstanz.de