Re: Mini benchmark.
John Carter wrote:
1) It reflects floating point performance (which I am
most concerned with).
The reason that CLISP isn't faster for this function is
that the multiply isn't special-cased by the compiler for different
kinds of numbers; the generic multiply function has to figure out at
run time what type of numbers it is dealing with, and only then do the
operation.
To use CLISP effectively for computationally intensive numeric tasks,
you will need to either make careful use of the built-ins, hook up
foreign code via the FFI, or use pipes/sockets to talk to C programs.
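For the FFI route, here is a minimal sketch, assuming CLISP's FFI
package and the C library's pow(); the exact option syntax may vary
between CLISP versions, so treat the details as an assumption:

  ;; Sketch: call the C library's pow() from CLISP through the FFI.
  ;; Option names follow FFI:DEF-CALL-OUT; check your CLISP version.
  (ffi:def-call-out c-pow
    (:name "pow")
    (:arguments (base ffi:double-float) (exponent ffi:double-float))
    (:return-type ffi:double-float)
    (:language :stdc))

  ;; (c-pow 2.0d0 10.0d0) => 1024.0d0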
[It is worth noting that CL programs that use complex data structures
will likely have completely different performance characteristics
across CL implementations. One example that stands out for me, where
CLISP is as fast or faster than even CMU Lisp, is the Koza Genetic
Programming kernel. Even if you declare the heck out of the Koza
kernel, CLISP will still be as fast. I think the reason is that CLISP
can ascertain lots of different types just by looking at the address
(at least on most Unix machines). Other likely factors are the memory
footprint, the performance of hash tables, and garbage-collection
performance.]
The main thing you can be very sure of is that CL is not to blame.
Python (the CMU Lisp compiler) will give you performance equivalent to C.
(My adaptation of John's function. It computes (1 + 1/n)^n, which
converges to e for large n, hence the 2.718... results below.)
(defun test (n)
  (declare (type (unsigned-byte 29) n))
  (let ((x 1.0d0)
        (y (+ 1.0d0 (/ 1.0d0 n))))
    (declare (type double-float x y))
    (loop
      (when (zerop n) (return x))
      (setq x (the double-float (* x y)))
      (decf n))))
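For Python to keep the double-floats unboxed in that loop, it usually
helps to compile with aggressive optimization settings as well; a
sketch (the settings actually used for the timings below are not
shown, so this is an assumption):

  ;; Assumed optimization settings -- not necessarily those used for
  ;; the timings shown below.
  (declaim (optimize (speed 3) (safety 0)))
  (compile 'test)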
Sure, short floats are faster, but they have the minor disadvantage
of being fairly meaningless for this application. :-)
C version:
marcus@sayre[~] $ time a.out 200000000
y=1.000000, x=2.718282
real 12.25
user 11.92
sys 0.04
CMU Lisp:
* (time (test 200000000))
Evaluation took:
12.1 seconds of real time
11.982946 seconds of user run time
0.00745 seconds of system run time
0 page faults and
80 bytes consed.
2.718281805141847d0