
Re: hmadorf on Lisp vs Fortran



> Date: Mon, 26 Oct 1992 16:19:05 -0500
> To: info-mcl
> From: hmadorf@eso.org (by way of alms@cambridge.apple.com (Andrew LM Shalit))
> 
> ....
> (defun factorial-b (n)
>   "Calculates n-factorial recursively and exactly with declarations."
>   (declare (type integer n)
>            (optimize (speed 3) (safety 0)))
>   (if (= n 0)
>     1
>     (* n (the integer (factorial-b (1- n))))))
> ;;(time (factorial-b 200))
> #|
> Apple Mac SE:
> (FACTORIAL-B 200) took 302 milliseconds (0.302 seconds) to run.
> Of that, 9 milliseconds (0.009 seconds) were spent in The Cooperative Multitasking Experience.
>  16152 bytes of memory allocated.
> Apple Mac SE/30:
> (FACTORIAL-B 200) took 47 milliseconds (0.047 seconds) to run.
>  16152 bytes of memory allocated.
> |#
> ;; Conclusion: the declarations do not help in MCL 2.0f

I think you've been told this a few times before, but since
it was sent to info-mcl again I'll say it once more, because
someone should.

Integer declarations don't make any difference in most Common Lisps,
since integers have unlimited range.  Try a fixnum declaration
if that's what you mean.  Of course, you can't compute 200
factorial within the fixnum range, and you couldn't compute
it in Fortran either, since Fortran integers have about the
same limited range as Lisp fixnums.  So I don't know
what you think this part of your benchmark proves; it doesn't
seem to have much to do with what you're trying to measure.
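
Just to make the distinction concrete, here is a rough sketch (my
code, not yours, and the name is made up) of what a fixnum-declared
version would look like.  It is only valid while the result still
fits in a fixnum, so it is no use for 200 factorial, but it shows the
kind of declaration a compiler can actually exploit:

(defun factorial-fix (n)
  "Fixnum factorial sketch; only valid while the result fits in a fixnum."
  (declare (fixnum n)
           (optimize (speed 3) (safety 0)))
  (if (= n 0)
    1
    (the fixnum (* n (factorial-fix (1- n))))))
;; (time (factorial-fix 10))  ; 10! = 3628800 fits in most fixnum ranges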

All the log-factorial routines, which are closer to the Fortran
program you're comparing against, take the same amount of time in
MCL because, as Bill said, MCL has no support for inline floating
point arithmetic at present, so the float declaration doesn't do
anything.  It should have been a single-float, double-float,
short-float, or long-float declaration anyway; just asking for a
generic float isn't likely to help, because the compiler has no way
to know which of the four kinds of float you want.
(Not all Common Lisps have all four kinds.)
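
For example (again a sketch of mine, not your code, with a made-up
name): a log-factorial with the precision spelled out would look
something like the following.  Whether the declarations buy you
anything depends entirely on whether the compiler open-codes
double-float arithmetic, which MCL 2.0 does not:

(defun log-factorial-d (n)
  "Returns log(n!) as a double-float, computed iteratively."
  (declare (fixnum n)
           (optimize (speed 3) (safety 0)))
  (let ((sum 0.0d0))
    (declare (double-float sum))
    (do ((i 2 (1+ i)))
        ((> i n) sum)
      (declare (fixnum i))
      (incf sum (log (coerce i 'double-float))))))
;; (time (log-factorial-d 200))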

The SPARC-2 output you gave is very hard to read, but it seems
evident from all the garbage collections that the Lisp you used
on that machine does not attempt to optimize floating point
arithmetic.  Does it claim to do so in its documentation?  If not,
it's the wrong thing to be measuring; if so, you should consult
the vendor of that Lisp to find out why your program isn't optimized
as expected (or you could try my suggestion above about specifying
which floating point precision you want).

Since you didn't give any benchmark results for the Fortran code,
just the source, I don't know what machine and compiler you used.
But I think if you want to measure the performance of Lisp for
floating point code, you should find a Lisp that claims to make
an effort to give good floating point performance (MCL does not),
and you should compare against a Fortran with comparable claims
running on the same machine.

And if you want to evaluate performance of various languages on
a CM-5, you should run your tests on a CM-5.  I would guess that
Thinking Machines would be happy to let you run a few tests over
the network.