
Timings



Since you chose to compare KCL with Lucid Common Lisp, I presume you have
made a tacit assumption that, of the "other Lisps around", Lucid is the
fastest?  If not, then maybe you should compare KCL to that Lisp, whichever
it is.

On the other hand, the numbers you quote for Lucid on some of the Gabriel
benchmarks don't accord with what I typically see on a Sun-3/160 (which
should be slower than your Sun-3/280?).   A few of the more confusing ones
to me are:

              AKCL     LUCID     KCL
. . . 
DIV2          3.217    6.460     3.217         
FRPOLY       48.683   32.980    52.000        
TAK           3.817    3.900     3.400         
PUZZLE        5.200    3.580     6.900         
. . . 

DIV2 comprises two tests -- one iterative and one recursive; I repeatably
get times like 1 second and 1 1/2 seconds respectively.  I just don't see 
any reasonable relation of those numbers to your 6.46 seconds.
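
For the record, the two DIV2 variants are, as best I recall, essentially
the following (a sketch from memory, not a verbatim quotation of the book):
    ;; Iterative version: walk the list two elements at a time,
    ;; collecting every other element.
    (defun iterative-div2 (l)
      (do ((l l (cddr l))
           (a () (push (car l) a)))
          ((null l) a)))
    ;; Recursive version of the same halving.
    (defun recursive-div2 (l)
      (cond ((null l) ())
            (t (cons (car l) (recursive-div2 (cddr l))))))
and each is timed as a separate test.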

Usually, FRPOLY is broken down by cases -- three somewhat related
polynomials raised to three different powers -- and no case I know of has 
a time anywhere near 32.98 seconds.  Was this a summation of all the cases,
or what?
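
Just to illustrate what I mean by "broken down by cases" -- this is a
hypothetical harness, not the book's code, and it assumes PEXPTSQ and the
polynomials R, R2, and R3 are already set up as in the book (the powers
below are only illustrative):
    ;; Time each polynomial/power case separately instead of reporting
    ;; one lump sum.
    (dolist (p (list r r2 r3))
      (dolist (n '(5 10 15))
        (time (pexptsq p n))))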

Roughly speaking, when I run (tak 18 12 6) interpretively, I get a time 
that is more than an order of magnitude slower than 3.9 seconds; and when 
I run it compiled, I get a time that is nearly an order of magnitude faster
than that.  Do we have the same TAK in mind?  I am using the TAK definition 
found in section 3.1.1 of Gabriel's book.
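
That definition, as best I recall (quoting from memory, so it may differ
trivially from the book's text), is:
    (defun tak (x y z)
      (if (not (< y x))
          z
          (tak (tak (1- x) y z)
               (tak (1- y) z x)
               (tak (1- z) x y))))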

Your time for PUZZLE is about 30% faster than mine (could that all be
explained by the 160/280 difference?).  But you couldn't have gotten it 
anywhere near the better values unless you had some reasonably placed 
declarations.  Does the KCL compiler take advantage of declarations?
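
To be concrete about what I mean by "reasonably placed declarations", here
is a made-up fragment (not the PUZZLE source) showing the sort of thing
that lets a compiler open-code the array references and the arithmetic:
    (defun count-true (v n)
      ;; Hypothetical example only: with these declarations the AREF and
      ;; the fixnum arithmetic need not go through generic routines.
      (declare (type simple-vector v)
               (fixnum n)
               (optimize (speed 3) (safety 0)))
      (let ((k 0))
        (declare (fixnum k))
        (dotimes (i n k)
          (when (svref v i) (setq k (the fixnum (1+ k)))))))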

Speaking of declarations . . . 

Note that TAK was printed in this book without declarations (indeed, Dick's
book was compiled before there were Common Lisps around).  A reasonable
addition of type declarations to this case yields a substantial difference 
-- possibly as much as 25% for Lucid Common Lisp, but maybe even more for
other "stock hardware" vendors [of course, this comes nowhere near explaining
the wide difference between your number and mine].  Since TAK *only* does
two things -- 1- and function-call -- a nano-variation in the code sequence
emitted by the compiler can easily be magnified into a major time increase.
Consequently, for CL, I would recommend using:
    (defun tak (x y z)
      (declare (fixnum x y z)
	       (optimize (speed 3) (safety 0) (space 0) (compilation-speed 0)))
      (cond ((not (< y x)) z)
	    (t
	      (tak
		(tak (the fixnum (1- x)) y z)
		(tak (the fixnum (1- y)) z x)
		(tak (the fixnum (1- z)) x y)))))
as the Common Lisp definition, for benchmarking purposes, of TAK.
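
(And, to be explicit, the numbers I care about are for the compiled
definition, e.g. something like:
    (compile 'tak)
    (time (tak 18 12 6))
using whatever your implementation's TIME reports for run time.)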

This sort of modification to the time-honored Gabriel benchmarks must be 
made for Common Lisp.  [In fact, Scott Fahlman called for the generation of 
a Common Lisp benchmark suite, since the Gabriel ones are clearly based on 
MacLisp capabilities.  For example, the translation note in section 3.1.3 
of the book would be useful to a MacLisp compiler, but because of the 
oddities of (TYPE (FUNCTION ...) ...) in Common Lisp, it would not be of 
much use in CL.]  The reason I say this is that declarations are the 
approved CL way to call for specific optimization qualities; and the Gabriel
functions are micro-benchmarks that are purely designed to test maximal 
speed; there is no intent to test debugging capability or programming ease.
In fact, some of the benchmarks published in the book were run with 
LispMachine multi-processing turned off, or with Dolphin screen display 
turned off; both of these are totally unrealistic programming situations, 
but that is what you do to get maximal speed.
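
Incidentally, regarding that section 3.1.3 translation note: the closest CL
analogue I can think of would be an FTYPE proclamation rather than a
(TYPE (FUNCTION ...) ...) declaration -- something like the following
sketch, assuming the declared TAK above:
    (proclaim '(ftype (function (fixnum fixnum fixnum) fixnum) tak))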

Benchmarking is a two-way street -- no theoretical speculations should
ever be believed without corroborating numbers; and no numbers should
ever be trusted without mitigating analysis.  I would be very curious
to hear your analysis of how KCL is winning; especially, why it is
that recent improvements have made the "consy" benchmarks so much
faster (e.g., BOYER, BROWSE, DERIV, DDERIV).


-- JonL --