
benchmarking



Let me answer a couple of your questions and then pose one to you.

The time measured in Bill Schelter's version of the Gabriel benchmarks
is definitely run time, not elapsed time.

The *-mod versions are not part of the official, original Gabriel
suite; they are versions that Bill Schelter produced.  I don't
remember with certainty why he added them, but I think he wanted to
show what you could get with slightly better arithmetic declarations.

Now, let me ask you a question.  When you ran your AKCL benchmarks,
did you proclaim the functions involved to take a fixed number of
arguments and to return a single value (for those functions for which
this was true)?  If you do that, I think you will find (e.g., by
running disassemble) that an AKCL function call almost always becomes
a C function call, which one can in general SPECULATE will run about
as fast as a function call can be made to run on a given Unix
machine.  On the other hand, if you don't make such a declaration,
you get results that are poorer, sometimes much poorer.  Bill's
version of the Gabriel benchmarks includes an automatic declaration
of this kind, for all Lisps.
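
For concreteness, here is a tiny sketch of the kind of check I mean;
the function name bar and the fixnum types are made up for
illustration, and the exact output of disassemble will of course vary
with the Lisp and the machine:

(proclaim '(function bar (fixnum fixnum) fixnum))  ;  bar: two fixnums in, one fixnum out

(defun bar (x y)
  (the fixnum (+ x y)))

(compile 'bar)
(disassemble 'bar)  ;  in AKCL this should show the generated C code; look for a direct C call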

Below is an extremely simple test file to illustrate this point.  If
you compile it and invoke, say, (report 10000000) [ten million calls]
or some other large number, I think you'll see what I mean.  When I
just ran that on a Sparc 2, I got these results:

   cmulisp beta on a sparc-2

   * (report 10000000) 
   do-cost 
   Evaluation took: 
     1.57 seconds of real time 
     1.55 seconds of user run time 
     0.0 seconds of system run time 
     0 page faults and 
     0 bytes consed. 
   regular-call-cost 
   Evaluation took: 
     8.58 seconds of real time 
     8.57 seconds of user run time 
     0.0 seconds of system run time 
     0 page faults and 
     0 bytes consed. 
   funcall-cost 
   Evaluation took: 
     9.88 seconds of real time 
     9.86 seconds of user run time 
     0.0 seconds of system run time 
     0 page faults and 
     0 bytes consed. 
   NIL 

   akcl 1.530 on a sparc-2

   do-cost 
   real time : 1.283 secs 
   run time  : 1.267 secs 
   regular-call-cost 
   real time : 3.800 secs 
   run time  : 3.783 secs 
   funcall-cost 
   real time : 5.067 secs 
   run time  : 5.017 secs 

If we subtract away the ``do-cost,'' i.e., the cost of running the
loop itself, I seem to get the result that akcl takes only
(3.8 - 1.3) = 2.5 seconds to do 10,000,000 function calls, whereas
cmulisp takes (8.6 - 1.5) = 7.1 seconds to do the same!  (On the
other hand, if I do not put in the proclamation for foo, then akcl
takes about 6 TIMES LONGER than it does with the proclamation!)  I
regret that I have no Allegro-or-Lucid-on-a-Sparc-2 to run for
comparison times.
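(In per-call terms, that works out to roughly 0.25 microseconds per
call for akcl versus roughly 0.7 microseconds per call for cmulisp.)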

----  a very little test file for function calling  ----


(proclaim '(optimize (compilation-speed 0) (safety 0) (speed 3) (debug 0)))

;  Proclaim that function foo takes one argument and returns one value.

(proclaim '(function foo (t) t))
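;  (In ANSI Common Lisp the equivalent declaration would be
;  (declaim (ftype (function (t) t) foo)).)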

(defun foo (x) x)

(defun do-cost (n)
  (do ((i n (1- i))) ((= i 0))
      (declare (fixnum i))
      3))  ; Do nothing.

(defun regular-call-cost (n)
  (declare (fixnum n))
  (do ((i n (1- i))) ((= i 0))
      (declare (fixnum i))
      (foo t)))  ;  Call foo in the ordinary way.

(defun funcall-cost (n fn)
  (declare (fixnum n))
  (do ((i n (1- i))) ((= i 0))
      (declare (fixnum i))
      (funcall fn t)))   ;  The value of the funcall is ignored. 

(defun report (n)
  (format t "do-cost")
  (time (do-cost n))
  (format t "regular-call-cost")
  (time (regular-call-cost n))
  (format t "funcall-cost")
  (time (funcall-cost n #'foo)))
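
For reference, a session using this file might look something like
the following; the file name is made up, and the compiled-file
extension depends on the Lisp (AKCL produces a .o file, for example):

(compile-file "call-test.lisp")   ;  hypothetical file name
(load "call-test.o")              ;  or whatever compiled file your Lisp produces
(report 10000000)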