Benchmarking
>Depending on your application (and the quality of your C compiler) KCL may
>perform as well as a native Lisp, or may be 3 or more times slower.
Or maybe 2 times faster.
To: boyer@cs.utexas.edu
Cc: olphie@CLI.COM, hunt@CLI.COM
Subject: New Lisp
Reply-To: wfs@CLI.COM
I have installed cmulisp on mongoose.
It is invoked with /usr/local/bin/cmulisp
Here is a Gabriel benchmark run on a SPARCstation 2.
[Note: you have to type (quit) to get the Lisp to exit; end-of-file alone
doesn't do it, as it does in Lucid, Allegro, and AKCL.]
Inserting file ~/chart-lisps
---Begin File ~/chart-lisps---
On a SPARCstation 2:
cmulisp version 15a
akcl 603 (compiled with the standard Sun cc of release 4.1.1)
Benchmark          AKCL   cmulisp   (times in seconds)
BOYER             1.550     2.140
BROWSE            2.717     7.480
CTAK              1.017     0.510
DDERIV            0.950     1.830
DERIV             0.683     1.630
DESTRU-MOD        0.267     0.287
DESTRU            0.258     0.287
DIV2              1.400     1.540
FFT-MOD           0.750     0.390
FFT              11.350     0.430
FPRINT            0.121     2.240
FREAD             0.150     0.918
FRPOLY            9.200    10.750
PUZZLE-MOD        1.067     2.280
PUZZLE            1.133     1.090
STAK              0.450     0.315
TAK-MOD           0.317     0.650
TAK               0.517     0.650
TAKL              0.183     0.450
TAKR              0.075     0.180
TPRINT            0.196     4.330
TRAVERSE          5.467     9.830
TRIANG-MOD        9.433   279.200
TRIANG           11.450    29.110
---End File ~/chart-lisps---
To: boyer@CLI.COM
Subject: Re: Benchmarking
In-Reply-To: Your message of Sat, 26 Oct 91 10:00:50 -0600.
<9110261500.AA24549@CLI.COM>
Date: Sat, 26 Oct 91 12:16:30 -0400
From: Rob_MacLachlan@LISP-PMAX2.SLISP.CS.CMU.EDU
It would be interesting to know more about the benchmark conditions. We
have been using our own version of the Gabriels with added declarations
and measurement improvements. Was (proclaim '(optimize ...)) used to set
the compilation policy? Python's default policy is fully safe. In
particular, all declarations are verified. This might explain the
pathology of triang-mod.
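For concreteness, here is a minimal sketch of setting such a policy with
PROCLAIM; the quality values below are illustrative assumptions, not the
settings behind either column of numbers:

;; Roughly the fully safe default: the compiler verifies declarations.
(proclaim '(optimize (speed 1) (safety 1)))

;; A typical "fast" benchmarking policy: trust declarations, favor speed.
(proclaim '(optimize (speed 3) (safety 0) (compilation-speed 0)))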
Are you measuring run-time or elapsed time? Although I concede that elapsed
time is a more reasonable system measurement, run-time is much more
reproducible.
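A minimal sketch of the distinction, using the standard Common Lisp clocks
(TIME-BOTH is just an illustrative name, not part of either system):

(defun time-both (thunk)
  "Return run time and elapsed (real) time of calling THUNK, in seconds."
  (let ((run0  (get-internal-run-time))
        (real0 (get-internal-real-time)))
    (funcall thunk)
    (values (/ (- (get-internal-run-time) run0)
               internal-time-units-per-second)
            (/ (- (get-internal-real-time) real0)
               internal-time-units-per-second))))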
Many of your ss2 numbers are more or less the same as our ss1+ numbers. I
don't know what the speed difference between the machines is. However, TAK
in particular is way off. We have changed to running the sub-second
benchmarks for many iterations (but scaling the result).
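A sketch of the iterate-and-scale approach, assuming the usual TAK
definition; the iteration count and the TIMED-TAK name are illustrative,
not what any particular harness uses:

(defun tak (x y z)
  (if (not (< y x))
      z
      (tak (tak (1- x) y z)
           (tak (1- y) z x)
           (tak (1- z) x y))))

(defun timed-tak (&optional (iterations 100))
  (let ((start (get-internal-run-time)))
    (dotimes (i iterations)
      (tak 18 12 6))
    ;; Scale the total run time back to seconds per iteration.
    (/ (float (- (get-internal-run-time) start))
       (* iterations internal-time-units-per-second))))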
Also, as I noted in the README, we haven't done much SPARC-specific tuning
yet, which is why, compared to Allegro on the SPARC, we are about the
same rather than somewhat faster.
I have appended our results for the ss1+ under three different compilation
policies. Fast-safe is the default. Could you point me at the version of
the Gabriels that you use? It seems that ours has gotten rather dated, as
we are missing the -mod versions.
Rob
P.S. Our version of the Gabriels is in
/afs/cs/project/clisp/benchmarks/gabriel/bmarks.lisp
Benchmark           Fast   Weak-Safe   Fast-Safe   (times in seconds)
List-Search        0.410       0.410       0.410
Struct-Search      0.330       0.330       0.330
Tak                0.101       0.105       0.105
Boyer              2.330       3.340       4.940
Browse             8.870      11.890      11.510
Ctak               0.617       0.615       0.618
Dderiv             1.200       1.030       1.110
Deriv              0.870       0.880       1.190
Destructive        0.410       0.426       0.430
Iterative-Div2     0.440       0.480       0.470
Recursive-Div2     0.710       0.660       0.670
Fft                0.430       0.658       0.670
Frpoly-Fixnum      0.790       0.910       1.260
Frpoly-Bignum      5.340       5.600       5.710
Frpoly-Float       3.800       4.100       4.070
Puzzle             1.040       1.720       1.730
Rtak               0.104       0.105       0.104
Takl               0.557       0.781       0.663
Stak               0.503       0.556       0.559
Init-Traverse      0.890       1.180       1.190
Run-Traverse      10.060      12.160      13.010
Triang            24.810      37.740      45.160