Common Lisp Benchmarks Using AutoClass II

With an XL1200 available at our local sales office, I ran our benchmark
on it, and I present below the results along with timings for the same
benchmark on other configurations.  This benchmark is meaningful to us, in
contrast to the Gabriel Benchmarks, which have only limited utility -- they
reflect few real-world workloads.

Our benchmark is based upon AutoClass II, an unsupervised Bayesian 
classification system for independent data, which is currently being used
by several researchers on real-world problems.  It makes heavy use of
floating-point operations and vector manipulation.  The code was refined
to minimize garbage collection: the basic search loop is static with respect
to space allocation, so excess garbage can be generated only by the
system's implementation of the math operations.
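The allocation discipline described above can be sketched as follows.  This
is illustrative only -- the names are not from AutoClass -- but it shows the
idea: preallocate work vectors once and update them in place, so the search
loop itself conses nothing.

```lisp
;; Illustrative only: reuse a preallocated vector inside the loop so
;; each iteration allocates no fresh storage; only the Lisp's own math
;; routines can then generate garbage.
(defvar *scratch* (make-array 256 :element-type 'single-float
                                  :initial-element 0.0f0))

(defun update-scores (data)
  ;; Writes into *SCRATCH* in place rather than building a new vector.
  (dotimes (i (min (length data) (length *scratch*)))
    (setf (aref *scratch* i) (* 2.0f0 (aref data i))))
  *scratch*)
```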

The system was developed on the Symbolics and hence has no type declarations.
In comparing performance with Franz CL and Lucid CL, it should be noted that 
adding type declarations would probably improve their timings somewhat.  
Also, the version of Lucid CL which was available to us is not their latest
version -- however, I think the Lucid timings are still generally
representative.
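For reference, the kind of type declarations that tend to help float-heavy
code under the stock-hardware Lisps looks like the following.  This is a
hypothetical example, not AutoClass code:

```lisp
;; Hypothetical example: full type declarations let a stock-hardware
;; CL compiler open-code single-float arithmetic and array access.
(defun dot-product (a b)
  (declare (optimize (speed 3) (safety 0))
           (type (simple-array single-float (*)) a b))
  (let ((sum 0.0f0))
    (declare (type single-float sum))
    (dotimes (i (length a) sum)
      (incf sum (* (aref a i) (aref b i))))))
```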

Comparing benchmarks between Symbolics operating systems and UN*X operating
systems is not straightforward, since the Common Lisp function TIME returns
elapsed time on the Symbolics, but cpu time for Franz and Lucid on UN*X
systems (Franz CL reports the garbage collection portion separately).  So for 
the Franz CL timings, the times reported as cpu times are the total cpu time 
minus the garbage collection time.  The Symbolics cpu benchmarks were run 
under Genera 8.0 with PROCESS:WITHOUT-PREEMPTION.  The Symbolics elapsed 
benchmarks include garbage collection AND other non-essential system processes
such as network processing, so they do not purely reflect the benchmark itself.
Franz and Lucid benchmarks on Sun platforms were compiled with optimizations
of speed 3 and safety 0.  My benchmarking technique was to conduct four 
runs, throw out the worst one, and average the remaining three.  Time is 
measured in seconds.  %-cpu and %-elap are relative to the XL1200.

   cpu  %-cpu  elapsed  %-elap  configuration
  5.30  100.0     5.60   100.0  Symbolics XL1200 (FPA; 16mb RAM; Genera 8.0)
 13.86   38.2                   Symbolics 3675 (FPA; 14mb RAM; Genera 8.0)
 14.35   36.9                   Symbolics XL400 (FPA; 16mb RAM; Genera 8.0)
 14.53   36.5    17.26    32.4  Symbolics 3653 (FPA; 16mb RAM; Genera 8.0)
                 22.78    24.6  Symbolics UX400S (16mb RAM; Genera 7.4) - Sun
                                   SparcStation 370 (SunOS 4.0)
                 23.04    24.3  Texas Instruments Explorer II (32mb RAM;
                                   Release 3.2)
                 36.15    15.5  Symbolics MacIvory (FPA; 20mb RAM;
                                   Genera 7.3I)
 37.46   14.1    50.51    11.1  Franz Allegro CL 3.1.beta.22 - Sun
                                   SparcStation 1 (16mb RAM; SunOS 4.0.3c)
                 64.07     8.7  Lucid CL 3.0.0.beta3 - Sun 3/280
                                   (16mb RAM; SunOS 4.0)
                 82.56     6.8  Lucid CL 3.0.0.beta3 - Sun 3/60
                                   (16mb RAM; SunOS 4.0)
106.35    5.0   142.05     3.9  Franz Allegro CL 3.1.beta.22 - Sun 3/60
                                   (16mb RAM; SunOS 4.0.3c)
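The four-run averaging procedure described above can be sketched as a small
Common Lisp harness.  This is a hypothetical driver, not the one actually
used for these timings:

```lisp
;; Hypothetical timing harness: run THUNK four times, discard the
;; worst (largest) time, and average the remaining three.
(defun benchmark (thunk &key (runs 4) (drop 1))
  (let ((times '()))
    (dotimes (i runs)
      (let ((start (get-internal-run-time)))
        (funcall thunk)
        (push (/ (- (get-internal-run-time) start)
                 (float internal-time-units-per-second))
              times)))
    ;; Sort ascending, drop the DROP slowest runs, average the rest.
    (let ((kept (butlast (sort times #'<) drop)))
      (/ (reduce #'+ kept) (length kept)))))
```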

Will Taylor       Sterling Software/NASA Ames Research Center
MS 244-17, Moffett Field, CA 94035  taylor@pluto.arc.nasa.gov