I have set up a directory >Qobi>Public> on Zermatt.LCS.MIT.EDU.
In that directory are several files:
Benchmark-Styled.lisp --- the original source of the benchmark
                          this file contains font codes so it can only
                          be read with Genera
Benchmark.lisp        --- the source with the font codes stripped
Benchmark.bin         --- the binary for 36xx series machines
Benchmark.text        --- the results that I have compiled so far
Please try first to FTP those files yourself. I do not want to be inundated
with requests to send these files to people. Only if you are not
successful in FTPing these files should you let me know, and I will see
what I can do.

One important caveat. It is not my profession to run benchmarks.
I did this just for my own information. This has two important
implications. First, there is no documentation --- either of
results or of how to run the program. What you see in this message
and in the above files is what you get. I have never written up
a formal document of the benchmark results and never intend to.
Sorry. Second, I have no plans, and am not able, to run these
benchmarks on other machine or software configurations, so please
don't ask me to. If you want more benchmark results, then run them
yourself. If you do, please post the results to the network, or
at least send them back to me, again just for my info.

The program is an early version of an experiment that I did
which learns word meanings from correlated linguistic and visual
input. The theory and operation of the program are described in a
paper to be published and presented at the 1990 conference of the
Association for Computational Linguistics, titled ``Acquiring Core Meanings of Words,
Represented as Jackendoff-Style Conceptual Structures, from Correlated
Streams of Linguistic and Non-Linguistic Input.'' You are referred to
that paper for further details.

To run the program do the following:
(in-package 'word)
(compile-file "Benchmark")
(load "Benchmark")

GROW-UP! is the only top-level function you need to call (with no arguments).
This should produce the following as output:
ENTERED: *[V] (BE ?0 (AT ?1))
SLID: *[V] (GO ?0 (PATH (FROM ?1) (TO ?2)))
FROM: *[P] (AT ?0)
TO: *[P] (AT ?0)
If it does, then the program is working. If it doesn't, then for some reason
the program is not portable. In that case, PLEASE DON'T BOTHER ME by asking
me to fix it. FIX IT YOURSELF, but please inform me of the necessary changes.

My experimental procedure was as follows. For each machine/software
configuration I booted a fresh copy of the Lisp image. Then I would issue
the command:
(in-package 'word)
Then I would either issue the command:
(proclaim '(optimize (speed 3) (safety 0) (compilation-speed 0)))
if I was testing production mode, or the command:
(proclaim '(optimize (speed 0) (safety 3) (compilation-speed 3)))
if I was testing development mode.
I realize that there are more gradations on some compilers, so this
step may need to be modified. My intention was to get both the fastest
and the safest settings.
Then I would issue the following command:
(progn (time (compile-file "Benchmark")) 
       (time (compile-file "Benchmark"))
       (time (compile-file "Benchmark")))
which would compile the file three times. The first time would typically
be slow while the compiler was loaded into the image and the working
set was paged in. I took the best of those three times as the
compilation time for that configuration.
Then I would boot a fresh copy of the Lisp image again and would issue
the commands:
(in-package 'word)
(load "Benchmark")
(progn (time (grow-up!)) (time (grow-up!)) (time (grow-up!)))
Again, I would take the best of the three times as the execution time for
that configuration.
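The best-of-three logic above can be sketched in portable Common Lisp.
This is only an illustration of the procedure, not part of Benchmark.lisp;
BEST-OF-THREE and its THUNK argument are hypothetical names I am
introducing here.

```lisp
;; Hedged sketch: time a zero-argument function three times using
;; GET-INTERNAL-REAL-TIME and keep the minimum elapsed time in seconds,
;; mirroring the best-of-three procedure described above.
(defun best-of-three (thunk)
  (loop repeat 3
        minimize (let ((start (get-internal-real-time)))
                   (funcall thunk)
                   (/ (- (get-internal-real-time) start)
                      internal-time-units-per-second))))

;; Usage, assuming Benchmark has been loaded:
;;   (best-of-three #'grow-up!)
```

The TIME macro's output format varies by implementation (as the CPU/elapsed
labels in the table below show), which is why a portable timer like this
would use GET-INTERNAL-REAL-TIME directly.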
The results are summarized at the end of this message.
The machine configurations were:

Symbolics 3630, 4096K words physical memory, Genera 7.2
Symbolics XL400, 4096K words physical memory, Genera 7.4
NeXT Allegro
Sun 4/260 Lucid 3.0.1
Sun 4/260 Allegro 3.1.beta.22
SparcStation1 Lucid 3.0.1
SparcStation1 Allegro 3.1.beta.22
SparcStation330 Lucid 3.0.1
SparcStation330 Allegro 3.1.beta.22

The Unix machines had no other user processes running at the same time.
I don't know the main memory sizes for the Unix machines (I don't
know how to get that info from a Unix box) nor do I know what disks any
of the machines had. Sorry. Also, I didn't have time to fill in the
table. Sorry. I have subsequently run the benchmark under AKCL on an HP
Unix box and under Harlequin on a SparcStation330, but since I didn't
rigorously follow the aforementioned procedure, I have not included
the results in the table. I take NO RESPONSIBILITY for the accuracy
of these results and no other responsibility whatsoever for publishing
them. I just hope that it helps people and encourages manufacturers
and implementers to produce higher performance Lisp implementations.

Best of three tries
Times in seconds rounded to nearest second
Development is default speed, safety, compilation-speed
Production is (optimize (speed 3) (safety 0) (compilation-speed 0))
CPU times are
  Symbolics: WITHOUT-INTERRUPT times
  Lucid: Total Run time
  Franz: cpu time (total) user
Elapsed times are
  Symbolics: times with interrupts enabled
  Lucid: Elapsed Real Time
  Franz: real time

                      |            cpu            |          elapsed
Compilation           | development |  production | development | production
3630                  |    108      |             |    128      |
XL400                 |    114      |             |    130      |
NeXT-Franz            |    118      |     98      |    120      |   103
Sun4/260-Lucid        |             |             |             |
Sun4/260-Franz        |             |             |             |
SparcStation1-Lucid   |             |             |             |
SparcStation1-Franz   |     64      |     53      |     66      |    56
SparcStation330-Lucid |     10      |     39      |     15      |    42
SparcStation330-Franz |     50      |     42      |     55      |    44

                      |            cpu            |          elapsed
Execution             | development |  production | development | production
3630                  |    283      |             |    336      |
XL400                 |    187      |             |    227      |
NeXT-Franz            |   1398      |   1069      |   1448      |  1108
Sun4/260-Lucid        |             |             |             |
Sun4/260-Franz        |             |             |             |
SparcStation1-Lucid   |             |             |             |
SparcStation1-Franz   |    222      |    147      |    224      |   148
SparcStation330-Lucid |    132      |     81      |    133      |    81
SparcStation330-Franz |    168      |    108      |    169      |   108