
Re: CLOS Speed



    Date: Fri, 24 Jun 88 10:36:58 EDT
    From: Brad.Myers@a.gp.cs.cmu.edu

    Our project is using the February 3rd, 1988 version of PCL (CLOS).  I
    am trying to convince our group to continue to use CLOS, but the problem
    is that the speed of slot access and method call is so slow that we are
    thinking of using a home-brewed object system.  Our measurements show
    that using CLOS is between 3 to 5 times slower than using structs and
    procedure calls on CMU Common Lisp on an IBM-RT.

This message contains some added comments which may be useful for
interpreting the performance estimates I gave in my last message.  As
was the case in my last message, I am speaking primarily about the
performance you can expect from PCL.

There are many different ways of looking at the difference in
performance between using CLOS and using ordinary functions and
defstruct.  I will only look at three comparisons between the two.

* Using a generic function versus using an ordinary function.

Generic functions and ordinary functions do different things.  While it
is interesting to compare the performance of a generic function call
(what you call a method call) with that of an ordinary function call,
that isn't really the interesting number.  What we really want to
compare is using a generic function against using an ordinary function
plus a typecase which has the same effect.

For example:

(defclass c1 () ())
(defclass c2 () ())

(defun test-generic-function (x)
  (let ((c1 (make-instance 'c1)))
    (time
      (dotimes (i x)
	(test-generic-function-internal c1)))))

(defmethod test-generic-function-internal ((c1 c1)) 'c1)
(defmethod test-generic-function-internal ((c2 c2)) 'c2)


(defstruct (s1 (:constructor make-s1)) dummy)
(defstruct (s2 (:constructor make-s2)) dummy)

(defun test-typecase (x)
  (let ((s1 (make-s1)))
    (time
      (dotimes (i x)
	(test-typecase-internal s1)))))

(defun test-typecase-internal (x)
  (typecase x
    (s1 's1)
    (s2 's2)
    (otherwise (error "Don't know what to do with ~S." x))))
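
To get numbers like the ones in the table below, the two test functions
would presumably be called with the iteration count as the argument,
along these lines:

(test-generic-function 100000)
(test-typecase 100000)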

In two of the three implementations that I just tried,
test-generic-function is faster than test-typecase; in the third the
two are essentially the same.  Some numbers were:

100000 tries       generic-function    typecase

Implementation 1    396                2048
Implementation 2    184                 180
Implementation 3    560                 795


In each of these implementations, I expect that future work will improve
the generic function lookup time.  I expect the numbers to become:

Implementation 1    300                2048
Implementation 2     80                 180
Implementation 3    400                 795

(I am quite deliberately omitting the units and the names of the
implementations.  What is important here are the ratios.)

This shows that by comparing a generic function call to what it really
corresponds to, you get a much more favorable comparison.

But remember, in places where an ordinary function with no typecase is
all you really need (now and in the future) you should use that.  Don't
use generic functions if you don't need them.  When you do need a
generic function, it is likely to be the fastest way to do what you
want.


* Using defstruct accessors versus using slot-value.

As I said in my previous message, it is reasonable to expect a use of
slot-value to take about twice as long as an access using a highly
optimized defstruct accessor.  A highly optimized defstruct accessor is
basically a single memory read (or aref).  A slot access is basically
two memory reads.  The extra memory read corresponds to the indirection
required to support multiple inheritance.
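
To make that concrete, here is a sketch in the same style as the timing
loops above (the class, struct, and slot names are just illustrative):

(defclass point-c () ((x :initform 0)))
(defstruct point-s (x 0))

(defun test-slot-value (n)
  (let ((p (make-instance 'point-c)))
    (time
      (dotimes (i n)
        ;; roughly two memory reads per access (extra indirection)
        (slot-value p 'x)))))

(defun test-struct-accessor (n)
  (let ((p (make-point-s)))
    (time
      (dotimes (i n)
        ;; typically inlined by the compiler to a single memory read
        (point-s-x p)))))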


* Using defstruct accessors versus using accessor methods.

As mentioned above, in most implementations of defstruct, the compiler
inlines the code for defstruct accessors, so they become a single memory
read.  This provides high performance, but since the actual call to the
accessor has been compiled away, it is impossible to change the accessor
without recompiling all the code that uses it.  The defstruct accessors
are essentially macros.

In CLOS, the generic functions whose accessor methods are generated by
defclass are just like other generic functions.  That is, the compiler
doesn't compile out calls to them; at run time, method lookup happens to
select the appropriate method to run.  This offers much more
flexibility; for example, it makes it possible to define methods which
specialize those accessors.
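
As a sketch of the kind of flexibility this buys (the class and slot
names here are made up for illustration), a reader generated by defclass
can be specialized like any other generic function:

(defclass box ()
  ((contents :initform nil :accessor box-contents)))

(defclass noisy-box (box) ())

;; box-contents is an ordinary generic function, so a subclass can wrap
;; extra behavior around the automatically generated reader method.
(defmethod box-contents :around ((b noisy-box))
  (let ((value (call-next-method)))
    (format t "~&Read contents: ~S~%" value)
    value))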

But there are optimizations which make CLOS accessors faster than fully
general generic function calls.  This is why the projected performance
for an accessor call is approximately 1.3 function calls.



I hope I have made it clearer what some of the real comparisons between
CLOS and defun+typecase+defstruct are.  Of course the real issues can be
much more complex, as are any attempts at real performance
benchmarking.  The point I want to stress is that when you need the
functionality CLOS provides, it is likely to be the highest performance
mechanism for providing it.  As Jim mentioned in his message, these
projected performance numbers are quite competitive with other object
systems.  Building a system of your own which provided better
performance would probably take a lot of work.
-------