CLIM mail archive
Performance analysis of Clim incremental redisplay
Date: Thu, 7 Nov 1991 16:41 EST
From: "Bruce R. Miller" <miller@cam.nist.gov>
Date: Thu, 7 Nov 1991 13:24 EST
From: Peter Karp <pkarp@ai.sri.com>
I have lately been analyzing the performance of a CLIM application
under Lucid CLIM 1.0-Beta on a Sparc 1+, and I thought members of this
list might be interested in the results.
...
Graph size:                     100   200   400   800
-----------------------------------------------------
Incremental redisplay:            8    26    97   351
Updating-output full display:    21    57   180   595
Simple full display:             15    32    86   234
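The superlinear growth is visible directly in the table: each doubling of the graph size multiplies the time by 2^k for some exponent k, which can be recovered from successive ratios. A quick sketch (in Python, purely doing arithmetic on the figures above):

```python
import math

# Timing data from the table above (Lucid CLIM 1.0-Beta, Sparc 1+).
times = {
    "incremental redisplay": [8, 26, 97, 351],
    "updating-output full display": [21, 57, 180, 595],
    "simple full display": [15, 32, 86, 234],
}

def scaling_exponents(t):
    # Each doubling of the graph size multiplies the time by 2**k;
    # recover k from the ratio of successive measurements.
    return [round(math.log2(t[i + 1] / t[i]), 2) for i in range(len(t) - 1)]

for name, t in times.items():
    print(name, scaling_exponents(t))
```

The incremental-redisplay exponents come out around 1.7-1.9, i.e. close to quadratic, while simple full display grows more slowly.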
I will note in closing that I have been able to speed up the simple
full display by a factor of 3-4 by hand-optimizing my code and using
Lucid's optimizing compiler. Presumably these optimizations will not
affect incremental redisplay, since during incremental redisplay my
code executes only for the single changed node; the majority of the
computation must be internal to CLIM.
Peter Karp
My point is that an important idea of incremental redisplay is to
minimize the amount of drawing that is done. If drawing took an
infinitesimal amount of time, there would be no reason to use
incremental redisplay other than to avoid flicker. Perhaps it is best
suited as a `quick' fix for quickly written drawing code? As you
optimize the drawing, you shift the tradeoff point.
Yes, this is the intention of incremental redisplay: it provides a
basic, general facility for doing redisplay, but as the display grows
more complicated, the programmer may decide to use more specialized
homegrown redisplay code. The current problem with incremental
redisplay in CLIM is that the tradeoff point comes too early. I have
two changes in mind that will move the tradeoff point later without
changing the basic implementation of redisplay. Another, more
complicated scheme could move it still further. I do plan to make the
first two changes at some point, but don't know if I will ever get to
reimplementing redisplay altogether.
The non-linearity is interesting. Determining what has changed should
be linear. Does the non-linearity come from determining what, if
anything, is overlapped by a changed display object?
Yes, this is exactly where the non-linearity comes.
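To make the cost concrete, here is a hypothetical sketch (in Python, not CLIM code) of the naive pairwise check: finding which records a changed record's bounding rectangle overlaps means scanning every other record, and doing that for each changed record is where the quadratic behavior comes from.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    """Bounding rectangle of a hypothetical output record."""
    left: float
    top: float
    right: float
    bottom: float

def overlaps(a: Box, b: Box) -> bool:
    # Two rectangles intersect when they overlap on both axes.
    return (a.left < b.right and b.left < a.right
            and a.top < b.bottom and b.top < a.bottom)

def touched_by(changed: Box, records: list) -> list:
    # Linear scan per changed record; over n changed records this is
    # the pairwise comparison that makes redisplay superlinear.
    return [r for r in records if overlaps(changed, r)]
```

The fancy spatial data structures mentioned below are meant to prune this scan, though as noted they can carry their own overheads.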
Is this the context for the mentions I've seen of fancy data structures
in CLIM? I.e., that they (more) quickly determine what objects are at <x,y>?
The "fancy" data structures in CLIM actually turn an O(n^2) algorithm
into O(n^3) for redisplay. Changing the convention used to store
output-record coordinates could reduce the complexity to O(n^2). This
is one of the changes I plan to make, but the cost is that some other
operations (such as nested FORMATTING-TABLE) get slower; I believe that
the operations that get slowed down are, on the whole, far less common
than the operations that get sped up.
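One way to picture why the coordinate-storage convention matters (a toy illustration, not CLIM's actual output-record protocol): if each record stores its position relative to its parent, moving a whole subtree is a single offset update, but computing a record's absolute position, as table formatting must do repeatedly, requires walking up to the root. Storing absolute coordinates reverses that tradeoff.

```python
class Record:
    """Toy output record storing its position relative to its parent."""

    def __init__(self, dx, dy, parent=None):
        self.dx, self.dy = dx, dy   # offset from the parent's origin
        self.parent = parent

    def move(self, dx, dy):
        # O(1): descendants follow automatically, since their stored
        # offsets are unchanged.
        self.dx += dx
        self.dy += dy

    def absolute_position(self):
        # O(depth): sums offsets up the chain; this is the kind of
        # operation (e.g. nested table formatting) that gets slower
        # under the relative convention.
        x, y = self.dx, self.dy
        p = self.parent
        while p is not None:
            x, y = x + p.dx, y + p.dy
            p = p.parent
        return (x, y)
```

Under the absolute convention, move would have to visit every descendant, while position lookups would be constant-time, which is the tradeoff between redisplay and nested formatting described above.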