

    Date: Wed, 28 Feb 90 16:31 EST
    From: Barry Margolin <barmar@Think.COM>

        Date: Wed, 28 Feb 90 13:37 EST
        From: Jeffrey Mark Siskind <Qobi@ZERMATT.LCS.MIT.EDU>

        I think that you missed my point. I know what CPU time is and I know why it is
        reported and why people use it. I just reiterate my point that personally I
        find elapsed wall clock time to be the most appropriate measure of system
        performance. I know that it can vary depending on many factors such as system
        load, network load, and paging quirks. BUT ALL OF THESE AFFECT HOW LONG I MUST
        WAIT TO GET MY RESULT. And that is all that counts to me. 

    If you're trying to optimize a program, you need to be more precise than
    this.  When you're trying to figure out which part of the program needs
    to be optimized you don't want random factors to steer you the wrong
    way.  You want repeatable results that have some relationship to the
    program you're timing.

Ah yes. One of the uses of benchmarking is to guide optimization of a program.
For that use you need repeatable results and CPU time may be the right thing
to use. But I am not using benchmarking for that purpose. I am using it to
compare different platforms for running the same program, which I take as
a fixed given. (Remember that is how this whole conversation started.) For
that use repeatability is less of an issue than response time. To wit: say
I am comparing two platforms X and Y running program P. Perhaps
CPU-time(P,X) < CPU-time(P,Y) maybe even by a lot, although
Elapsed-time(P,X) > Elapsed-time(P,Y), perhaps also by a lot. This may happen,
because the two platforms measure different things via CPU time (factoring
out or not factoring out different kinds of operating system overhead) or
because while platform X is very efficient at MOVES and ADDS, it does really
poorly on paging, I/O, and context switching. So which machine will you buy?
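That divergence is easy to demonstrate. Here is a minimal sketch in modern
Python (chosen purely for brevity; the original discussion predates it), where
a sleep stands in for paging, I/O, and context-switch stalls. The function
names and the 0.5-second stall are illustrative assumptions, not anything from
the original benchmark:

```python
import time

def compute(n):
    # Pure computation: CPU time and elapsed time track each other.
    total = 0
    for i in range(n):
        total += i * i
    return total

def stall(seconds):
    # Stand-in for paging/I/O/context switching: elapsed time grows,
    # but the process accrues almost no CPU time while blocked.
    time.sleep(seconds)

cpu0, wall0 = time.process_time(), time.perf_counter()
compute(1_000_000)
stall(0.5)
cpu1, wall1 = time.process_time(), time.perf_counter()

cpu_time = cpu1 - cpu0   # what a "CPU time" benchmark reports
elapsed = wall1 - wall0  # what the user actually waits for
```

On a machine that stalls heavily, `elapsed` can exceed `cpu_time` by a large
margin, which is exactly the case where the two metrics rank platforms
differently.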

This whole conversation started because I posted a benchmark. The sole reason
I chose that benchmark was because it was fairly indicative of a typical
program that I run on my machine daily as part of my research. By definition
it incorporates my programming style and its quirks and thus is biased to
benchmarking those performance-affecting features that I use as part of my style
and not those that I don't use. Different people and projects will have
different benchmarks which will exercise different aspects of a machine's
performance and assign them different weights. Coming up with general numbers
was not my objective. I wanted to determine how different platforms
performed for my programming style. For that purpose, elapsed-time is more
appropriate.