
RISC vs Symbolics (was: Poor Sun timings, as competition...)



    Date: Mon, 13 Aug 90 16:50 EDT
    From: Reti@Riverside.SCRC.Symbolics.COM

    The hardware that Genera runs on was designed to make certain combined
    operations efficient by doing them in parallel (e.g., type checking +
    arithmetic, checking for forwarding-pointer/oldspace reference + memory
    reference, matching up required and supplied arguments for Common Lisp
    functions, hashing for method dispatch, bounds checking + array reference,
    etc.).  If you implement those exact same semantic operations in a typical
    RISC machine instruction set, they cannot occur in parallel and therefore
    run slower when measured in cycles.  To come out even, the cycle time of
    the machine would correspondingly have to be some number of times smaller
    than the current Ivory cycle time (65 ns).
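
To make the serialization concrete, here is a rough C sketch of a generic
fixnum add when the tag check, the addition, and the overflow check must
each be separate instructions.  The 2-bit low-tag layout and helper names
are illustrative only - not Ivory's encoding or any particular vendor's:

    #include <stdint.h>

    /* Illustrative 2-bit low tag: 00 marks a fixnum, so a fixnum's value
       lives in the upper 30 bits, pre-shifted left by two. */
    #define TAG_MASK   0x3u
    #define TAG_FIXNUM 0x0u

    typedef uint32_t lispobj;

    extern lispobj generic_add_slow(lispobj x, lispobj y); /* bignums etc. */

    lispobj generic_add(lispobj x, lispobj y)
    {
        /* Step 1, serialized before the add: tag dispatch. */
        if (((x | y) & TAG_MASK) != TAG_FIXNUM)
            return generic_add_slow(x, y);

        /* Step 2: the add itself; tags are 00, so the sum stays tagged. */
        uint32_t sum = x + y;

        /* Step 3, serialized after the add: fixnum overflow check.
           Overflow iff the operands agree in sign but the sum differs. */
        if ((~(x ^ y) & (x ^ sum)) & 0x80000000u)
            return generic_add_slow(x, y);

        return sum;
    }

On a Lisp machine the tag dispatch and the overflow trap ride along with
the add in the same cycle; here they are extra instructions sitting on the
critical path.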

The better conventional-processor Common Lisp vendors have made the most
of what little hardware support is available, though - e.g., the handful
of tagged arithmetic instructions on the SPARC.  And theoretically, one
might even coax per-page read protection into supporting incremental
garbage collection.  (See the algorithm of Appel, Ellis, and Li.)
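
The Appel-Ellis-Li idea, roughly: read-protect the pages holding unscanned
objects; the first access traps, the handler scans that page (forwarding
any from-space pointers), unprotects it, and lets the mutator resume.  A
minimal POSIX C sketch, with the actual scanning left as a placeholder:

    #include <signal.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>
    #include <unistd.h>

    extern void scan_and_forward_page(void *page); /* placeholder: forwards
                                                      from-space pointers on
                                                      the page to to-space */
    static long page_size;

    static void gc_fault_handler(int sig, siginfo_t *si, void *ctx)
    {
        (void)sig; (void)ctx;
        void *page =
            (void *)((uintptr_t)si->si_addr & ~(uintptr_t)(page_size - 1));
        scan_and_forward_page(page);   /* make every pointer on it valid */
        mprotect(page, page_size, PROT_READ | PROT_WRITE);
    }                                  /* faulting access then re-executes */

    void gc_install_read_barrier(void *unscanned, size_t len)
    {
        struct sigaction sa;
        page_size = sysconf(_SC_PAGESIZE);
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = SA_SIGINFO;
        sa.sa_sigaction = gc_fault_handler;
        sigaction(SIGSEGV, &sa, NULL);
        mprotect(unscanned, len, PROT_NONE); /* trap the first touch of
                                                each unscanned page */
    }

The mutator never sees a from-space pointer yet runs concurrently with the
collector - the same incremental invariant the Lisp machines enforce with
a hardware read barrier.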

    Also, the storage of tags is a problem: if you want 32-bit integers and
    floating-point numbers, but also want tags, you must either have a word
    size bigger than 32 bits (Ivory has 40) or do some kludging around to
    simulate one, which will no doubt hurt the performance of most, if not
    all, memory references.

Again, the SPARC offers a smidgeon of support for 2-bit tags.  Do we
need full 32-bit integers when we have bignums anyway?
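
For reference, the encoding the SPARC's tagged instructions (TADDcc and
friends) assume is the usual 2-bit low tag: 00 marks a fixnum, leaving 30
bits of signed value - about half a billion either way before spilling
into a bignum, which covers most loop counters and array indices.  A C
sketch of the representation (the names are mine):

    #include <stdint.h>

    /* Tag 00 = fixnum: 30 bits of signed value, about +/- 5.4e8.
       Anything wider overflows into a heap-allocated bignum. */
    #define FIXNUM_MIN (-(1L << 29))
    #define FIXNUM_MAX ((1L << 29) - 1)

    typedef int32_t lispobj;

    static int is_fixnum(lispobj x) { return (x & 0x3) == 0; }

    static lispobj make_fixnum(int32_t n)      /* caller checks the range */
    {
        return (lispobj)((uint32_t)n << 2);    /* unsigned shift avoids UB */
    }

    static int32_t fixnum_value(lispobj x)
    {
        return x >> 2;             /* arithmetic shift restores the sign */
    }

    /* Since both tags are 00, tagged addition is ordinary addition; the
       SPARC's TADDcc does the add and checks both operands' low bits in
       a single instruction. */
    static lispobj fixnum_add(lispobj x, lispobj y) { return x + y; }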

    If you are willing to turn off (partly or wholly) type checking, array
    bounds checking, etc., you could gain even more speed, but in my opinion
    this is a bad idea: these features help catch problems nearer their
    source, and often mean the difference between difficulty and
    impossibility in quickly tracking down hard bugs in complex, layered
    software systems.

Compiler writers argue, however, that a powerful compiler can optimize
away redundant, repetitive type checks while still keeping run-time type
errors from slipping through.  Optimized code may be less debuggable,
though.
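
A concrete instance of what such a compiler does: given a declaration, one
check on a vector's element type can subsume a check per element.  A C
sketch of the transformation - the predicates are stand-ins for what the
compiler would emit, and the tagging follows the earlier sketch:

    #include <stddef.h>
    #include <stdint.h>

    typedef int32_t lispobj;

    extern int  is_fixnum(lispobj x);            /* stand-in predicates */
    extern int  is_fixnum_vector(lispobj v);
    extern void type_error(const char *want, lispobj got);
    extern size_t   vec_len(lispobj v);
    extern lispobj *vec_data(lispobj v);

    /* Unoptimized: one type check per element, every iteration. */
    int32_t sum_naive(lispobj v)
    {
        int32_t sum = 0;
        for (size_t i = 0; i < vec_len(v); i++) {
            lispobj e = vec_data(v)[i];
            if (!is_fixnum(e))
                type_error("FIXNUM", e);
            sum += e >> 2;                       /* untag */
        }
        return sum;
    }

    /* Optimized: one check against the vector's declared element type
       subsumes them all, yet a type error still cannot slip through,
       because such a vector can hold nothing but fixnums. */
    int32_t sum_hoisted(lispobj v)
    {
        if (!is_fixnum_vector(v))
            type_error("(VECTOR FIXNUM)", v);
        lispobj *data = vec_data(v);
        size_t n = vec_len(v);
        int32_t sum = 0;
        for (size_t i = 0; i < n; i++)
            sum += data[i] >> 2;                 /* untag */
        return sum;
    }

The check is moved, not removed - which is exactly the distinction the
previous paragraph turns on.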

	Lawrence G. Mayka
	AT&T Bell Laboratories
	lgm@iexist.att.com

Standard disclaimer.