
Re: dylan scientific/multimedia

We are planning to develop a highly optimized Dylan implementation at CMU,
with strong emphasis on the run-time speed of delivered applications.  Our
experience with CMU Common Lisp has given us a pretty good idea of the
problems of efficient numeric coding in Lisp-like type systems.

By far the biggest performance problem is with floating point
representation.  When a float must be tagged, significant memory access and
memory allocation overheads are incurred.

Use of a tagged (32 bits minus tag) float representation (SHORT-FLOAT) avoids
the memory allocation overheads, but supporting a float format in software
tends to have unpleasant numeric and/or efficiency implications.  Much
serious numeric work requires 64-bit floats anyway.

Given a compiler technology that supports the natural (untagged) float
representation, it is generally possible to come close to the performance
(2x or better) of C (or presumably FORTRAN) on RISC processors.  Getting
this close often requires various unnatural steps:
 -- A great deal of attention to program organization and compiler
    diagnostics to ensure that good numeric representations are used.
 -- Hand optimization of multi-dimensional array access loops.
 -- Use of non-portable high-performance binary I/O facilities.

What would be necessary in Dylan to get as good numeric performance as is
available in some Common Lisp implementations?
 -- Specialized array element types are definitely necessary.
 -- Type declarations for module (global) variables would be very useful.
    This is really only necessary for mutable variables.
 -- There also needs to be either a predefined class variable for each
    specialized array element type, or a general class generator which
    produces the specialized class when presented with an element type.
    These classes would be used in slot and variable type declarations.
 -- As long as SET! is basically never used on local variables, it will
    *not* be necessary to allow the type of local variable contents to be
    declared.  This implies that iterations will always be written using
    BIND-METHODS or equivalent forms of recursion.  (Note that the types in
    BIND and method specializers only specify the type of the initial value.)
 -- Efficient small-integer (i.e. array index) arithmetic must be possible.
    The difficulty of proving that small-integer arithmetic doesn't result
    in overflow is one of the major causes of inefficiency in Common Lisp
    programs (even non-numeric ones.)

Dynamic method dispatch definitely causes problems for efficient
representation of arguments and return values.  It may only be possible to
do efficient float argument passing once a generic function has been sealed.

Overall, it seems that Dylan is not fundamentally any worse than Common Lisp
for numeric efficiency.  Good numeric performance will require the
availability of some new array and number classes.  These can probably be
regarded as an additional library layer.  (Though I am doubtful of the
usefulness of untyped multi-dimensional arrays.  I guess arrays are
technically part of the level-1 library anyway.)

There are some fundamental problems which Dylan shares with Lisp relating
to run-time checking of function applicability and run-time function/method
redefinition.  However, Dylan's sealing capabilities provide a way to
indicate when this dynamism is no longer necessary, allowing efficient
compilation.  Common Lisp implementations can offer "block compilation" as
an extension, but the language is defined to be dynamic.  I think it is
much better to generally discourage or forbid run-time function
redefinition, but to allow that the program development environment will
have a way to efficiently support redefinition (at the cost of run-time
performance.)