Re: CMU CL 15c performance
Oh, I forgot to mention the type inference issues...
BTW, do you have any comments on why I had to declare types explicitly
in parts of the code that I have marked "INFERENCEn"? I think doing
this kind of inference in the compiler (if legal) would simplify
writing numerical code greatly.
;;; INFERENCE1:
;;;
;;; The variable assumes the range 0 ... (- (ARRAY-DIMENSION # #) 1) and
;;; is not assigned to; the variable is also used as an array subscript
Python will infer that a DOTIMES (or other loop with a fixnum count and an
inequality exit test) ranges only over fixnums. However, due to
limitations of the current flow analysis algorithm, it is confused when
the body contains a nested loop. Ideally it should also infer that a
bounds check is unnecessary, but I haven't even thought about how to do that.
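In the meantime, an explicit declaration on the outer index works around the
nested-loop confusion. A minimal sketch (the function and array names here are
hypothetical, not from the original code):

```lisp
;;; Workaround sketch: when the DOTIMES body contains a nested loop,
;;; declare the outer index FIXNUM explicitly rather than relying on
;;; the compiler's range inference.
(defun sum-rows (a b)
  (declare (type (simple-array single-float (* *)) a)
           (type (simple-array single-float (*)) b))
  (dotimes (i (array-dimension a 0))
    (declare (fixnum i))            ; explicit, pending better flow analysis
    (let ((sum 0.0))
      (declare (single-float sum))
      (dotimes (j (array-dimension a 1))
        (setq sum (+ sum (aref a i j))))
      (setf (aref b i) sum)))
  b)
```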
;;; INFERENCE2:
;;;
;;; The variable is initialized as a single-float, and all assignments to it
;;; are also obviously of type single-float (e.g., assignments of the form
;;; (setq var (min var single-float-value)))
Hmmmn. I would guess that MIN did work. However, there is a problem with
arithmetic operators like +, *, etc. The initial result type assumption
for these operators is NUMBER. In order to realize that the value is
always a float, it would have to first postulate that the value is a float
and then check whether that assumption is consistent, in effect doing an
inductive proof.
I have thought of some ways to do this, but these changes are pending a
reimplementation of front-end flow analysis.
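Until that inductive inference exists, declaring the accumulator's type at its
binding is the practical fix; the arithmetic result types then follow. A
sketch (VSUM is a hypothetical example, not from the original code):

```lisp
;;; Without the SINGLE-FLOAT declaration on SUM, the result type of
;;; (+ SUM ...) is initially assumed to be NUMBER, so the accumulator
;;; cannot be kept unboxed.
(defun vsum (v)
  (declare (type (simple-array single-float (*)) v))
  (let ((sum 0.0))
    (declare (single-float sum))    ; breaks the circular inference problem
    (dotimes (i (length v))
      (setq sum (+ sum (aref v i))))
    sum))
```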
;;; INFERENCE3:
;;;
;;; The variable is used as an array subscript.
Dynamic type inference is a somewhat subtle thing. Python will infer that
the value is a FIXNUM, but only downstream of the array reference.
Sometimes this is enough to prove that a loop variable is a fixnum, and
sometimes not. In a dynamically typed language one can write:
(dotimes (i n)
  (if (typep i 'fixnum)
      (aref v i)
      (write-line "We got a big one!")))
Also, what are your thoughts on how/whether I can (or should be able
to) use polymorphic higher-order functions in numerical code
efficiently? At least for higher-order functions that have been
explicitly declared inline and are present in the same compilation
unit as the client, I would hope that any consing for return values
could be avoided.
Yes, it should work. One problem is that I believe there is currently a
bug which prevents functions defined in a block compilation from being
inline expanded within that block compilation. A temporary workaround
would be to place the definitions of the functions you plan to inline expand
before the start of the block. [Note that the (DECLAIM (INLINE fun)) must
come before the DEFUN.]
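A sketch of that workaround (FOLD-VECTOR and VECTOR-MAX are hypothetical
examples, not functions from your code):

```lisp
;;; The DECLAIM must precede the DEFUN for the definition to be
;;; recorded for inline expansion.
(declaim (inline fold-vector))
(defun fold-vector (f init v)
  (declare (type (simple-array single-float (*)) v))
  (let ((acc init))
    (dotimes (i (length v))
      (setq acc (funcall f acc (aref v i))))
    acc))

;;; A client compiled after the above can inline-expand FOLD-VECTOR,
;;; so the FUNCALL of the constant #'MAX becomes a direct call and
;;; the intermediate results need not cons.
(defun vector-max (v)
  (declare (type (simple-array single-float (*)) v))
  (fold-vector #'max (aref v 0) v))
```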
The idea of explicitly declaring polymorphic types is an interesting one,
and would probably fit Common Lisp better than an ML-like inference
scheme would. One semantic problem is the need to define what it means to be
"the same type" when you are substituting in a pattern.
In CL, the type of a value is an approximation of the value, and multiple
types can describe the same value, but some are more specific than others.
Even ignoring the MEMBER type, it is possible to construct a type specifier
for any real number which only that exact number belongs to.
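For instance, a bounded float type with equal inclusive bounds admits exactly
one value:

```lisp
;;; A type specifier that only the exact number 1.0 belongs to,
;;; without using MEMBER:
(typep 1.0 '(single-float 1.0 1.0))   ; => T
(typep 1.5 '(single-float 1.0 1.0))   ; => NIL
```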
In your example:
(declare (type (function (A single-float) A) f)
(A base)
(type float-image image)
(values A))
Would a call to (fold-float-image f 1.0 image) only be able to return 1.0?
More likely, you mean it can return any SINGLE-FLOAT.
Rob