cost of multiple-values, ML, destructuring
- To: Bob Kerns <rwk@WORLD.STD.COM>
- Subject: cost of multiple-values, ML, destructuring
- From: Rob_MacLachlan@LISP-PMAX1.SLISP.CS.CMU.EDU
- Date: Wed, 06 Jan 93 16:23:40 EST
- Cc: "Ben A. Hyde" <gensym!bhyde@HARVARD.HARVARD.EDU>, info-dylan@CAMBRIDGE.APPLE.COM
- In-reply-to: Your message of Wed, 06 Jan 93 13:20:34 -0500. <199301061820.AA18496@world.std.com>
To: "Ben A. Hyde" <gensym!bhyde@harvard.harvard.edu>
Subject: Re: define and set! with multiple-values
Date: Wed, 06 Jan 93 13:20:34 -0500
From: Bob Kerns <rwk@world.std.com>
[...]
> While I'm convinced that the tax is only a very few instructions on
> those forms I notice that in CMU Lisp on my sparc:
> (defun f (x) (g x)) -> 80 bytes
> (defun f (x) (declare (special x)) (g x)) -> 312 bytes
> (defun f (x) (declare (special x) (values fixnum)) (g x)) -> 173 bytes
Special variables are much more expensive than multiple values.
I think that what Ben was pointing out here is that in CMU CL, the cost of
doing a special binding around an unknown-values tail-recursive (TR) call is
about 50% greater than the cost of doing the binding around a single-value
call, which is another way of saying that the cost of an MV-PROG1 can be as
great as 20 instructions.
[There is an implicit MV-PROG1, since the special binding must be undone
after the call returns.]
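To make that concrete, here is a sketch (my illustration, not the compiler's
actual expansion) of why the binding forces the extra work; G is the
undefined function from Ben's example:

  ;; With the special declaration, the call to G is no longer a tail
  ;; call: the binding of X must be popped after G returns, so all of
  ;; G's values have to be preserved across the unbind; that is the
  ;; implicit MV-PROG1.
  (defun f (x)
    (declare (special x))
    (g x))

  ;; behaves roughly like the portable expansion

  (defun f (%x)
    (progv '(x) (list %x)                ; establish the special binding
      (locally (declare (special x))
        (g x))))                         ; PROGV pops the binding after G
                                         ; returns, passing on G's values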
With the current CMU CL default calling convention, a
call-for-1-value...return-1-value doesn't have any cycles clearly
attributable to supporting multiple values. A single-value return does in
effect always indicate the return value count, but does so by incrementing
the return PC, and the return PC always has to be incremented in any case
(to strip off the tag).
There is considerable complexity in the Python compiler related to multiple
values, but much of it goes into optimizing the case where the number of
values is not exactly one. Use is made of the symmetry between call
arguments and return values: the first few values are passed in registers
at negligible extra cost. (APPLY f l)
is compiled as (MULTIPLE-VALUE-CALL f (VALUES-LIST l)), which results in
some strange optimization opportunities. It is an open question whether
this effort would have been better spent on more efficient support for
stack or heap allocated return-value tuples (which might be more widely
useful).
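The APPLY-to-MULTIPLE-VALUE-CALL equivalence above is easy to check in
portable CL; for example:

  ;; Both forms call + with the arguments 1, 2 and 3:
  (apply #'+ '(1 2 3))                               ; => 6
  (multiple-value-call #'+ (values-list '(1 2 3)))   ; => 6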
I believe that ML maintains the symmetry between call & return by saying
that all functions take one *argument* which is "conceptually" a vector
that is normally destructured into separate values. In SML/NJ there are
optimizations that avoid actually doing the allocation both for call and
return, but I believe that return value vectors (or records) are always
allocated for calls to separately compiled functions. This is relatively
unnoticeable, since all stack frames are also heap allocated.
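A rough CL rendering of that model (an illustrative sketch, not ML code):
every function takes one tuple and returns one, and both get destructured.

  (defun divmod (args)                        ; one "tuple" in ...
    (destructuring-bind (n d) args
      (list (floor n d) (mod n d))))          ; ... one "tuple" out

  (destructuring-bind (q r) (divmod '(7 2))   ; caller destructures the result
    (list q r))                               ; => (3 1)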
The ML approach seems more elegant than Common Lisp multiple values, but I
really can't comment on the attainable efficiency or ease of use. ML has
convenient & comprehensive object-destructuring facilities integrated into
its declaration mechanisms. Dylan fixes CL's rather hideous syntax for
receiving multiple values by moving values destructuring into BIND.
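For reference, the CL form being complained about (standard, runnable CL);
in Dylan the same values destructuring can appear in an ordinary BIND
clause rather than a separate special operator:

  (multiple-value-bind (quotient remainder) (truncate 7 2)
    (list quotient remainder))                ; => (3 1)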
Note that SYNTAX-CASE in the pending macro proposal provides for
destructuring of DYLAN code, which is the primary use of DESTRUCTURING-BIND
in CL.
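The kind of use meant here, as a minimal sketch in standard CL (MY-DOTIMES
is just an example name): DESTRUCTURING-BIND taking apart a subform of the
code inside a macro definition.

  (defmacro my-dotimes (spec &body body)
    (destructuring-bind (var count) spec      ; destructure code, not data
      `(do ((,var 0 (1+ ,var)))
           ((>= ,var ,count))
         ,@body)))

  ;; (my-dotimes (i 3) (print i))   prints 0, 1 and 2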
Rob