Function Cells -- the history
- To: Padget@Utah-20.ARPA
- Subject: Function Cells -- the history
- From: JonL.pa@PARC-MAXC.ARPA
- Date: Tue, 27 Sep 83 19:16:00 EDT
- Cc: Lisp-Forum@MIT-MC.ARPA
There are two parts to your question:
1) who first converted atoms (i.e. LITATOMs or SYMBOLs) from list
structure with weird pointer in the first CAR, to standard block
record structure, and
2) who first differentiated between function context and argument
context when interpreting an atom.
No doubt, there is an element of "why" in your question too.
RE point 1:
Peter Deutsch's answer is quite correct as far as I know -- BBN Lisp
(predecessor to InterLisp) in the mid 1960s. The arguments I heard in
later years for justification had more to do with the paging behaviour
of compiled code than with anything else. It was *not* (emphasize!) for
the purpose of implementing point 2.
Interestingly, VAX/NIL had a modified "structure" for SYMBOLs; it had
been noted in MACSYMA that big systems could easily contain 2000 or
more symbols, only a small fraction of which utilised both the function
cell and the value cell. So, for space saving, a kludge was perpetrated in
which every symbol had one cell which was a pointer to either a function
cell, or a value cell, or a more standard block record of all requisite cells.
The reason this was "space saving" was that the implementations of
Local-vs-Dynamic and Function-vs-Value are independent, and thus, in
fullness, one needs 4 (four!) value cells. Speed of access isn't an issue,
since compiled code normally links thru the value cell itself (or function
cell, if you must) rather than thru the symbol header; also, the extra cost
for this "hair" is very small in terms of VAX instructions.
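The kludge can be sketched roughly as follows -- a toy model in Python, for illustration only; the class and slot names here are my own inventions, not VAX/NIL's. Each symbol carries a single indirection slot, which starts out pointing at a lone cell and is upgraded to a full block record only when a second kind of cell is actually needed:

```python
class Cell:
    """A single mutable cell, used as either a function cell or a value cell."""
    def __init__(self, kind, contents):
        self.kind = kind            # "function" or "value"
        self.contents = contents

class Block:
    """The 'standard block record of all requisite cells'."""
    def __init__(self):
        self.cells = {}             # kind -> Cell

class Symbol:
    """A symbol with one indirection slot: it points at a lone function
    cell, a lone value cell, or a full Block -- so the common case of a
    symbol using only one kind of cell costs just one cell."""
    def __init__(self, name):
        self.name = name
        self.slot = None

    def put(self, kind, contents):
        if self.slot is None:                       # first cell: no Block yet
            self.slot = Cell(kind, contents)
        elif isinstance(self.slot, Cell):
            if self.slot.kind == kind:
                self.slot.contents = contents
            else:                                   # second kind: upgrade
                block = Block()
                block.cells[self.slot.kind] = self.slot
                block.cells[kind] = Cell(kind, contents)
                self.slot = block
        else:                                       # already a full Block
            self.slot.cells[kind] = Cell(kind, contents)

    def get(self, kind):
        if isinstance(self.slot, Cell):
            return self.slot.contents if self.slot.kind == kind else None
        if isinstance(self.slot, Block):
            cell = self.slot.cells.get(kind)
            return cell.contents if cell else None
        return None
```

A symbol used only as a variable (or only as a function name) thus never pays for the block record at all, which is the point of the space saving.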
RE point 2:
Did any of the replies to you so far note that this differentiation was
part of Lisp 1.5? See page 18 of the "LISP 1.5 Programmer's Manual", where
the classic scheme is described. I did not "invent" this scheme. Neither
did MacLisp, nor its predecessor PDP6 Lisp, originate it. It was inherited
directly from Lisp 1.5 and CTSS Lisp [remember CTSS? the "Compatible Time
Sharing System" on a greatly modified IBM 7090? Most of the Lisp world
remembers only ITS -- "Incompatible Timesharing System" -- and now it too
is passing away. Sic Transit Gloria Mundi (sic)]
I venture the opinion that the underlying reason why "uniform" treatment
of identifiers is ever thought to be desirable is that it somewhat simplifies
the coding and conceptual complexity of the interpreter/system. There is a
strong series of arguments *against* this "optimisation", made by users of
said systems. Dan Bobrow offered the most compelling of these in his
answer: essentially stated, there is a general expectation that the function
name space is global, unless otherwise stated, and that the variable name
space is local, unless otherwise stated. Once this distinction is admitted,
there is little point in trying to force both meanings into a single "value"
cell -- the conceptual simplicity has been forfeited, and since "function
cell" and "value cell" machinery are isomorphic, there is virtually no
coding advantage remaining.
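To make that isomorphism concrete, here is a minimal sketch (my own, in Python, not taken from any actual Lisp implementation) of the two-namespace lookup: the namespaces live in separate tables, yet one lookup routine serves both -- which is the sense in which the "function cell" and "value cell" machinery are isomorphic:

```python
def lookup(name, local_env, global_env):
    """One lookup routine serves both namespaces: search the innermost
    local binding first, then fall back to the global table."""
    for frame in reversed(local_env):
        if name in frame:
            return frame[name]
    return global_env[name]

# Separate tables for the two namespaces of a two-namespace Lisp.
global_functions = {"car": lambda pair: pair[0]}
global_values = {}

# Suppose that inside some lambda body, "car" has been let-bound as a
# *variable*; the function namespace is untouched, so a call to car
# still finds the global function.
local_values = [{"car": "just a datum"}]
local_functions = []
```

Looking up `car` as a function still yields the global definition, while looking it up as a value yields the local binding -- the Bobrow "general expectation" falls out of using two tables rather than one.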
FUNCALL/APPLY* have been around "since antiquity" to make the exceptions
for function syntax, but VAX/NIL and current CommonLisp permit local
re-binding of function cells *as an option*; the Bobrow "general
expectation" is not violated in these systems. More recently (i.e., post-1977),
declarations and/or constructs have been introduced into Interlisp,
LispMachineLisp, VAX/NIL, and CommonLisp to supply the second exception
noted above, namely symbolic constants.
I have frequently offered the observation that this distinction is also
rooted in mathematical notation, where journals generally distinguish
function names from variable names by any of a number of devices:
slanted vs. upright type-face, boldface versus regular, or
conventional subsets of letters (e.g., "f" and "G" are clearly function
names, whereas "x" and "Y" are variable names). Curiously, I've heard
that there is a reversal of some of these conventions between the American
and the European publishers. Can anyone verify this?
There seems also to be some confusion about the implementation status of
this "distinction", that variables are by default local. As far as I know,
this *is* true of all major Lisp implementations I've worked with. What
isn't true is that the interpreters of said systems "do the right thing"
(as opposed to the compiled code environment, which defines the semantics).
Almost without exception, the interpreters implement **only** fluid
bindings. VAX/NIL had an interpreter that correctly implemented both
local and fluid variables (see my paper in the 1982 Lisp Conference for a
description of a non-consing scheme to do it). Many toy interpreters have
been built which implement the distinction, but do so by consing up the usual
old alist environment, or some equivalent thereof. Another interesting
interpreter which implements local variables was done prior to 1977 by the
group at IBM Research Center in Yorktown Heights (in Lisp/370), in which
the interpretation of a lambda application did a mini-compilation in order to
get *exactly* the same stack frame as would be built had the code been truly
compiled; the claim was that you wouldn't see differences between compiled
and interpreted code in Lisp/370.
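For readers who haven't been bitten by this, the following toy (my own construction, not any of the interpreters named above) shows how an interpreter that implements *only* fluid bindings disagrees with lexical, compiled-code semantics on a free variable:

```python
# One shared binding stack, the way an alist/shallow-binding interpreter
# treats *every* variable -- i.e., only fluid bindings.
dynamic_env = []

def dyn_lookup(name):
    """Find the most recent fluid binding of name."""
    for frame in reversed(dynamic_env):
        if name in frame:
            return frame[name]
    raise NameError(name)

def make_adder_dynamic(n):
    # The returned function ignores the n it was created with and looks
    # n up fluidly at call time -- the fluid-only interpreter's behaviour.
    return lambda x: x + dyn_lookup("n")

def make_adder_lexical(n):
    # Python's own closures are lexical: n is captured at creation time,
    # matching compiled-code semantics.
    return lambda x: x + n

dynamic_env.append({"n": 5})
add5_dyn = make_adder_dynamic(5)
dynamic_env.pop()

add5_lex = make_adder_lexical(5)

# Now call both with a *different* fluid binding of n in force:
dynamic_env.append({"n": 100})
# add5_dyn(1)  -> 101  (sees the caller's fluid binding of n)
# add5_lex(1)  -> 6    (sees the binding in force at definition time)
```

The non-consing scheme mentioned above, and the Lisp/370 mini-compilation, are two different ways of making the interpreter produce the second answer rather than the first.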
More in defense of my conjecture above:
Bobrow tells me that the first implementation of 940Lisp (the "root
stock" of BBN lisp) was with the function/variable "optimisation"
mentioned above. On a small computer, cramped for space, with
a one-man implementation team . . . but the user community
balked, and convinced the implementor to re-support the
mathematically grounded distinction between function context
and value context. Major Lisp systems today have long passed
the point where they are one-man projects, but happily haven't
reached OS syndrome yet (i.e., a system so big, with so many ad-hoc
extensions and patches, that no one even knows a complete subset
of people who collectively understand the system).
Several of the SCHEME-lovers have privately expressed the opinion to me
that the value of SCHEME is that it doesn't have a user community (except
for learners and tinkerers), and thus *can* be kept super-simple and "clean".
David Warren, a principal implementor of PROLOG, also expressed (privately)
such an opinion, and will rue the day that the Japanese turn Prolog into
some kind of operating system for 5th Generation hardware.
There will always be a difference of viewpoint between those who want to
"muck around" with program semantics and small, "clean" systems, and
those who want a full complement of "power tools" for doing AI research
and applications program development.