T and Franz and the "Lexical" question
- To: Olin.Shivers@CMU-CS-H.ARPA
- Subject: T and Franz and the "Lexical" question
- From: JonL.pa@PARC-MAXC.ARPA
- Date: Wed, 8 Feb 84 09:38:00 GMT
- Cc: Franz-Friends@Berkeley
- Original-date: 8 Feb 84 01:38 PST
Re your message of 4 Feb 84 04:18:13 EST
My apologies for what seems like a late entry in this discussion; but
my original reply to you was sent shortly after receipt of your note
on Saturday, and doesn't seem to have made it through the various mail
forwardings. I rather concur with the insightful comments about the place of
T "under the Lisp sun" made during this interval by Fateman and Fahlman, but
you may find the additional comments herein enlightening.
Your most recent comparison of T and Franz was so non-objective that
it shouldn't require serious comment (even by your own admission, you were
"beginning to froth at the mouth").
But you implied a certain familiarity with "lexical scoping", and those
allegations arising therefrom deserve rebuttal (precisely because so many
newcomers to the Lisp field seem to share the same misconceptions).
1) Lexical scoping and compatibility between the Interpreter/Compiler are
totally independent dimensions. Lisp/370, done at IBM Research Labs in
the mid 70's had dynamic scoping, *** but default lexical scoping ***,
with a perfectly harmonious interpreter; Lisp/370 was a "deep bound"
implementation, but see my paper in the 1982 Lisp Conference concerning
an interpreter for a shallow-bound Lisp which achieves similar harmony
(again - a Lisp that admits dynamic scoping).
2) Lexical scoping and compiler optimization have, again, almost nothing to
do with each other. The T compiler is a derivative of the early S1 Lisp
compiler by Guy Steele, which introduced the TN-packing ideas into Lisp
compilers for the first time I know of. Needless to say, that Lisp permits
dynamic scoping.
3) You praise T's "naming conventions [which] were reworked without worrying
about compatibility with previous Lisps ...". Most AI researchers are
concerned about sharing code, as well as ideas; there's every good reason
for them to view this gratuitous renaming of the 25-year-old Lisp primitives
as a bothersome self-indulgence on the part of the authors. At least the
changes made in Common Lisp were sanctioned by nearly a dozen different,
independent, implementation groups.
In fact, both the subject of the paper mentioned in (1), and the Lisp mentioned
in (2) were versions of NIL -- which has dynamic scoping, despite your
comparison of it to T. It also has lexical scoping. Where you, and so many
others, appear to be confused is in the belief that having any lexical scoping
rules out dynamic scoping *** in the same system ***. This exclusive nature
-- no dynamic scoping allowed -- is the rule for SCHEME (and I dare say T,
if it is a true "son of SCHEME"). When the mixed implementations are correctly
done, there is no interference between the lexical variables (declared
implicitly or otherwise) and the dynamic ones. [some writers use "fluid" or
"special" for dynamic; some also use "local" for "lexical"].
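The peaceful coexistence claimed above can be sketched in a present-day
notation (Python here, purely as a latter-day illustration; the `fluid_let`
helper and the `_dynamic` table are inventions of this sketch, standing in
for a Lisp's special-variable binding machinery):

```python
import contextlib

# A hypothetical "fluid" (dynamic) binding mechanism layered on top of
# Python's ordinary lexical scoping, to show the two coexisting in one
# system without interfering.
_dynamic = {"indent": 0}        # dynamically scoped cells

@contextlib.contextmanager
def fluid_let(name, value):
    """Dynamically rebind `name` for the extent of the with-block."""
    old = _dynamic[name]
    _dynamic[name] = value
    try:
        yield
    finally:
        _dynamic[name] = old    # unwound on exit, like a special binding

def make_adder(n):              # n is lexical: captured by the closure
    return lambda x: x + n

add5 = make_adder(5)

def report(x):
    # reads the *dynamic* value of indent at call time, and the
    # *lexical* n frozen into add5 at its creation time
    return " " * _dynamic["indent"] + str(add5(x))

print(report(1))                # "6"  -- global dynamic value in effect
with fluid_let("indent", 2):
    print(report(1))            # "  6" -- dynamic rebinding visible here
print(report(1))                # "6"  -- restored after the dynamic extent
```

The dynamic rebinding of `indent` never disturbs the lexical capture of `n`,
and vice versa, which is the sense in which the two scoping disciplines do
not interfere.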
In fact, most programs written in MacLisp and Franz are lexically scoped.
Some aren't. But the lexically-scoped ones are indeed compiled by the
MacLisp compiler (at least -- and I think the later varieties of Franz
compiler too) in essentially the same way as any restrictive lexically-scoped
language would have them compiled. The signal difference is the treatment
of FUNARGs, and a careful reading of the Sussman/Steele papers will confirm
this.
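The FUNARG case -- a function that captures its lexical environment and
outlives the activation that created it -- is easy to exhibit in a modern
notation (Python again, as an illustrative stand-in; `make_counter` is a
name invented for this sketch):

```python
def make_counter():
    count = 0                   # lexical variable, private to this activation
    def bump():
        nonlocal count
        count += 1
        return count
    return bump                 # an "upward FUNARG": the closure outlives
                                # the call to make_counter that created it

c1 = make_counter()
c2 = make_counter()
print(c1())                     # 1
print(c1())                     # 2
print(c2())                     # 1  -- each FUNARG carries its own
                                #       captured environment
```

It is exactly this case -- not ordinary lexically-scoped code, which the
MacLisp compiler already handles conventionally -- that separates the
SCHEME treatment from the traditional one.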
The reason why so many other Lispers do not share your enthusiasm for the
SCHEME approach is that they have dropped FUNARGs as an interesting thing
to worry about; hence there isn't much enthusiasm for a scheme (pun intended)
that "does them right". On the other hand, I'll admit that the functional
programming types are more enthused [note well: "functional programming"
does *** not *** mean merely that the basic module is a function. Regarding
the "globality" of function names in Lisp, a long discussion ensued last fall
on the distribution list LISP-FORUM@MIT-MC, and the net upshot was that it
was a trivial matter of style preference for most Lisp users.]
Rather, modern directions for Lisp have been inspired by SmallTalk, in which
"smart" objects are first-class citizens; and a plethora of primitives have
been added to make using them fun and easy [ Flavors in the Lisp Machine,
EXTENDs in MacLisp, and LOOPS in Interlisp-D]. While one can emulate "objects"
with FUNARGs, so too can one emulate FUNARGs with "objects", and this latter
approach is surely more general. A modest amount of compiler development and
systems support work would probably put both emulations in the same ball park
as far as speed goes; but the syntax for using FUNARGs as "objects" is too
restrictive [there's a long history in the MIT LispMachine project about
the first implementations of "flavors" as funargs, and how it wasn't quite
right that way].
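Both directions of the emulation can be sketched briefly (Python, again as
an illustrative stand-in; `make_cell` and `Adder` are names invented for
this sketch):

```python
# Emulating an "object" with a FUNARG: a closure that dispatches on
# messages, its captured variable serving as private instance state.
def make_cell(value):
    def dispatch(msg, *args):
        nonlocal value
        if msg == "get":
            return value
        if msg == "set":
            value = args[0]
            return value
        raise ValueError("unknown message: " + msg)
    return dispatch

# Emulating a FUNARG with an "object": instance state plays the role of
# the captured environment, and application is just a method call.
class Adder:
    def __init__(self, n):
        self.n = n              # the "closed-over" variable
    def __call__(self, x):
        return x + self.n

cell = make_cell(10)
cell("set", 42)
print(cell("get"))              # 42
print(Adder(5)(3))              # 8
```

The object form is the more general of the two in the sense given above:
its "environment" is an ordinary data structure that any number of
operations can share and inspect, not something reachable only through
one function's application.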
If it's not too much to ask, I'd be curious to know just which of
your professors at CMU declined to consider AI projects written in Franz.
Although I've heard a lot of complaints against Franz's lack of a good
debugging environment, I've never heard any reports that it was significantly
slower than the other VAX alternatives.