Re: 4.2

     > The term source code is used to refer to the {\word objects\/} 
     > constructed
     > by
     > {\function compile-file\/},
     > and additional {\word objects\/} constructed by
     > macroexpansion during {\function compile-file\/}.
     > %[rpg: I think the source code is whatever the representation is in
     > %whatever a file is. I think this use of READ as a semantic crutch is
     > %unnecessary.]

     I disagree.  "Objects constructed by COMPILE-FILE" is not specific
     enough and could be taken to refer to objects that COMPILE-FILE
     constructs and uses internally during the process of compilation, such
     as intermediate code or register mappings.  I think the original 
     definition makes it clear what objects we're talking about.

I agree that the proposed rewording doesn't capture the right information.
Maybe this:


The term source code is used to refer to two things:

* the {\word objects\/} constructed by {\function compile-file\/}
corresponding to the objects that READ would have produced on the
same input

* additional {\word objects\/} constructed by macroexpansion during
{\function compile-file\/}.
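
As a concrete illustration of the two categories (MY-INCF is a hypothetical macro, not from the draft):

```lisp
;; A hypothetical user macro.
(defmacro my-incf (place)
  `(setq ,place (+ ,place 1)))

;; When COMPILE-FILE processes a file containing
;;   (defun f (x) (my-incf x))
;; the "source code" includes both the list structure corresponding
;; to what READ would have produced from that text, and the
;; additional list
;;   (SETQ X (+ X 1))
;; constructed by macroexpansion during compilation.
```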


     Re the requirement that COMPILE-FILE uses READ.  This is indeed stated
     explicitly in CLtL (on page 69) and also in the
     EVAL-WHEN-NON-TOP-LEVEL proposal we voted in at the last meeting.

On page 69 it states:

``The EVAL-WHEN construct may be more precisely understood in terms of
a model of how the compiler processes forms in a file to be compiled.
Successive forms are read from the file using the function READ.''
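The model in that passage can be sketched roughly as follows (a hypothetical skeleton; PROCESS-TOPLEVEL-FORM stands in for whatever the compiler does with each form):

```lisp
;; A sketch of the CLtL "model" of COMPILE-FILE processing, not a
;; specification of any implementation.
(defun compile-file-model (pathname)
  (with-open-file (stream pathname)
    (loop for form = (read stream nil stream) ; stream itself as EOF marker
          until (eq form stream)
          do (process-toplevel-form form))))
```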

I take the term ``model'' seriously, and I agree that the wording
should be ``...as if read using READ...'', but I am not happy with
requiring READ to actually be used. It is an open question whether
the standard should identify particular functions that are actually
used during compilation.

The reason I object to explicit use of READ is that there are
legitimate Common Lisp environments in which there is no such thing as
character-level source code (printed representation). Such an
environment would use abstract syntax trees to represent source code,
and only during printing would anything like an ASCII representation
be available. These syntax trees can be grouped into files, and
COMPILE-FILE makes sense on them, but it is senseless to translate
them into ASCII just so that READ can be used.  (Sadly, READ is
specified to take printed representation.)

     ... and also in the EVAL-WHEN-NON-TOP-LEVEL proposal we voted in at the
     last meeting.

Sounds like you pulled the wool over our eyes.

     > %[rpg: There is a complicated issue: Can the compiler assume that the
     > %resulting code in a compile-file situation will be run in the same
     > %Lisp? The same implementation? The same computer?  The same type of
     > %computer?

     We've made a conscious decision in the compiler committee not to
     address problems relating to cross-compilation.  Kent claims that it's
     impossible to do a fully general cross-compiler

This has nothing to do with cross-compilation, per se. The issue is
that we need to state something about where the compiler can assume
the compiled code will run. Since we are outlining things the compiler
can assume, this seems like an obvious thing to discuss. The two choices
about what to say are:

1. It is assumed the code will be loaded immediately into the very
image in which the compiler was just run.

2. It is assumed the code will be loaded into a fresh copy of that
image.

That is, we state that the user can invoke COMPILE-FILE, and that some
behavior of that function is specified. But we never say what you can
do with the output. Well, we can LOAD it somewhere and it will run.
Where? I think we have to state something about a place that is
guaranteed to work.
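Choice 1, for example, guarantees only this much (the file name is invented for illustration):

```lisp
;; Compile "foo.lisp" and load the result into the very image in
;; which the compiler just ran -- the one place where, under
;; choice 1, the compiled code is guaranteed to work.
(load (compile-file "foo.lisp"))
```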

     > %[rpg: the interpreter can assume the same thing, right? That is, a
     > %valid Common Lisp has be one in all code is compile-filed by a
     > %separate program and loaded and executed in the apparent Common Lisp
     > %image.]

     I don't understand this remark.  Yes, all of these things also apply
     to the evaluator.  The difference is that the evaluator is effectively
     allowed to do both compilation and execution at the same time, so the
     time at which things are allowed to happen is not so important.  This
     whole section is trying to address the question of what things happen
     during compilation and what things happen during execution, when those
     two times aren't necessarily the same.

The point is that if there is a statement in the compiler section
about CL semantics, and the same statement is true of the interpreter,
then the statement is about the language and belongs somewhere besides
the compiler section.
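The distinction between the two times is exactly what EVAL-WHEN exposes; a minimal illustration, using the CLtL-era situation names:

```lisp
;; Using the CLtL situation names COMPILE, LOAD, and EVAL.
(eval-when (compile)
  (format t "~&Happens while COMPILE-FILE processes the file."))

(eval-when (load eval)
  (format t "~&Happens when the compiled file is loaded and run."))
```

For the interpreter, both situations are effectively simultaneous, which is why the question of timing only becomes interesting in the compiler section.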

     > % The following paragraph from issue COMPILE-ENVIRONMENT-CONSISTENCY
     > %    seems likely to change:
     > \itemitem{\bull}  
     > The compiler can assume that type definitions made with 
     > {\function deftype\/}
     >   or 
     > {\function defstruct\/} in the compile time environment will retain the same 
     >   definition in the run time environment.  It can also assume that
     >   a class defined by 
     > {\function defclass\/} in the compile time environment will
     >   be defined in the run time environment in such a way as to have
     >   the same {\word superclasses\/} and compatible metaclass.  
     > %[rpg: compatible metaclass?]

     I'm confused.  Where did this language come from?  It's not the
     current language in the COMPILE-ENVIRONMENT-CONSISTENCY proposal, and
     it's not the language of Gregor's proposed amendment either.  Does
     this represent the new position of the CLOS people on this issue (that
     I haven't been told about)?

Since this is exactly your message of April 25 except for the word
``compatible'', you must be asking about that word. First, note that
my comment says ``compatible metaclass?'' whilst your original said
``same ... metaclass.'' My question is whether we want to restrict the
metaclass to be the same, or whether it should be allowed to be some
simpler one.
Here is an *analogy*. The type INTEGER is defined in Common Lisp.  A
fancy compiler might be able to correctly determine that where
INTEGERs are used only fixnums are needed. Therefore, the compiler can
assume that the metaclass of INTEGER can be specialized to the one for
FIXNUM. Rather than restrict the smarts to subclasses of metaclasses,
I conjectured a useful term might be ``compatible'', which the CLOS
crowd threw around for a while.
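In code, the analogy is the familiar narrowing of a declared INTEGER to FIXNUM (a sketch; the declaration bounds and function name are invented):

```lisp
(defun sum-below (n)
  (declare (type (integer 0 1000) n))
  ;; A sufficiently smart compiler can deduce that I and the running
  ;; sum both fit in a FIXNUM, and so specialize the general INTEGER
  ;; arithmetic to FIXNUM arithmetic -- the "simpler" representation.
  (let ((sum 0))
    (dotimes (i n sum)
      (incf sum i))))
```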

[note: the reason subclass of metaclass is not sufficient is that
the object might have fewer operations done on it so that the
metaclass can support fewer operations (such as not supporting
slot-value), and so it's actually something like a superclass of
the metaclass that is simpler and which the compiler can assume.
Hence, the term ``compatible.'']