
INTEGRABLE PROCEDURES / LIGHTWEIGHT PROCESSES



(1) INTEGRABLE PROCEDURES

  a) What happens if one compiles a recursive procedure
  defined with define-integrable?

  b) Case statements which do a hard-wired dispatch, like
  the top-level EVALUATE of the meta-circular evaluator
  (T Manual, p.110), could, in true functional style, be
  written, e.g.,
      ((CASE (CAR EXP)
         ((QUOTE) EVALUATE-QUOTE)
         ((IF)    EVALUATE-IF)
         ...)
       EXP ENV)
  Will this generate equivalent code, or does it lose if,
  say, EVALUATE-QUOTE is integrable?
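
  To make both questions concrete: for a), I mean something like the
  made-up FACT below; for b), the hard-wired style I have in mind is
  roughly the following (a sketch, not the manual's actual code),
  where each handler is called directly, so an integrable
  EVALUATE-QUOTE could be open-coded at the call site:

      (DEFINE-INTEGRABLE (FACT N)
        (IF (= N 0) 1 (* N (FACT (- N 1)))))

      (DEFINE (EVALUATE EXP ENV)
        (CASE (CAR EXP)
          ((QUOTE) (EVALUATE-QUOTE EXP ENV))
          ((IF)    (EVALUATE-IF EXP ENV))
          ...))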

(2) LIGHTWEIGHT PROCESSES IN T 3

      I have been giving some thought to what sorts of capabilities
I would like to see in a T with lightweight processes.  One of the
themes informing these suggestions is that no one really knows at this
point what wins are to be had from lightweight multiprocessing, so
one would not want to prematurely exclude any capabilities.

  The types of behavior one might want to be able to specify are:
 a) Allocate the cpu to this process at a given frequency
 (e.g., the process handles some aspect of the user interface which has
 to be serviced at a certain rate to create an illusion of continuity).
 b) Run this (these) process(es) whenever there is nothing more
 pressing to do.
 c) Allocate k times as much processing power to process A as to
 process B.

b) and c) could be implemented with a priority mechanism; a) is a bit
trickier. If a block on input in any lightweight process blocks all
of them, however, mechanisms like a) are essential for achieving,
say, behavior where the machine will always service a keyboard
request within an nth of a second but otherwise will keep busy doing
nice things. Setting a process-switch rate might accomplish the desired
result, depending on how it was coordinated with the priorities, but,
harking back to my ignorance theme, I think it is important to avoid
the danger of providing "primitives" which are so high-level as
to box users into limited and possibly dead-end paradigms. Here is
a suggestion for a set of low-level primitives general enough to
support wide-ranging experiments:

    (PROCESS:CREATE exp env) -> process-object
      Creates a process stack, etc., for running exp in env, and
      returns a descriptor. The process does not begin running.

    (PROCESS:RUN process-object quantum) -> undefined
      Runs the process for up to quantum of time. If quantum is NIL,
      runs it to completion. Blocks the caller until done.

    (PROCESS:RELINQUISH-CPU) -> undefined
      Permits a process to return to whoever called PROCESS:RUN
      on it before its quantum has elapsed. If PROCESS:RUN is called
      again on the same process, execution resumes from the statement
      after the PROCESS:RELINQUISH-CPU. (A no-op in the T process.)

    (PROCESS:KILL process-object) -> undefined
      GC's the process stack, etc. (A no-op or == (exit) in the
      T process.)

    (THE-PROCESS) -> process-object
      Returns the process-object of the process that executed the
      call. (What for? Not sure. Permits suicide ...)

  I think these primitives can support straightforward implementation
of standard ideas like equi- or weighted time slicing (with or without
a settable switch rate), recursive time slicing, AM-style agendas, and
who knows what more exotic beasts.
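
  For instance, simple equi-time-slicing over a list of process
objects might look roughly like the sketch below. (ROUND-ROBIN and
SLICE are made-up names, and PROCESS:DONE? is a predicate I am
assuming would exist or could be added, since nothing above tells a
scheduler that a process has finished.) Weighted slicing would just
hand a larger quantum, or more turns per cycle, to the heavier
processes.

    ;; Give every live process one quantum per cycle until all are done.
    (DEFINE (ROUND-ROBIN PROCESSES QUANTUM)
      (COND ((NULL? PROCESSES) 'DONE)
            (ELSE (ROUND-ROBIN (SLICE PROCESSES QUANTUM) QUANTUM))))

    ;; Run each process for one quantum; keep only the ones still alive.
    ;; PROCESS:DONE? is assumed, not one of the primitives above.
    (DEFINE (SLICE PROCESSES QUANTUM)
      (COND ((NULL? PROCESSES) '())
            (ELSE
             (PROCESS:RUN (CAR PROCESSES) QUANTUM)
             (IF (PROCESS:DONE? (CAR PROCESSES))
                 (SLICE (CDR PROCESSES) QUANTUM)
                 (CONS (CAR PROCESSES)
                       (SLICE (CDR PROCESSES) QUANTUM))))))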

      One final suggestion:  what about a (COPY-ENV env1) -> env2
capability, which creates a virtual copy of env1 with the property
that any SETs of variables in env2 which are non-local get treated as
local?  This would allow processes to run in identical environments
without communicating, when that is desired, and with demand-driven
space overhead.
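
  For instance (a sketch: WORKER and ENV are just placeholders, and
COPY-ENV is only the capability suggested above), two processes could
be created over virtual copies of the same environment; both see the
same bindings initially, but a SET in either copy makes a local
binding rather than mutating ENV, so neither observes the other's
assignments:

    (DEFINE P1 (PROCESS:CREATE '(WORKER 1) (COPY-ENV ENV)))
    (DEFINE P2 (PROCESS:CREATE '(WORKER 2) (COPY-ENV ENV)))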
-------