Re: [not about] gc-by-area and host uptimes
Date: Wed, 15 Feb 89 14:05 EST
From: Qobi@ZERMATT.LCS.MIT.EDU (Jeffrey Mark Siskind)
Date: Tue, 14 Feb 89 12:52:37 CST
From: forbus@p.cs.uiuc.edu (Kenneth Forbus)
Yes, the debuggers on the generic lisps currently, well, to be blunt,
suck rotten eggs.
[...]
and for me are only behind in one narrow area...
I've heard some of the arguments in favor of custom Lisp hardware such
as firewalling, EGC and restart. Could someone from Symbolics please
enlighten us as to the complete list of features from Genera which
could *NOT* be ported to stock hardware.
One flip answer to this question is "Genera", because the same talented
people, working on the best available software development platforms
(besides the Lispm) and putting in the same amount of effort, would have
produced something that falls far short of what Genera is. Environments
like Unix, VMS, etc. all have many times the man-years of effort invested in
them, with (in my opinion) a fraction of the benefits. Most of the
improvements generated by the hardware features are quite low-level, and
skew the tradeoffs in software design made throughout building Genera. For
example, you could guess that your use of object-oriented paradigms might be
different if the cost of sending a message to an object was 100 times more
expensive than calling a function, instead of just 1.5 to 2 times as
expensive.
Another flip answer is "none", because, after all, all machines are Turing
equivalent and can perform any computation, given enough time.
However, performance is an integral part of the equation; the hardware
features in the lisp machine weren't put in gratuitously, they were put in
because our philosophy of how software should be developed and used mandated
that certain operations which were too slow on stock hardware be improved.
So while most of the features could be implemented on stock hardware, their
performance would result in a system much less usable than Genera, perhaps
enough so to be completely worthless.
The full, non-flip answer to this question is hard to express because it
requires explaining the lispm philosophy of software environments (which
prompted the hardware design) and it takes a fair amount of background
context and experience with complex software development to appreciate some
of the issues. This philosophy is a lot of little tradeoffs, not just one
blinding flash of insight, so it takes a lot of verbiage to get the points
across. I'll take a stab at it at the end of this message (warning, it's
long), not as an official Symbolics representative but as a fan of the most
productive programming environment I've ever encountered.
I'll list here some of the hardware features that have a significant impact
on Genera software performance (the same hardware feature is often
used in distinct ways, so there are multiple entries); I've probably
left out some.
o run-time type checking in parallel with operation (tag hardware operating
in parallel)
If this isn't implemented in hardware, many operations in
Common Lisp become several times slower, because the time required
to do the type-check is the same order of magnitude as that required
to do, say, an integer add. Part of the support for this is having
a wider than normal datapath that can do several things at once,
and a wider than normal microcode word to control this datapath.
Doing type checking without hardware would skew the relative performance
tradeoff for doing generic arithmetic (i.e. having one MULTIPLY instruction
for all combinations of integers, infinite-precision integers, small floats, large floats,
ratios and complex numbers), probably far enough that you'd hardly ever
use those numeric types.
Type checking also acts as a way to trap certain classes of errors early,
e.g. trying to do arithmetic on character strings. On stock hardware such
an error might never be detected, and could simply feed wrong results into
a computation. This feature strikes me as being entirely analogous
to parity on memories; it lets you know at the earliest moment that something
has gone wrong.
This early-catching of errors is why I think the traditional approach to
slow type-checking (providing a compilation mode that turns it off) is
a mistake; it is precisely when your program is supposedly in production
that you most need to know that it has encountered such a bug. Even if you
handle the error with a trap handler and continue running, at least you have
the ability to know that it happened, rather than just blithely ignoring it.
To me it's just as foolish as running parity on your memories only
while developing software, then turning it off when you deploy.
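To tie those points together with something concrete, here is a small sketch
in portable Common Lisp (nothing Genera-specific, and the relative costs are
the argument above, not measurements): the same operators run over every
numeric type, a type error traps instead of quietly corrupting a result, and
the last definition shows the turn-the-checking-off compilation mode argued
against above.

  ;; One generic operator covers all the numeric types listed earlier.
  (* 3 4)                        ; => 12          fixnums
  (* 12345678901 98765432109)    ; => exact bignum result
  (* 1/3 6)                      ; => 2           ratio arithmetic
  (* 2.5f0 4.0d0)                ; => 10.0d0      float contagion
  (* #c(0 1) #c(0 1))            ; => -1          complex numbers

  ;; The early trap: this signals an error rather than computing nonsense.
  (+ "fred" 1)

  ;; The stock-hardware escape hatch: compile with checking off for speed,
  ;; giving up exactly the protection you most want in production.
  (defun dot (a b)
    (declare (optimize (speed 3) (safety 0))
             (type (simple-array single-float (*)) a b))
    (let ((sum 0.0f0))
      (dotimes (i (length a) sum)
        (incf sum (* (aref a i) (aref b i))))))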
o invisible pointers (tag hardware)
This allows growing an array 'in place', an efficient copying garbage collector,
and several types of sophisticated sharing of data, all of which are impossible
to do with acceptable performance on stock hardware.
o CDR coding (tag hardware)
Allows large lists to be stored as one pointer per cell instead of two.
Not always necessary, but practically indispensable when dealing with
certain applications.
o More 'Intelligent' prefetcher (tag hardware)
The prefetcher can know when it no longer needs to fetch instructions because
they are tagged. (Not implemented in current hardware but certainly thought
about are intelligent data caches which know how much to load by looking at
the tags.)
o Efficient use of co-processors (tag hardware)
The floating point co-processor uses the tags to decide when it should perform
an operation in parallel with the fixed-point hardware, improving floating
point performance. The tag hardware is also used to route the right
result to the destination, either the one from the integer unit or the
floating point unit.
o EGC hardware support (GC hardware)
Part of supporting an efficient GC is invisible pointers (already mentioned),
but another part is hardware that monitors writes of pointers to ephemeral
objects (freshly consed and assumed to have a fairly short lifetime) into
non-ephemeral objects and sets a bit for each page into which such pointers
are stored. This makes it very efficient to find all such references
(otherwise you'd have to scan through all of virtual memory, which is huge
on lispms and would take a LONG time).
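To give a feel for what that hardware is doing on every store, here is a toy
software model in portable Common Lisp. CELL, WRITE-SLOT and *REMEMBERED-PAGES*
are names invented for this sketch, not Genera internals; the point is that the
generation comparison and the per-page bookkeeping happen on every pointer
write, which is exactly the kind of work you want the hardware to absorb.

  (defstruct cell
    page          ; virtual-memory page this object lives on
    generation    ; :ephemeral for freshly consed objects, :old otherwise
    value)

  (defvar *remembered-pages* (make-hash-table)
    "Pages of old objects that may hold pointers to ephemeral objects.
  The ephemeral GC scans only these pages, not all of virtual memory.")

  (defun write-slot (target new-value)
    "Store NEW-VALUE into TARGET, noting TARGET's page when this creates
  an old-to-ephemeral pointer."
    (when (and (cell-p new-value)
               (eq (cell-generation new-value) :ephemeral)
               (eq (cell-generation target) :old))
      (setf (gethash (cell-page target) *remembered-pages*) t))
    (setf (cell-value target) new-value))

  ;; An old object picking up a pointer to a fresh one gets its page recorded:
  (let ((old   (make-cell :page 731  :generation :old))
        (fresh (make-cell :page 9000 :generation :ephemeral)))
    (write-slot old fresh)
    (gethash 731 *remembered-pages*))    ; => T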
o Message dispatch hardware support
Genera relies very heavily on object-oriented programming; this necessarily
has more overhead than a function call because you have to match up the
code to run with the object type at runtime (method lookup) and also have to
match up the displacements of the instance variables at runtime. This
is essentially a hash lookup, which has hardware help to make it as
fast as possible and thereby minimize the differential cost of message
sending as opposed to function calling.
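For readers who haven't used a message-passing system, here is a toy
dispatcher in portable Common Lisp that shows where the per-send overhead
comes from. DEFINE-TOY-METHOD and SEND are inventions for this illustration
(Genera's flavor dispatch is not this code); the lookup inside SEND is the
step the hardware accelerates.

  (defvar *method-tables* (make-hash-table)
    "Maps a type name to an alist of (message . handler-function).")

  (defun define-toy-method (type message function)
    (push (cons message function) (gethash type *method-tables*)))

  (defun send (object message &rest args)
    "Dispatch MESSAGE on OBJECT's type, then call the handler.
  The lookup below is the extra work every send pays relative to a
  plain function call."
    (let* ((type    (type-of object))
           (handler (cdr (assoc message (gethash type *method-tables*)))))
      (if handler
          (apply handler object args)
          (error "~S does not handle the message ~S" object message))))

  (defstruct point x y)
  (define-toy-method 'point :magnitude
    (lambda (p) (sqrt (+ (* (point-x p) (point-x p))
                         (* (point-y p) (point-y p))))))
  (send (make-point :x 3 :y 4) :magnitude)    ; => 5.0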
o Microcode support (if you consider that hardware)
There are some features that are implemented in microcode, but would be
missing on stock hardware. The lispm does array bounds checking on
every array reference; if you do that in macrocode on stock hardware
you can easily slow down array references by a factor of 4 or more.
Microcode on the lispm cuts this down significantly, though it can't
eliminate it entirely.
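Here, roughly, is what every checked array reference has to look like when
the check runs as ordinary compiled code; CHECKED-AREF below is only a sketch
(Common Lisp's AREF already behaves this way at high SAFETY settings), but it
makes it plausible that tight array loops slow down by a large factor when
nothing absorbs this overhead.

  (defun checked-aref (vector index)
    ;; Everything before the final AREF is per-reference overhead that the
    ;; Lispm hides in microcode and tag checks.
    (unless (typep index 'fixnum)
      (error "Array index ~S is not a fixnum" index))
    (unless (< -1 index (length vector))
      (error "Index ~S is out of bounds for ~S" index vector))
    (aref vector index))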
Assuming all of the rest
could be, you can then leave it up to your customers to decide whether
or not the additional features are worth the additional hassle of
nonstandard hardware. (And if you played your cards right with the
likes of Sun, Intel, and Motorola you might actually be able to convince
them to add some of the requisite hardware to their next generation
processor offerings.)
We contacted Motorola and Intel early on in Symbolics history with precisely
that question, and a couple of times since then. In the past, they wouldn't
even consider it. Perhaps times have changed enough to try again, but I
wouldn't rate the prospect of success too likely.
Jeff
Okay, so here goes the background philosophy explanation, at least
my personal interpretation:
Firstly, let me say that if you are writing simple software, almost any
tools and almost any approach to writing and debugging that software will
work. In the old days people coded in machine language (1s and 0s),
non-symbolic assemblers, etc.; however, the programs so generated had to be
extremely simple because most of the mental work needed to write such
programs involved bookkeeping about the computer system rather than thought
and analysis of the problem being solved. Moreover, computer users
mistrusted the output and sanity-checked everything that was computed
(the old machines failed all the time), so not a lot of error checking
was needed.
Higher level languages and more modern operating systems helped a lot,
because they provided a higher level of abstraction and hence more powerful
tools; they also provided many common operations ready-made in libraries
(like math and I/O). However, there were still a lot of details of syntax
and mechanics of running the system that intruded into solving problems and,
more importantly, there was a lot that was hidden away by the abstraction
level so that if something went wrong, the cost of discovering what it was
and fixing it became much greater in some circumstances. In particular, the
costs (and, as computer use became more widespread, the risks) became greater
when a bug simply produced wrong output rather than a crash.
It is hard to even know that this happened; and even when you find out,
it is usually extraordinarily difficult and expensive to track down the problem.
In certain applications, the consequences of these bugs can be expensive
or even life-threatening; we've all heard horror stories of programming
problems aboard the space shuttle, in cars, etc.
More complex pieces of software are generally, like more complex pieces of
hardware, made up of many parts, often by different authors and written at
different times, that are combined in powerful and novel ways. The more
complex the software, the less likely it is to be bug-free; the effect of
building layers upon layers of software on top of each other, as a lot of AI
applications do, is that you fairly frequently run into situations
unanticipated by the developers and therefore likely to be in error.
With traditional software environments, if these errors were in what was
considered 'operating system' or 'library' or 'compiler' functionality, or
even third-party applications (like spreadsheets, graphics packages, design
tools) whose internals were inaccessible, you were generally up the creek
without a paddle, and either had to spend a lot of time on the phone trying
to convince the vendor(s) that the problem(s) were serious enough for them
to fix right away, or do handstands (and usually violence to the modularity
and maintainability of your own code) to work around the problem.
The more this happens, the more software becomes just a pile of kludges,
bandaids, workarounds, special cases, etc., eventually becoming so fragile
that any attempt to change something simply adds another bug somewhere else.
Finally, software that is developed in this manner just has to be thrown away
and the functionality rewritten. In a career of 15 years working for IBM
end-users, DEC end-users and DEC itself, I saw dozens of projects go this
route, resulting in many man-years of wasted or not-very-productive effort.
[Bear with me, I'll be getting to the hardware in a little bit.]
The Symbolics environment represents an attempt to counter this trend with
tools that make the formerly inaccessible more accessible, the formerly
opaque more obvious, and the formerly insurmountable at least doable.
Instead of being forced to punt when something goes wrong in the operating
system or some other piece of code in the environment, you are at least
given the option of poking around in it and learning (pretty easily and
quickly) how it works; frequently you gain enough insight in this process to
realize you were doing something wrong; other times you can develop a
workaround which patches the offending system, rather than distorting your
code.
When you work in the Symbolics software environment, it is more like working
in the real world. When your bathtub springs a leak, you can call the plumber.
But if it is the middle of the night and you are a bit handy with tools,
you can poke around and figure out how to fix or patch it yourself; the best
choice depends upon your circumstances, inclination and skills. In other
software environments, many times the only recourse is to call the plumber.
The environment also supports tools that make it easy to invent new 'primitive'
operations to match the domain in which you are working, and to write programs
with appropriate modularity so that even sweeping changes can be localized to
one or a small number of places in the code. This allows software to evolve
over time as your understanding of it and the problem it is trying to solve
improves, to get better rather than decay with time.
How does the Lispm attain these goals? Well, first by being a totally open system.
Everything is uniformly written in Lisp: the scheduler, the disk driver, the
network system; even the C, Pascal, Ada, FORTRAN and Prolog compilers
generate Lisp. The result is that if you are using Lisp for developing
applications you probably know enough to read and understand all the rest of
the code in the system.
(It also allows Symbolics to maintain complete source-level compatibility
while tailoring each hardware offering to the instruction set best suited to
the technology. The LM-2, 36xx and Ivory all have different instruction
sets, with different formats and even different calling conventions, but run
the same software.)
Another aspect of the openness of the system is that the data structures are
all obvious: the system knows their types, what the slots are called, and
how to display the objects in those slots. There are multiple tools which
allow you to browse throughout your virtual memory and the code; these tools make
it easy to collect information about programs, find existing code that's
similar to what you want to do, other callers of routines whose contract you
want to better understand, etc. These tools could be written to operate in
a stock hardware environment, but having a tagged architecture makes them a
lot easier to write, cleaner and therefore easier to upgrade and maintain.
(I told you I'd get to the hardware.)
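Portable Common Lisp offers a pale shadow of this self-description through
DESCRIBE and INSPECT; the sketch below is plain CL rather than the Genera
inspector, but it shows the kind of information those system-wide browsing
tools are built on.

  (defstruct employee
    name
    salary)

  (defparameter *e* (make-employee :name "Lin" :salary 52000))

  (type-of *e*)     ; => EMPLOYEE -- the system knows the object's type
  (describe *e*)    ; prints the NAME and SALARY slots and their values
                    ; (exact output format is implementation-dependent)
  ;; (inspect *e*) starts an interactive structure browser in most
  ;; implementations.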
Another goal of the Symbolics environment is to constrain the user
minimally. Dynamic allocation of objects and automatic reclamation via
garbage collection fall under this category; you can still do manual
allocation/deallocation in those circumstances for which it makes sense, but
you aren't forced to, especially when you are prototyping or exploring
various approaches. But garbage collecting many hundreds of megabytes of
virtual memory efficiently is a challenging problem; the copying garbage
collector relies on being able to use invisible pointers to allow
not-yet-encountered references to old objects to continue to work, as well
as hardware help for remembering where certain types of pointers are stored,
so it doesn't have to scan all of virtual memory to find them. Again, it is
possible to implement this type of garbage collector on standard machines,
but the performance will most likely be poor.
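Here is a toy model of the forwarding part in portable Common Lisp (an
illustration of the idea, not Genera's collector): once an object has been
copied, the old copy carries a forwarding pointer, and anything that later
reaches the old copy is redirected. On the Lispm that redirect is performed
invisibly by the tag hardware on every memory reference, which is precisely
the part that is expensive to fake in software.

  (defstruct obj
    forward    ; NIL, or this object's new copy in to-space
    data)

  (defparameter *to-space*
    (make-array 0 :adjustable t :fill-pointer 0))

  (defun gc-copy (object)
    "Copy OBJECT into *TO-SPACE* exactly once, leaving a forwarding pointer."
    (or (obj-forward object)
        (let ((new (make-obj :data (obj-data object))))
          (vector-push-extend new *to-space*)
          (setf (obj-forward object) new)
          new)))

  (defun resolve (object)
    "Follow the forwarding pointer if one exists; the Lispm does this in
  hardware whenever the old address is touched."
    (or (obj-forward object) object))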
Other features that fall into the general category of minimal constraint:
o Ability to grow already-allocated arrays reasonably efficiently; this uses
invisible pointers (i.e. the tag hardware). (Actually, this is
a more general ability; you can structure-forward any structure to
another, so the references to the first automatically become references
to the second. The circumstances that require this are rare, but in
those circumstances, invisible pointers are the only efficient way
to get the same functionality.)
o Ability to make arrays that will trap if you attempt to store the wrong
data type in them (again the tag hardware); thus your software won't
have to check for illegal data types itself.
o Ability to indirect arrays to other arrays (so you can view the same
data in two different formats), but still retain bounds checking.
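The first and last of these have rough analogues in portable Common Lisp,
sketched below with the standard adjustable-array machinery and displaced
arrays; what the hardware buys Genera is making this sort of thing cheap and
pervasive, not the mere existence of the operations.

  ;; Growing an array "in place": the array object stays EQ to itself,
  ;; so everyone holding *BUF* keeps seeing the new contents.
  (defparameter *buf* (make-array 4 :adjustable t :fill-pointer 0))
  (dotimes (i 10)
    (vector-push-extend i *buf*))     ; quietly grows past the initial 4

  ;; Viewing the same storage in two formats with a displaced array;
  ;; bounds checking still applies to the 3x4 view.
  (defparameter *flat* (make-array 12 :element-type 'single-float
                                      :initial-element 0.0f0))
  (defparameter *grid* (make-array '(3 4) :element-type 'single-float
                                          :displaced-to *flat*))
  (setf (aref *grid* 1 2) 7.0f0)
  (aref *flat* 6)                     ; => 7.0 -- same storage, row-major order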
The last goal of the lispm environment is to provide powerful tools; some of the
well-known ones are incremental compilation, symbolic debugging, on-line documentation,
graphics, and user-interface tools.
However, in my opinion the most powerful tools are those that allow you to
browse through and learn about the environment (and all the rest of the
tools); they allow you to leverage the work of others, instead of
constantly having to re-invent the wheel. This is probably the single most
important benefit of the Symbolics environment.
Whereas in most operating systems you have a limited number of entry points
and a limited number of ways to share data, the lispm provides potential
access to all the code and data used by the operating system. One
example I like to use: many operating systems internally use a b* tree
to keep paging information. However, this code is inaccessible to the
user, either for copying or for calling directly because it is internal
to the operating system. If you want to manage a b* tree in your application,
you have to roll your own.
The cost of poking into, for example, the scheduler or the network code (if
that is the current hurdle to getting your application out) is of the same
order of magnitude as the cost of poking around in your colleague's code.
In other environments, if the source code is available at all it is likely
to be in another language, or at least written with different conventions
and assumptions than user code, and therefore much harder to understand.
So much harder to understand that this probably isn't even an option in
those environments, so it seems unnatural to hear someone like me say that
it is done frequently in the Symbolics environment.
One reason these tools are powerful is that we built them modularly
according to our own philosophies. Because of the software philosophy,
which rests on the hardware's being able to do certain things efficiently,
we give more functionality per man-hour invested, and give our customers the
ability to produce more functionality per man-hour they invest in their own
software.