
C vs. Lisp



        Date: Fri, 18 Jan 91 19:34:04 -0800
        From: magerman@Neon.Stanford.EDU

        But C bashing is going too far.  In many ways, C is a much more powerful
        language than common-lisp.  I can think of many non-trivial applications
        that require number-crunching, text searching and sorting, and/or massive I/O
        that can't afford to wait for garbage collection or consing of any sort:

     	   Computer vision (although I am not an expert)

     Hmm, we've been doing vision work in *Lisp for years.
On a CM, sure.  Here I was comparing C to common-lisp, not to *Lisp.  But
you probably win this one, since I can't argue with someone in the field.

     	   Statistical Models for Natural Language
     I'm not sure what this is (as opposed to other forms of Natural Language
     research).
Here I was only concerned with I/O issues, so the specific example is not
important.
     	   Regular expression search (a la grep)

     Lisp is a natural for this!  Translate the regular expression into a Lisp
     expression, compile it, and then call it.  Or implement it as an FSM, where
     the states are implemented as alists mapping events to a function to call
     and a new state.
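(For concreteness: that FSM translates directly into a table-driven loop in C
as well.  A sketch of my own, not from the original exchange; the pattern
"a[0-9]*b" and its character classes are invented just to show the mechanism.)

/* Table-driven FSM: instead of alists mapping events to a function and a
 * new state, index a table by (state, input class). */
#include <stdio.h>

enum state { START, SAW_A, ACCEPT, REJECT, NSTATES };
enum cls   { C_A, C_DIGIT, C_B, C_OTHER, NCLS };

static enum cls classify(int ch) {
    if (ch == 'a') return C_A;
    if (ch >= '0' && ch <= '9') return C_DIGIT;
    if (ch == 'b') return C_B;
    return C_OTHER;
}

/* transition[state][class] -> next state */
static const enum state transition[NSTATES][NCLS] = {
    /* START  */ { SAW_A,  REJECT, REJECT, REJECT },
    /* SAW_A  */ { REJECT, SAW_A,  ACCEPT, REJECT },
    /* ACCEPT */ { REJECT, REJECT, REJECT, REJECT },
    /* REJECT */ { REJECT, REJECT, REJECT, REJECT },
};

static int matches(const char *s) {     /* does s match "a[0-9]*b"? */
    enum state st = START;
    for (; *s; s++)
        st = transition[st][classify((unsigned char)*s)];
    return st == ACCEPT;
}

int main(void) {
    printf("%d %d %d\n", matches("a123b"), matches("ab"), matches("a12x"));
    return 0;   /* prints: 1 1 0 */
}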

I'm sorry, but a modification of grep can search a 3 megabyte file for a
moderately complicated regular expression, ' [a-z]*/nn [a-z]*/[^ ]* ',  in
less than 3 seconds.  My Lispm might take that long to open the file on
an LMFS.
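And the whole grep-style loop is a page of documented, portable C.  A sketch,
using the POSIX regcomp/regexec interface; the input file name is made up,
and the pattern is the one above.

#include <stdio.h>
#include <regex.h>

int main(void) {
    const char *pattern = " [a-z]*/nn [a-z]*/[^ ]* ";
    regex_t re;
    char line[4096];
    FILE *fp = fopen("corpus.txt", "r");          /* hypothetical input */

    if (!fp || regcomp(&re, pattern, REG_EXTENDED) != 0)
        return 1;
    while (fgets(line, sizeof line, fp))
        if (regexec(&re, line, 0, NULL, 0) == 0)  /* 0 means a match */
            fputs(line, stdout);                  /* print it, a la grep */
    regfree(&re);
    fclose(fp);
    return 0;
}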

     	   Any application associated with loading large data sets into a
     		   connection machine without a data vault

     Huh?
Here I'm talking about the I/O speed in bringing large data sets into the
Lispm to be used by the CM.  Over a year ago, I had this complaint, and I was
sent a few low-level read/write hacks that partially solved the problem.  But
I claim that if a function isn't documented, it is an internal function that
cannot be relied on, for portability and compatibility reasons.  Lisp's
documented read functions are orders of magnitude slower than C's documented
read functions.  That comparison was quantified in detail over a year ago, and
the situation has not substantially improved since.
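To show what I mean by C's documented read functions: the fast path is just
open(2) and read(2) moving data in large, reusable blocks.  A sketch; the
file name is hypothetical.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[64 * 1024];                  /* one big reusable buffer */
    long total = 0;
    ssize_t n;
    int fd = open("bigdata", O_RDONLY);   /* hypothetical data set */

    if (fd < 0) return 1;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        total += n;                       /* parsing would happen here */
    close(fd);
    printf("%ld bytes read\n", total);
    return 0;
}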

        There are many situations where you want to have control over the way memory
        is allocated and where in memory things get written.  In the case of large
        data sets, you frequently want to allocate memory in such a way that data
        objects that will be compared or used together will be close together (i.e.
        on the same page) in memory.  Otherwise your OS goes crazy.  This type of
        situation (IMHO) *demands* C programming.

     How do you guarantee that two objects will be on the same page in C?  The
     best you can do is guarantee that they'll be close together, by allocating
     one big array and then carving it up yourself.

Exactly.  In C, when you read large data sets into memory, you read them into a
simple character array, parse them, and write them to a large, dynamically
allocated array.  Then you simply discard the character array.  In Lisp, you
cons up space *each time* you read in an object, then cons up more space while
parsing the object, and only then might you write it to a large Lisp array.  If
you read in 1 million objects, C uses O(1) temporary storage, which is easily
discarded; Lisp uses O(1 million) cons cells, all of which must be GC'ed.
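Concretely, the C pattern is this (a sketch; the record format and file name
are invented for illustration):

#include <stdio.h>
#include <stdlib.h>

struct record { long id; double value; };

int main(void) {
    enum { MAX_RECORDS = 1000000 };
    struct record *recs = malloc(MAX_RECORDS * sizeof *recs);
    char line[256];                        /* the only temporary storage */
    size_t n = 0;
    FILE *fp = fopen("objects.dat", "r");  /* hypothetical "id value" lines */

    if (!recs || !fp) return 1;
    while (n < MAX_RECORDS && fgets(line, sizeof line, fp)) {
        /* the buffer is reused on every iteration: a million records
           cost zero per-record garbage */
        if (sscanf(line, "%ld %lf", &recs[n].id, &recs[n].value) == 2)
            n++;
    }
    fclose(fp);
    printf("loaded %zu records\n", n);
    free(recs);
    return 0;
}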

     If you don't mind writing less clear code, you can do what you want in
     Lisp.  All the Common Lisps I'm familiar with implement the elements of
     unsigned-byte arrays (with a byte size up to 32) as immediate data, so no
     pointer chasing is done.  So, you can make a big array, and then carve it
     up into small arrays (perhaps using indirect arrays, if you don't mind
     allocating a few extra objects) in Common Lisp.  Of course, the elements of
     this array can't be arbitrary objects, they can only be positive integers,
     but you can write macros and functions to build some more complex data
     structures out of them.

If I have to do this, I'd rather use C and UNIX, purely based on the response
time of the OS and the fact that C documentation is geared toward doing this,
while Lisp documentation on it is far scarcer.
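And in C, "doing this" is a few lines of pointer arithmetic.  A sketch with
made-up types and sizes: one malloc, two typed slices, one free.

/* One contiguous block carved into sub-arrays by pointer arithmetic, so
 * related data lands on the same pages.  Keep the most strictly aligned
 * slice first. */
#include <stdlib.h>

struct node { int key; int left; int right; };   /* indices, not pointers */

int main(void) {
    size_t nkeys = 10000;
    char *block = malloc(nkeys * sizeof(struct node) + nkeys * sizeof(int));
    if (!block) return 1;

    struct node *nodes  = (struct node *)block;                  /* slice 1 */
    int *counts = (int *)(block + nkeys * sizeof(struct node));  /* slice 2 */

    nodes[0].key = 42;   /* both live in one allocation, adjacent in memory */
    counts[0] = 1;

    free(block);         /* one free discards everything */
    return 0;
}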

This is crucial!!!  C's (void *) lets you tell your program explicitly that
you have a pointer to an unknown object and let data-driven code process it
for you.  It's not as sophisticated as flavors, but it's lightning fast, which
flavors certainly aren't.
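A minimal sketch of the style I mean (the object types here are invented for
illustration): a type tag plus a (void *), dispatched through a table of
function pointers.

#include <stdio.h>

enum type { T_INT, T_STRING, NTYPES };

struct object { enum type tag; void *data; };

static void print_int(void *p)    { printf("%d\n", *(int *)p); }
static void print_string(void *p) { printf("%s\n", (char *)p); }

/* one indirect jump per object -- no method combination, no flavors */
static void (*print_table[NTYPES])(void *) = { print_int, print_string };

int main(void) {
    int n = 7;
    struct object objs[] = { { T_INT, &n }, { T_STRING, "hello" } };
    for (size_t i = 0; i < sizeof objs / sizeof objs[0]; i++)
        print_table[objs[i].tag](objs[i].data);
    return 0;
}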

I don't think we will get very far with this argument, because of the strength
of our opinions, but I'd be glad to continue discussing it anyway.  Who knows?
Maybe one of us will have a revelation.  I worked on LispMs for two years
before ever programming on a UNIX machine.  Once I learned about how the UNIX
OS worked, I could never go back to the Lispm.  If you take advantage of the
C system calls and the UNIX utilities, you can do magic that you could never
dream of on a Lispm.  I'm convinced of this, if only because you can *run*
applications on a UNIX box in the time it takes to *open* a file on an LMFS.
That's just insane.

-- David Magerman
Stanford University
CS Dept.

P.S. - In response to other messages I got about the wonders of Genera-type
things like resources and other internals, my answer is: portability.  I
personally don't know anyone writing new code that isn't common-lisp
compatible, or at least (Lucid + some windowing package) compatible.  Before
everyone sends me mail saying that they are writing such code: I know you
exist.  I just think you are making a mistake.  And you are probably in the
minority.