C vs. Lisp
Date: Sat, 19 Jan 91 00:09:25 -0800
Hmm, we've been doing vision work in *Lisp for years.
....Here I was comparing C to common-lisp, not to *Lisp....
It matters not. I've done serious vision hacking in a conventional
language (Pascal), and boy was it a royal pain. The main difficulty for
my work was the typically narrow expressive range of a language such as
Pascal or C. Lisp, especially in an integrated environment like Genera,
lets you work at any and all levels of abstraction, as appropriate for
whatever particular thing you are doing. This is especially important
when you try to do something practical in a domain such as vision which
requires number crunching and other "low-level" processing as well as
higher level reasoning.
I will grant you that for purely number crunch intensive code, there is
no great advantage to Lisp versus C (though there is rarely any great
disadvantage). One example I've worked with in Lisp is a neural network
training and recognition system, in which the core code had a "C style"
to it because of how it was tuned for performance. The neat thing would
have been to write this portion in C and the remainder in Lisp, which
would have given us the best of both worlds. I think all this debating
over Lisp versus C might simply fade away if some of the vendors would
get their act together and provide a truly integrated approach to the
cooperative use of multiple languages in their software environments.
Unfortunately, this is difficult to accomplish in Unix.
Statistical Models for Natural Language
I'm not sure what this is (as opposed to other forms of Natural Language
processing)....
Here I was only concerned with I/O issues, so the specific example is not
important....
I'm sorry, but a modification of grep can search a 3 megabyte file for a
moderately complicated regular expression, ' [a-z]*/nn [a-z]*/[^ ]* ', in
less than 3 seconds. My Lispm might take that long to open the file on
a LMFS.
Here I'm talking about the I/O speed in bringing large data sets into the
Lispm to be used by the CM. Over a year ago, I had this complaint, and I was
sent a few low-level read/write hacks to partially solve this problem....
It seems throughout this discussion that you are sometimes conflating
"lisp" with "lisp machines" and with "Symbolics I/O." The fact that
Symbolics I/O is not especially zippy does not somehow mean that Lisp is
a worse language than C (after all, you'll have even more slow fun if
you run C on a Symbolics!).
In C, when you read large data sets into memory, you read them into a
simple character array, parse them, and write them to a large, dynamically
allocated array. Then you simply discard the character array.
Is there any reason not to do the same thing in Lisp?
you read in 1 million objects, in C you use O(1) temporary storage space
which is easily discarded. In lisp, you use O(1 million) cons cells, which
must be GC'ed.
Not so. As has been said, you can control the consing if you wish.
You just don't have to if you'd rather not.
.... If you take advantage of the
C system calls and the UNIX utilities, you can do magic that you could never
dream of on a Lispm.
I don't think so. Can you provide any examples?
I'm convinced of this. Only because you can *run*
applications on a UNIX box in the time it takes you to *open* a file on
a LMFS. That's just insane.
Again, you are mixing I/O performance issues into a discussion which I
thought was about languages. Also, have you ever tried an XL1200?
Date: Sat, 19 Jan 91 13:25:23 -0800
[ a lot of discussion about C vs Lisp deleted ]
This may point to one of the reasons for the perceived difference
between Lisp and C. Maybe Lisp does too much for programmers....
In C you must face every issue up front, so you make much
leaner decisions. Many experienced Lisp programmers worry about
performance issues as they write; less experienced people may not.
My suspicion is that the most experienced of Lisp programmers do *not*
worry about performance during development, unless there is a glaring
performance problem. Rather, it is better to worry about readability,
algorithmic elegance and overall code quality first, and performance
later (and only when necessary). I find that this approach leads to
the best results in terms of performance as well as maintainability.
.... Upon reflection, this seems to me to be the crucial
problem that I have with common-lisp. It does so much for you (especially
symbolics common-lisp) that you relax your awareness of the amount of work
being done for you....
Maybe it's my own fault for getting lazy in common-lisp. But I find that C
encourages disciplined, efficient programming, whereas common-lisp encourages
more stylistic but less efficient programming. Beautiful code is nice, but
the message I'm getting is that no matter what language you're in, it's gonna
get ugly if the task is hard enough.
I think not. What makes code ugly is usually the programmer's lack of
understanding of the problem being solved, and extra complexity injected
by the programmer to deal with subproblems which could be handled more
elegantly. It takes a tremendous amount of self-discipline to solve
hard problems elegantly in any language, because you have to be willing
to take the time to stop and think about issues until you understand
them well before you go off trying to resolve them into code. I'm not
saying that I am always capable of such self-discipline, nor that the
demands of a commercial environment will always allow programmers to
exercise it; it is merely an ideal toward which we should aspire.
If I'm gonna write ugly code, I might as well do it in C.
On this point we are in complete agreement.
-- David Magerman
- C vs. Lisp
- From: KMP@STONY-BROOK.SCRC.Symbolics.COM (Kent M Pitman)
- C vs. Lisp
- From: email@example.com (Ian Bruce)