Re: Lisp is faster than Prolog? A personal plea
- To: mcgeer%ucbkim@Berkeley, narain@rand-unix
- Subject: Re: Lisp is faster than Prolog? A personal plea
- From: hilfingr%ucbrenoir@Berkeley (Paul Hilfinger)
- Date: Wed, 27 Mar 85 09:07:16 GMT
- Cc: prolog%ernie@Berkeley, franz-friends%ucbkim@Berkeley
- In-reply-to: Your message of 26 Mar 85 17:33:41 PST
- Original-date: Wed, 27 Mar 85 01:07:16 pst
Please, please, please stop trying to compare the performance of Lisp and
Prolog by considering micro-benchmarks! Even in languages that are
essentially "the same" (from my perspective as a semanticist/language
designer, or from the perspective of a Prolog programmer, FORTRAN, Pascal,
Modula-2, and Ada are all the same), such benchmarks are imperfect guides.
When comparing Lisp and Prolog, where the programs one might write to do a
particular problem might be radically different in strategy, anything that
compares the performance of tiny programs conveys almost no useful
information.
Please, please, please, PLEASE don't try to compare Prolog with Lisp (etc.)
using LIPS (logical inferences per second) as part of the measure! In
comparing Prolog implementations, I suppose LIPS might be of some interest,
but when comparing Lisp with Prolog, they don't help at all. The reason is
simple: if Lisp is not suited for
doing logical inferences (in the Prolog sense) quickly, then the good Lisp
programmer simply does not formulate his solution using logical inferences.
(Patient: Doctor! Doctor! It hurts when I do this. Doctor: Well, then don't
do that.) It's like saying that my APL implementation, which uses lazy
evaluation and a bit of cleverness to compute
+/ ,((iota n) o.= iota n) x A +.x B
(the trace of the matrix product of nxn matrices A and B, I think) in time
O(n^2) instead of O(n^3), is "faster" than my FORTRAN implementation, which
requires time O(n^3) to do a direct transcription of this algorithm
(actually forming the full matrix product).
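As a minimal sketch of that complexity gap (written here in Python purely
for illustration; the function names are my own and come from neither
implementation), the diagonal entries of the product can be summed directly,
without ever forming the full product:

    # Two ways to compute trace(A*B) for n x n matrices held as Python lists.
    # Illustrative only; names are not from the APL or FORTRAN programs above.

    def trace_via_full_product(A, B):
        """O(n^3): form the whole matrix product, then sum its diagonal."""
        n = len(A)
        C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        return sum(C[i][i] for i in range(n))

    def trace_direct(A, B):
        """O(n^2): entry (i, i) of A*B is just the dot product of row i of A
        with column i of B, so only n dot products are needed."""
        n = len(A)
        return sum(A[i][k] * B[k][i] for i in range(n) for k in range(n))

    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    assert trace_via_full_product(A, B) == trace_direct(A, B) == 69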
I think it wrong to say
"To do [deduction, searching, pattern matching and other AI-stuff] in
Franzlisp you must write Lisp functions and suffer the loss in speed
associated with simulating functionality in a high-level language."
because one DOESN'T use simulation if one wants speed, but instead goes
after an entirely different kind of solution. (I won't argue that this
solution is "just as easy" as the Prolog solution; there are certainly many
instances in which Prolog solutions are simpler, and I haven't the foggiest
notion what the story is for large systems.)
* * *
Finally, a question. I was struck by Sanjai Narain's comment:
"However, in C-Prolog, you can do also do deduction, searching,
pattern matching and a lot of other AI-stuff at the same speed."
I notice that the Prolog digest is full of interesting puzzles whose
solution involves search. But are these representative? I think pattern
matching is certainly a big part of any Prolog program, but do deduction and
searching really form a big part of actual Prolog applications in practice?
I recall an article by Drew McDermott called "The Prolog Phenomenon"
that appeared (I think) in SIGArt at some point, maybe July '82. He asked
why it was that Prolog had not died out, as had PLANNER, which also
purported to support searching et al. He said some things on what he liked
and disliked about Prolog, and then made the following comment (emphasis
mine):
"The Europeans went in a different direction [from the Americans
in reaction to the problems of PLANNER-like languages]. What
they liked best about logic was its variable-binding machinery.
Their attitude towards backtracking has been simply that it is a
programmer's duty to remember that his program will be executed
backward as well as forward, that his programs must correct bad
guesses as well as exploit good ones. If the backwards
execution blows up, he must debug his program, not rewrite the
interpreter [the American approach], just as with more prosaic
kinds of infinite loops. Once this burden was shifted away
from the language implementer and onto the programmer, the
logical [!] next step was to freeze the interpreter design
and make it as efficient as possible. THE RESULT IS A
PROGRAMMING LANGUAGE, NOT A PROBLEM SOLVER OR THEOREM PROVER;
it doesn't compete with NOAH, but with Lisp. And it's my
impression that it competes pretty well.
"The effect is to reverse the usual images of the American
and European computer scientists. In this case, the Americans
have pursued impractical theoretical studies, while the
Europeans have bummed the hell out of a hack."
(By "backward execution," he is referring to backtracking, I believe). To
put this another way, one doesn't use Prolog's backtracking to do AI-style
(i.e., very large) search, but rather to do very local and carefully-
controlled "search," in the sense of "search this list (tree, ....) for an
element equal to this one" or "try all permutations of this tiny set for one
that satisfies P." Likewise, one doesn't use it to do what an AI
investigator would call "deduction." One CAN convince the Prolog machinery
to do more general AI-style searching efficiently, but only at the expense
of vastly obscuring the original clear, simple, declarative form of the
program.
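For a concrete picture of the scale of "search" I mean, here is a rough
generate-and-test sketch (in Python, purely for illustration; the names and
the example predicate are mine, not from any Prolog program discussed here).
This is the kind of small, bounded search that Prolog's backtracking gets
used for in practice, as opposed to AI-scale search:

    # Generate-and-test over a tiny set: try permutations until one satisfies P.
    # Illustrative only; a rough analogue of what Prolog backtracking does
    # locally, not a claim about how any particular Prolog system works.
    from itertools import permutations

    def first_permutation_satisfying(items, p):
        """Return the first permutation of items for which p holds, else None."""
        for perm in permutations(items):
            if p(perm):
                return perm
        return None

    # Example predicate: adjacent elements must differ by more than 1.
    result = first_permutation_satisfying(
        (1, 2, 3, 4),
        lambda perm: all(abs(a - b) > 1 for a, b in zip(perm, perm[1:])))
    print(result)  # -> (2, 4, 1, 3)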
Not being a real Prolog hacker (yet), I don't really know how accurate this
view is, and would appreciate some reaction (preferably semi-quantitative).