Floating point types in MCL...
- To: info-mcl
- Subject: Floating point types in MCL...
- From: email@example.com (Bill Andersen)
- Date: 25 Nov 91 22:23:40 GMT
- Newsgroups: comp.lang.lisp.mcl
- Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
- Sender: firstname.lastname@example.org
Recently I built a toy neural net program using MCL. To my dismay,
I discovered that all floats in MCL are of type double-float, no
matter how you declare them. So, every time I run my net, doing many
floating point additions and multiplications, I *CONS LIKE THERE'S NO
TOMORROW*!!! For example:
(time (* 1.0 1.0)) reports about 16 bytes allocated for a single multiply.
This is unacceptable. Is this some sort of plot to further the
cause of symbolic AI over connectionism by making floating point
arithmetic unbelievably inefficient? Am I right in assuming that all
floats are represented as some sort of consed-up (heap-allocated)
structure? If so, why is this, and when can we expect a fix?
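[Followup note: the usual workaround in Lisps that box floats is to keep
your numbers in specialized double-float arrays and declare types and
optimization aggressively, giving the compiler a chance to keep
intermediates unboxed. A sketch, with the caveat that I don't know
whether MCL's compiler actually open-codes this; the function name and
loop are mine, not anything from MCL:]

```lisp
;; Sketch of a declared inner loop over specialized double-float
;; vectors.  The declarations are standard Common Lisp; whether a given
;; implementation (MCL included) avoids boxing the intermediates is
;; implementation-dependent.
(defun dot (a b)
  (declare (type (simple-array double-float (*)) a b)
           (optimize (speed 3) (safety 0)))
  (let ((sum 0.0d0))
    (declare (type double-float sum))
    (dotimes (i (length a) sum)
      (setf sum (+ sum (* (aref a i) (aref b i)))))))

;; To see what your implementation does, compare the bytes consed:
;; (time (dot (make-array 1000 :element-type 'double-float
;;                             :initial-element 1.0d0)
;;            (make-array 1000 :element-type 'double-float
;;                             :initial-element 2.0d0)))
```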
Bill Andersen (email@example.com) |
University of Maryland | clever .signature saying
Department of Computer Science | under construction
College Park, Maryland 20742 |