
Floating point types in MCL...



   Recently I built a toy neural net program using MCL.  To my dismay,
I discovered that all floats are of type double-float in MCL, no
matter how you declare them.  So every time I run my net, doing many
floating-point additions and multiplications, I *CONS LIKE THERE'S NO
TOMORROW*!!!  For example:

  (time (* 1.0 1.0)) reports that about 16 bytes are allocated.
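
   It isn't just TIME's accounting, either.  If every float result is
a fresh heap object, then two identical multiplies should never be EQ.
A quick sanity check (my own sketch, nothing MCL-specific):

    ;; With boxed floats, each * allocates a fresh object,
    ;; so the two results are distinct:
    (let ((x 2.0d0) (y 3.0d0))
      (eq (* x y) (* x y)))    ; => NIL if float results are boxed

    ;; Fixnums, by contrast, are immediate in most Lisps:
    (eq (* 2 2) (* 2 2))       ; => typically T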

  This is unacceptable.  Is this some sort of plot to further the
cause of symbolic AI over connectionism by making floating-point
arithmetic unbelievably inefficient?  Am I right in assuming that all
floats are represented as some sort of consed-up structure????  If so,
why is this?  When can we expect a fix????
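
   For reference, here is the kind of fully declared inner loop I
have in mind (a sketch; DOT is just a made-up name).  On a compiler
that uses type declarations to keep intermediate floats unboxed, this
should run cons-free; but if MCL boxes every result regardless of
declarations, it will allocate on every iteration:

    ;; Dot product over specialized double-float vectors,
    ;; all types declared, safety off, speed up:
    (defun dot (a b)
      (declare (type (simple-array double-float (*)) a b)
               (optimize (speed 3) (safety 0)))
      (let ((sum 0.0d0))
        (declare (type double-float sum))
        (dotimes (i (length a) sum)
          (incf sum (* (aref a i) (aref b i))))))

    ;; Watch the bytes-allocated figure, e.g.:
    ;; (time (dot (make-array 1000 :element-type 'double-float
    ;;                             :initial-element 1.0d0)
    ;;            (make-array 1000 :element-type 'double-float
    ;;                             :initial-element 2.0d0)))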

    ...Bill


-- 
    Bill Andersen (waander@cs.umd.edu) |
    University of Maryland             | clever .signature saying
    Department of Computer Science     | under construction
    College Park, Maryland  20742      |