Yes, I understand. But not without complicating the calculation to keep track of the number of times I have done multiplication in the current result.

Formatting with the current scale does not work:

: (format (* 9.9 9.789 9.56789) 20)
-> "9272347445790000000000000000000000000000000.00000000000000000000"

So in the example below I would have to track the number of arguments to the * function, then multiply that count by the current *Scl value. In the example below that gives 3 * 20 = 60 as the scale for format. Is that the best way to handle it? What if I have a few dozen calculations, each with multiple arguments? Do I then do
(format num 50000)   ?
And then significantly truncate the result to get the *Scl number of significant digits I wanted to track?

(format (* 9.9 9.789 9.56789) 60)
-> "927.234744579000000000000000000000000000000000000000000000000000"
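After writing this I noticed the `*/` function in the docs — if I read it right, it multiplies its arguments and divides by the last one, with rounding. Dividing by 1.0 (which reads as the scale unit under *Scl) after each multiplication would keep every intermediate result at *Scl, so format could always use *Scl directly, with no counting of factors. A sketch, assuming *Scl is 20:

(format (*/ (*/ 9.9 9.789 1.0) 9.56789 1.0) *Scl)
-> "927.23474457900000000000"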

(round (* 9.9 9.789 9.56789))
-> "9,272,347,445,790,000,000,000,000,000,000,000,000,000,000.000"

The decimal is not in the correct place.
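Though if the product is first rescaled with `*/` (multiply, then divide by the scale unit 1.0, so each step stays at *Scl), round seems to put the decimal where I would expect. A sketch, again assuming *Scl is 20:

(round (*/ (*/ 9.9 9.789 1.0) 9.56789 1.0) 3)
-> "927.235"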

I didn't know if PicoLisp provided a solution without complicating every function which uses multiplication and division.

Thanks for the reply.


On 07/12/2017 01:05 PM, Aatos Heikkinen wrote:
I've found the 'format' and 'round' functions quite helpful, see

On 12 Jul 2017 18:52, "Jimmie Houchin" wrote:


    I am trying to understand something about PicoLisp and Fixpoint numbers.

    I am writing an app and would consider PicoLisp should I get my
    head around Lisp. But I do not understand how to use numbers.

    I use a lot of floating point numbers and lots of calculations.
    From simple experiments, the numbers and math appear to be
    accurate and correct. However, I have no understanding of how to
    present a final result formatted correctly.


    (setq *Scl 5) ;; or could be 10 ...

    (* 0.00009 0.0009)

    -> 81

    From another language: 0.0000000081


    (setq *Scl 20)

    (* 9.9 9.789 9.56789)

    -> 927234744579000000000000000000000000000000000000000000000000000

    From another language, 20 decimal points printed after converting
    to floating point:


    The math looks fine as far as these simple examples go. But if I
    do several to dozens of different calculations with floating point
    numbers with unknown values until streamed to the app from some
    source. How do I know where the decimal point really belongs in
    order to format correctly for human use? Is it possible?

    I personally do not have a problem with fixpoint for internal use,
    as that is simply an implementation issue. However I do need to
    convert back to the best floating point representation for display
    or storage.

    Any help in understanding would be greatly appreciated.



