Sorry for the delay.  My ISP's service has been flaky of late.  It was down much of Tuesday.  I'm still getting caught up.

On 4/10/2018 2:30 PM, wrote:

    How long do you want to wait for "truth" calculations?  Done using
    either rationals (software bigint / bigint fractions), or bigfloats
    (software adjustable width FP) with results converted to rational for
    comparison, the truth calculation is going to be many orders of
    magnitude slower than hardware FP math.
    Do you have enough memory?  Rationals can expand to fill all available memory.

I can wait a while, but it can't be too slow, of course. If we're talking hours for a single computation that involves just a handful of adds or multiplies, then this is untenable for me. But my experience shows that Racket is plenty fast for this simple case. Are there cases where a series of multiplies and adds takes a surprising amount of extra time?

No offense, but I can't tell whether that's intended as humor ... a "handful" of operations rarely is any problem.  A few thousand handfuls is another thing altogether.  I don't know your definition of "handful", and you made a big deal about the speed of single floats ...

Bigfloat operations take time proportional to their width, but since the values involved all are the same width, they are at least predictable.  Bigints, though, can cause nasty surprises: operations on them take time proportional to their size (number of digits).  Exact rationals are a ratio of two integers, and either or both of them can be bigints.
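A quick sketch of that growth, using Python's fractions.Fraction as a stand-in for Racket's exact rationals (the exact numerator/denominator representation and its growth behaviour are analogous):

```python
from fractions import Fraction

# Fraction, like Racket's exact rationals, keeps an exact
# numerator/denominator pair reduced to lowest terms.
x = Fraction(0.1)
print(x)   # 3602879701896397/36028797018963968 -- the exact value of the double 0.1

# Each multiply roughly doubles the bit-size of both parts, so the
# cost of later operations grows as the chain gets longer.
y = x
for _ in range(5):
    y = y * y
    print(y.numerator.bit_length(), y.denominator.bit_length())
```

Run it and watch the bit counts roughly double on every multiply.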

Ripped from the docs [sec 3.2]:
(inexact->exact 0.1) -> 3602879701896397/36028797018963968

That already is the size of a double precision complex.
Note that just squaring the value above gets you:

12980742146337070512478121581609/1298074214633706907132624082305024

which roughly is the size of a double precision quaternion.
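Those size comparisons can be checked directly (again using Python's Fraction as a stand-in for Racket's exact rationals): a double is 64 bits, a double-precision complex 128, a quaternion 256.

```python
from fractions import Fraction

x = Fraction(0.1)   # exact value of the double 0.1
print(x.numerator.bit_length() + x.denominator.bit_length())    # 108 bits
# -> roughly the 128 bits of a double-precision complex

sq = x * x          # one squaring roughly doubles the total size
print(sq.numerator.bit_length() + sq.denominator.bit_length())  # 215 bits
# -> roughly the 256 bits of a double-precision quaternion
```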

A "handful of adds or multiplies" isn't likely to hurt you.  But string together a lengthy series of them and you could be in (at least, time) trouble.

I haven't studied the rational implementation in detail:  I know it reduces results to lowest terms for binding or output, but I don't know when - or if - there is any reduction of intermediates within an operation chain, e.g., in a trig function.
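Whether intermediates get reduced matters because reduction to lowest terms is what keeps results small when terms share factors; when they share none, exact results grow no matter what. A sketch of both cases, using Python's Fraction (which, for what it's worth, normalizes after every operation):

```python
from fractions import Fraction

# Shared factors: reduction keeps the result small.  Summing
# 1/2 + 1/4 + ... + 1/2^59 leaves a single power-of-two denominator.
tot_pow2 = Fraction(0)
for k in range(1, 60):
    tot_pow2 += Fraction(1, 2 ** k)
print(tot_pow2.denominator)   # 2**59 -- still machine-word sized

# Pairwise-coprime denominators: nothing cancels, so the exact sum's
# denominator is the full product even though every term is tiny.
tot_primes = Fraction(0)
for p in (3, 5, 7, 11, 13, 17, 19, 23):
    tot_primes += Fraction(1, p)
print(tot_primes.denominator)   # 111546405 == 3*5*7*11*13*17*19*23
```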

As for memory space, I have 32 GB of memory to spare. Should I be concerned about this when my computations typically contain only a few multiplies or adds?

I mentioned the problem because we [i.e., the group] have previously seen attempts to do relatively complicated things with bigints in relatively small memory.  You're quite unlikely to run out of memory unless you do something really stupid.

But you still may find that exact math takes more time than you really want to wait.

I understand both speed and latency:  in a former existence I did hard real time image processing ... there was a period in my life when I fretted about the time wasted (16ms) waiting on the 1st of a series of camera images, and how much useful processing could have been done in that time.  Not a lot you can do until you get that 1st image [and sometimes not even then].  8-(


You received this message because you are subscribed to the Google Groups "Racket 
Users" group.