If I had to do that, I'd do it at "reporting time". (Possibly cached,
if that turns out to take too long.)
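
For example (a quick sketch; report is just an illustrative name, using the
1000000 = 1 dollar convention mentioned below):

   report =: 0j2 ": %&1000000   NB. micro-dollars to dollars, 2 decimals
   report 1234560000
1234.56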

Thanks,

-- 
Raul


On Mon, Sep 18, 2017 at 9:08 AM, Don Guinn <[email protected]> wrote:
> I went to one cent and went to a lot of trouble to make sure I rounded the
> same way that banks did.
>
> On Mon, Sep 18, 2017 at 6:44 AM, Raul Miller <[email protected]> wrote:
>
>> That depends on what I am doing.
>>
>> In professional contexts: I often work with 1000000 = 1 dollar for
>> archive purposes (because I deal with stuff that's not worth very
>> much). But for reporting purposes I'll change that to 1 = 1 dollar.
>>
>> Thanks,
>>
>> --
>> Raul
>>
>>
>> On Mon, Sep 18, 2017 at 8:24 AM, Don Guinn <[email protected]> wrote:
>> > So, when you work with money, do you have the number 1 equal to one
>> > dollar or one cent? Ten cents is not exact in floating point if your
>> > units are dollars.
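>> >
>> > For instance, a dime has no exact double representation, so exact
>> > comparison of dime arithmetic fails while J's tolerant comparison
>> > hides the drift:
>> >
>> >    0.3 = 0.1 + 0.2          NB. tolerant comparison (default fuzz)
>> > 1
>> >    0.3 =!.0 (0.1 + 0.2)     NB. exact comparison
>> > 0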
>> >
>> > On Mon, Sep 18, 2017 at 5:25 AM, Raul Miller <[email protected]> wrote:
>> >
>> >> There are other cases, always.
>> >>
>> >> Of course, there are examples like Bayesian numbers, complex numbers,
>> >> and quaternions where we indeed generally use integers or floating
>> >> point numbers under the covers. But we do have high precision rational
>> >> numbers for cases where that's necessary.
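>> >> For instance, rational arithmetic stays exact:
>> >>
>> >>    1r3 + 1r6     NB. exact, no float rounding
>> >> 1r2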
>> >>
>> >> Anyway, math is an open-ended subject...
>> >>
>> >> We do indeed usually work with "measurable" values, where limited
>> >> precision is assumed. And when we go beyond that we do indeed usually
>> >> stick with integer values. But both of these are just "usually"...
>> >>
>> >> Thanks,
>> >>
>> >> --
>> >> Raul
>> >>
>> >> On Mon, Sep 18, 2017 at 5:04 AM, Erling Hellenäs
>> >> <[email protected]> wrote:
>> >> > Hi all!
>> >> >
>> >> > I think we work either with integers or with floating point. We use
>> >> > floating point when we have non-countable data and integers when we
>> >> > have countable data. Non-countable data is things like measurable
>> >> > quantities in engineering. Countable data is when we count a certain
>> >> > number of things and the result has to be an exact number: currency,
>> >> > time, date, indices, hashes, cryptos.
>> >> > When we use floating point we want the results to be as accurate as
>> >> > possible. We want a certain number of accurate digits. We want
>> >> > efficient calculations. We round the results.
>> >> > When we use countable data the results typically have to be exact.
>> >> > The credit and debit sides must match when we do book-keeping. When
>> >> > we work with date and time data we expect the results to be exact.
>> >> > Our indices have to be exact. Hashes and cryptos have to be exact.
>> >> > Yet when we work with countable data we sometimes have to do floating
>> >> > point calculations. We have to divide. We want to calculate the power
>> >> > of a number in an efficient way. This is where our fuzzy floor,
>> >> > ceiling, residue and antibase come in. They are not there to give us
>> >> > any "fuzzy" results. They are there to give us exact results when we
>> >> > convert our floating point data back to integer!
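>> >> > For instance, with the default comparison tolerance the fuzzy floor
>> >> > recovers the exact integer from a float quotient that has drifted a
>> >> > little, while a tolerance of 0 does not:
>> >> >
>> >> >    <. 0.3 % 0.1          NB. the float quotient is slightly below 3
>> >> > 3
>> >> >    <.!.0 (0.3 % 0.1)     NB. exact floor exposes the drift
>> >> > 2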
>> >> > To convert a 64-bit integer to floating point without losing
>> >> > accuracy we need a 128-bit float? That's no problem, our computers
>> >> > have been able to handle this for a long time? The floating-point
>> >> > units in today's processors are 128-bit? Comparison tolerance can
>> >> > then be set to something between 2^_64 and 2^_112, the latter being
>> >> > the precision of IEEE quadruple-precision floating point? We can
>> >> > avoid any risk of small errors in the calculations slipping into our
>> >> > exact integer calculations? When converting back to integer our
>> >> > results can be exact?
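>> >> > For instance, a double has a 53-bit significand, so distinct 64-bit
>> >> > integers can collapse to the same value once they become floats,
>> >> > while J's extended integers keep them apart:
>> >> >
>> >> >    (2^53) = 1 + 2^53       NB. as floats, 2^53 + 1 rounds back to 2^53
>> >> > 1
>> >> >    (2x^53) = 1 + 2x^53     NB. extended integers stay exact
>> >> > 0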
>> >> > J is seriously fucked up regarding this and has to be fixed?
>> >> >
>> >> > Cheers,
>> >> >
>> >> > Erling
>> >> >
>> >> >
>> >> > On 2017-09-16 at 00:38, Erling Hellenäs wrote:
>> >> >>
>> >> >> Hi all !
>> >> >>
>> >> >> Yes, but for now my discussion was restricted to the fuzz as such.
>> >> >> For now I left the conversion errors out of my argument. We can then
>> >> >> see an inconsistency? We get different results for float and integer
>> >> >> arguments? With integers fuzzy floor and ceiling are not used, with
>> >> >> floats they are? We should get the same results? Either both should
>> >> >> use fuzzy floor and ceiling, or neither should?
>> >> >>
>> >> >> We can also see that we get zero results when they should not be
>> >> >> zero? Our program recognizes an error, but does not notify the user
>> >> >> of it? Is that reasonable behavior? We recognize an error, write no
>> >> >> error message and instead give the user a faulty result?
>> >> >>
>> >> >> Then there is the question of possible random faults when the result
>> >> >> of floor is larger than its argument, the error is multiplied by a
>> >> >> large number and digits are cut from the front of the result in the
>> >> >> subtraction. We have to find out whether we can get faulty results
>> >> >> that are not acceptable?
>> >> >>
>> >> >> For the conversion error case it is reasonable to assume that the
>> >> >> programmer should be aware of the auto-conversion of integers when
>> >> >> they are used in a float operation? And that a float can hold only
>> >> >> about 16 digits? The alternative is to not have auto-conversion and
>> >> >> instead have an explicit conversion operator? That's how it is in
>> >> >> most programming languages? As soon as there is a risk associated
>> >> >> with the conversion, the programmer is forced to be explicit about
>> >> >> it? Or it is not even allowed?
>> >> >>
>> >> >> Another question raised in the thread is whether (14^2) should be
>> >> >> real or integer. In Nial, for example, it was integer. In F# you had
>> >> >> to explicitly convert 14 and 2 to float before the operation. The
>> >> >> result could easily be calculated with 128-bit floats?
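>> >> >> In J as it stands the distinction is easy to check with the standard
>> >> >> datatype utility: power produces a float even for integer arguments,
>> >> >> while integer multiplication stays integer:
>> >> >>
>> >> >>    datatype 14^2
>> >> >> floating
>> >> >>    datatype 14*14
>> >> >> integer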
>> >> >>
>> >> >> /Erling
>> >> >>
>> >> >> On 2017-09-15 23:14, Don Guinn wrote:
>> >> >>>
>> >> >>> Unfortunately, there are limits within which computers give "right"
>> >> >>> answers. Here is a case where the wrong answer is given:
>> >> >>>
>> >> >>>     (14^2) |!.0] 57290824867848391
>> >> >>> 104
>> >> >>>     196 | 57290824867848391
>> >> >>> 99
>> >> >>>
>> >> >>> This is because the number on the right has no exact representation
>> >> >>> as a double-precision float.
>> >> >>>
>> >> >>>     x:57290824867848391 0.5
>> >> >>> 57290824867848392 1r2
>> >> >>>
>> >> >>> So it's a hardware restriction. Either mod should only accept
>> >> >>> integer arguments or we have to deal with it. The fuzz is applied
>> >> >>> to the result, and the result here is within fuzz. One thing I have
>> >> >>> thought might help is for fuzz to be applied to the arguments: if
>> >> >>> they are both integral values, convert them to integer before
>> >> >>> applying mod.
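>> >> >>>
>> >> >>> A rough sketch of that idea (intmod is just an illustrative name,
>> >> >>> not a change to the actual primitive): tolerantly test both
>> >> >>> arguments for integrality, and when both pass, round them and take
>> >> >>> the residue exactly on extended integers; otherwise fall back to
>> >> >>> the ordinary tolerant |.
>> >> >>>
>> >> >>> intmod =: 4 : 0
>> >> >>>   if. (x = <. x) *. (y = <. y) do.    NB. both tolerantly integral?
>> >> >>>     (<. 1r2 + x: x) | <. 1r2 + x: y   NB. exact residue on extended integers
>> >> >>>   else.
>> >> >>>     x | y                             NB. ordinary tolerant residue
>> >> >>>   end.
>> >> >>> )
>> >> >>>
>> >> >>> With that definition (14^2) intmod 57290824867848391 should agree
>> >> >>> with 196 | 57290824867848391 and give 99.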
>> >> >>>
>> >> >>> On Fri, Sep 15, 2017 at 12:49 PM, Erling Hellenäs
>> >> >>> <[email protected]>
>> >> >>> wrote:
>> >> >>>
>> >> >>>> Even if we take care of the zero case, a fuzzy residue will
>> >> >>>> deliver random errors much higher than the small errors which
>> >> >>>> naturally affect real numbers? Multiplied by a big number, and
>> >> >>>> with the precision lost in the subtraction, these errors can
>> >> >>>> become very significant? /Erling
>> >> >>>>
>> >> >>>>
>> >> >>>> On 2017-09-15 17:25, Erling Hellenäs wrote:
>> >> >>>>
>> >> >>>>> Hi all!
>> >> >>>>>
>> >> >>>>> OK. I guess there is some way to implement this so that both
>> >> >>>>> Floor and Residue are fuzzy without having the circular
>> >> >>>>> dependency. It seems to basically be the implementation we have.
>> >> >>>>> As specified, Floor then seems to give the closest integer within
>> >> >>>>> comparison tolerance.
>> >> >>>>> 9!:19 [ 5.68434e_14
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> 25j5": (9!:18'') * 5729082486784839 % 196
>> >> >>>>>
>> >> >>>>> 1.66153
>> >> >>>>>
>> >> >>>>> 25j5":a=: 5729082486784839 % 196
>> >> >>>>>
>> >> >>>>> 29230012687677.75000
>> >> >>>>>
>> >> >>>>> 25j5": <. a
>> >> >>>>>
>> >> >>>>> 29230012687678.00000
>> >> >>>>>
>> >> >>>>> It means that this result, which we find strange, is according to
>> >> >>>>> specification?
>> >> >>>>>
>> >> >>>>> (14^2) | 5729082486784839
>> >> >>>>>
>> >> >>>>> 0
>> >> >>>>>
>> >> >>>>> It also means that this result, which we find correct, is not
>> >> >>>>> according to specification?
>> >> >>>>>
>> >> >>>>> 196 | 5729082486784839
>> >> >>>>>
>> >> >>>>> 147
>> >> >>>>>
>> >> >>>>>
>> >> >>>>> So now there is a question of whether we should live with this
>> >> >>>>> inconsistency, or change the specification, or change the part of
>> >> >>>>> the implementation that does not follow the specification?
>> >> >>>>>
>> >> >>>>> I think the effort to create a consistent algebra is good.
>> >> >>>>> However, in my practical experience I found it important that
>> >> >>>>> functions give correct results. Incorrect results without any
>> >> >>>>> fault indication could mean that the patient dies, the bridge
>> >> >>>>> collapses or the company becomes insolvent. Could the zero result
>> >> >>>>> really be considered correct? If not, is there a way in which we
>> >> >>>>> could deliver a fault indication instead of the zero result? That
>> >> >>>>> would mean an error in the integer case and a NaN in the real
>> >> >>>>> case?
>> >> >>>>>
>> >> >>>>> Cheers,
>> >> >>>>>
>> >> >>>>> Erling Hellenäs
>> >> >>>>>
>> >> >>>>> On 2017-09-15 at 14:13, Raul Miller wrote:
>> >> >>>>>
>> >> >>>>>> Eugene's work should be thought of as specification, rather than
>> >> >>>>>> implementation.
>> >> >>>>>>
>> >> >>>>>> That said, chasing through the various implementations of floor
>> >> >>>>>> for the various C data types can be an interesting exercise.
>> >> >>>>>>
>> >> >>>>>> Thanks,
>> >> >>>>>>
>> >> >>>>>>