The same thing happened when J went to 64-bit integers: when large integers
are converted to float, significance may be lost. It isn't a problem with
32-bit J.
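
For example (a quick sketch of my own, assuming 64-bit J; 2^53 is the first
integer whose neighbours a double can no longer distinguish):

   1 + 2x^53          NB. extended precision keeps every digit
9007199254740993
   <. 1 + 2^53        NB. the floating-point route silently drops the low bit
9007199254740992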

On Mon, Apr 8, 2019 at 10:21 AM William Tanksley, Jr <wtanksle...@gmail.com>
wrote:

> What's happening is that inexact trumps exact -- floats imply rounding
> and other non-arithmetic behavior, so when they appear in data they
> always carry a huge warning sign.
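>
> For instance (a minimal illustration of that promotion, not part of the
> original message):
>
>    datatype 1r2 + 1        NB. exact plus exact stays exact
> rational
>    datatype 1r2 + 0.5      NB. one float makes the whole result floating
> floating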
>
> -Wm
>
> On Mon, Apr 8, 2019 at 8:48 AM 'Mike Day' via Programming
> <programm...@jsoftware.com> wrote:
> >
> > ... also it might be worth noting this, which I had wondered about but
> > not confirmed before:
> >    VERSION_j_
> > 701.1 2
> >    datatype 1r3 2r7
> > rational
> >    datatype 1r3 2r7 1p1
> > floating
> >    datatype x:1r3 2r7 1p1
> > rational
> >
> > And I’ve just checked; it’s the same behaviour in J901 beta.
> >
> > So irrationals trump rationals, as they should, I suppose, just as any
> > floats raise type integer to type floating.
> >
> > x: ensures rational here.
> >
> >
> > Mike
> >
> > Sent from my iPad
> >
> > > On 8 Apr 2019, at 14:30, Ian Clark <earthspo...@gmail.com> wrote:
> > >
> > > Linda wrote
> > >> The rational numbers are exact.
> > >
> > > Yes, programs using them have a nice crisp feel.
> > > I used to think the only people who should be using them are number
> > > theorists, but now I'm a convert to their general use.
> > > But it's worth remembering that in some circumstances you're working
> > > with rational approximations, not exact values.
> > > Common examples: anything involving π and √2.
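> > >
> > > For instance (a small illustration of my own; outputs elided):
> > >
> > >    x: 1p1       NB. a rational approximating the float nearest pi, not pi itself
> > >    x: %: 2      NB. likewise only a rational stand-in for the square root of 2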
> > >
> > > I'm using Roger Hui's rational replacements for trig based on (o.) --
> > >
> > > https://code.jsoftware.com/wiki/Essays/Extended_Precision_Functions#Collected_Definitions
> > >
> > > …which give 40 decimal places. They're fun to play with.
> > > No snags hit yet. But in the course of my investigations, some massively
> > > long rationals emerge.
> > > I can't see any performance deterioration yet, but I've developed code
> > > to cut back a monster rational to (say) 40 decimal places.
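> > >
> > > Something along these lines, perhaps (a sketch of my own, not Ian's
> > > actual code; trim40 is a hypothetical name):
> > >
> > >    NB. round a rational to 40 decimal places; the result stays rational
> > >    trim40 =: 3 : '(<. 1r2 + y * 10x^40) % 10x^40'
> > >    trim40 x: 2r3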
> > >
> > > Ian Clark
> > >
> > >> On Mon, 8 Apr 2019 at 12:57, Linda Alvord <lindaalvor...@outlook.com>
> > >> wrote:
> > >>
> > >> The rational numbers are exact.
> > >>
> > >> ([:+/\|.)^:(i.5)1 1
> > >> 1 1
> > >> 1 2
> > >> 2 3
> > >> 3 5
> > >> 5 8
> > >> fr=: 13 :'%/"1([:+/\|.)^:(i.y)1 1'
> > >> (,.0j25":"0 fr 20);' ';x:,.fr 20
> > >> ┌───────────────────────────┬─┬──────────┐
> > >> │1.0000000000000000000000000│ │ 1│
> > >> │0.5000000000000000000000000│ │ 1r2│
> > >> │0.6666666666666666300000000│ │ 2r3│
> > >> │0.5999999999999999800000000│ │ 3r5│
> > >> │0.6250000000000000000000000│ │ 5r8│
> > >> │0.6153846153846154200000000│ │ 8r13│
> > >> │0.6190476190476190700000000│ │ 13r21│
> > >> │0.6176470588235294400000000│ │ 21r34│
> > >> │0.6181818181818181700000000│ │ 34r55│
> > >> │0.6179775280898876000000000│ │ 55r89│
> > >> │0.6180555555555555800000000│ │ 89r144│
> > >> │0.6180257510729614300000000│ │ 144r233│
> > >> │0.6180371352785145600000000│ │ 233r377│
> > >> │0.6180327868852458800000000│ │ 377r610│
> > >> │0.6180344478216818200000000│ │ 610r987│
> > >> │0.6180338134001252000000000│ │ 987r1597│
> > >> │0.6180340557275542100000000│ │ 1597r2584│
> > >> │0.6180339631667065600000000│ │ 2584r4181│
> > >> │0.6180339985218034100000000│ │ 4181r6765│
> > >> │0.6180339850173579600000000│ │6765r10946│
> > >> └───────────────────────────┴─┴──────────┘
> > >>
> > >> Linda
> > >>
> > >>
> > >>
> > >> -----Original Message-----
> > >> From: Programming <programming-boun...@forums.jsoftware.com> On Behalf Of
> > >> William Tanksley, Jr
> > >> Sent: Friday, March 29, 2019 12:23 PM
> > >> To: Programming forum <programm...@jsoftware.com>
> > >> Subject: Re: [Jprogramming] converting from 'floating' to 'rational'
> > >>
> > >> Ian Clark <earthspo...@gmail.com> wrote:
> > >>> But why should I feel obliged to carry on using lossy methods when
> > >>> I've just discovered I don't need to? Methods such as floating point
> > >>> arithmetic, plus truncation of infinite series at some arbitrary
> > >>> point. The fact that few practical measurements are made to an
> > >>> accuracy greater than 0.01% doesn't actually justify lossy methods in
> > >>> the calculating machine. It merely condones them, which is something
> > >>> else entirely.
> > >>
> > >> There will be a cost, of course. Supporting arbitrarily small and large
> > >> numbers changes the time characteristics of the computations in ways that
> > >> will depend on the log-size of the numbers -- and of course will blow the
> > >> CPU's caching. Also, because the intermediate values are being stored with
> > >> unlimited precision, you may find some surprises, such as values close to 1
> > >> which have enormous numerators and denominators.
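> > >>
> > >> A toy illustration of that last point (my own example, not from the thread):
> > >>
> > >>    p =: */ 1 + % *: x: 100 + i. 30   NB. product of thirty rationals, each just above 1
> > >>    # ": p                            NB. the value is barely above 1, yet its printed fraction is enormous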
> > >>
> > >> IMO it's a worthy experiment, especially if you wind up gathering data
> > >> about the cost and benefit.
> > >>
> > >> There are some interesting reflections going on about this on the "unums"
> > >> mailing list. The trouble with indefinite-precision rationals is that they
> > >> are overkill for all of the problems where they're actually needed, since
> > >> the inputs and the solution will normally need to be expressed to only
> > >> finitely many digits. Now, I don't think this makes doing experiments with
> > >> them worthless; far from it. By tracking things like the smallest expected
> > >> input (for example the smallest triangle side, or the largest ratio between
> > >> sides) and the largest integer generated as an intermediate value (perhaps
> > >> also tracking the ratio in which this integer appeared), we can wind up
> > >> answering how bad things can get (of course, this is the task of numerical
> > >> analysis).
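> > >>
> > >> For the tracking part, something like this might do (a hypothetical helper
> > >> of my own, not an existing library verb):
> > >>
> > >>    NB. longest numerator or denominator, in decimal digits, across a list of rationals
> > >>    maxdigits =: 3 : '>./ #@":"0 , 2 x: y'
> > >>    maxdigits 1r3 22r7 355r113
> > >> 3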
> > >>
> > >> Ulrich Kulisch developed technology called the "super-accumulator", which
> > >> was supposed to function alongside the usual group of floating-point
> > >> registers. It stored an overkill number of bits to permit it to accumulate
> > >> multiple additions of products of arbitrary floats, the sort of operations
> > >> you need to evaluate polynomials and linear algebra. Using this, he was
> > >> able to show that a large number of operations which were considered
> > >> unstable could be stabilized by providing this unrounded accumulator.
> > >> In the end it wasn't made part of the IEEE standard, but it's being
> > >> included in some of the numerical systems being developed in response to
> > >> the need for more flexible floating-point formats from the machine-learning
> > >> world, where smaller-bitwidth floating-point numbers both make stability a
> > >> serious concern and also make the required size of the super-accumulator
> > >> much smaller.
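> > >>
> > >> You can mimic the idea in J by accumulating a dot product exactly and only
> > >> rounding at the end (just an analogy of mine, not Kulisch's hardware):
> > >>
> > >>    a =: 1e16 1 _1e16 1
> > >>    b =: 1 1 1 1
> > >>    a +/ . * b              NB. floating dot product; the two 1s can be lost to rounding
> > >>    (x: a) +/ . * x: b      NB. exact accumulation gives 2 on the nose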
> > >>
> > >>> Ian Clark
> > >>
> > >> -Wm
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
