Don Kelly wrote:
> Of some interest, looking at the sine problem one can note that
> a = sin(a) = 1.23e_5 from the first term of the series and the second
> term is of the order of 10e_16. The cosine series has a first term of 1
> and a second of -7.56e-11.

That's the very problem which has triggered my foray into calculating with
rationals. For run-of-the-mill engineering problems, if r is a small number
in radians, you don't bother to calculate sin r, you just use r itself.
Ditto for cos r, you just use 1. But that yields a calculating engine that
has a blind spot for the very sorts of topic I want to investigate: the
experimental evidence for cosmological theories more elaborate than
Newton's Laws. Let me stress that the purpose is educational, for a lay
audience, not to advance the state of the art in the speciality concerned
(where I trust the practitioners will know what they are doing when they
use a computer).

Thus two of the chapters in the book(s) I'm planning will deal with
attempting to synchronise Coordinated Universal Time (UTC) between Mars and
Earth, and accurate timing for GPS satellites. Both require significant
relativistic corrections: not just Special-relativistic but
General-relativistic. When it's
taught at school, the textbooks give the idea that for all practical
purposes you can use Newton's Laws, and that it took someone like Sir
Arthur Eddington to devise an experiment (actually an expedition) to
observe any difference between Newtonian and Einsteinian predictions.

Pupils are invited to believe that nothing in their normal lives will give
them the slightest use for Einstein's theories, let alone String Theory.
And when assured that there is a practical use, that act of assurance
assumes the nature of a religious belief, i.e. founded on faith, not facts.
By "facts" I mean observations plus *demystified* calculations to make
predictions about further observations. Was it Leibniz who looked forward
to a time when educated citizens would resolve their differences by taking
up their tablets and saying "Let Us Calculate"? That's the reason the tool
is called TABULA, Latin for (wax) tablet. It's my 2¢ contribution to
stopping Science from degenerating into yet another organised religion.

There will be other chapters on flat-earthism, evolution, environmental
degradation and homeopathic medicine. Not to take sides, mind you, but to
perform representative calculations to determine the magnitude of the
effect you're trying to observe in order to come to a rational conclusion.
To do that I need a tool that doesn't conflate (r) and (sin r) when
contemplating the angular difference between opposite edges of a distant
galaxy, and can add 1 to Avogadro's Number and come up with a different
number.
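A sketch in J of that last requirement, contrasting a 64-bit float with the exact SI value of Avogadro's Number held in extended precision:

```j
   NA =: 6.02214076e23              NB. Avogadro's Number as a 64-bit float
   NA = 1 + NA                      NB. adding 1 changes nothing: 1 ulp here is ~7e7
1
   NAx =: 602214076000000000000000x NB. the same value as an exact extended integer
   NAx = 1 + NAx                    NB. now adding 1 gives a genuinely different number
0
```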

On Sun, 31 Mar 2019 at 03:09, Don Kelly <d...@shaw.ca> wrote:

> Fair enough. I graduated in Electrical Engineering in '54, then took an
> MSc in '56 and a PhD in the mid-60s. I used a slide rule, and 6-figure
> log tables where needed, in the 50s. I was aware of sig figs from the
> first day of lectures. First programming was in '61, using MAD (Michigan
> Algorithmic Decoder). Limits of digital representation were well known.
> Pick up any numerical analysis book and error estimation is high on the
> list. Tell a bricklayer that fuzz happens - he will understand. The
> history that you mention is known to me. In a case where hundreds or
> more simultaneous non-linear equations are needed (actually handled by
> engineers about 1964 for power system load flow studies), going to
> 64-bit vs 32-bit systems makes sense. I do see what you are doing and
> tried it for the sine(a) problem; assuming exact numbers I get a result
> that is apparently 1 but differs: 1 minus the floating-point answer
> gives a difference of roughly 7.56e_11, which agrees well with the
> conversion of the extended-precision number to floating point. This
> difference will not show up in Tabula. Of some interest, looking at the
> sine problem one can note that a = sin(a) = 1.23e_5 from the first term
> of the series, and the second term is of the order of 10e_16. The
> cosine series has a first term of 1 and a second of -7.56e-11.
>
> Anyhow, I see what you are doing, and the ability to use rational
> fractions has advantages in some cases (e.g. 1r3 vs 1%3). I wish you
> all the best and I found it interesting.
>
> Don
>
>
> On 2019-03-28 11:05 p.m., Ian Clark wrote:
> > @Devon - thanks for drawing my attention to a method of formatting I've
> > been overlooking in favour of dyadic (":). Though it isn't actually a
> > "display problem" I've got: more a choice of facilities I want to offer.
> > Anyway 8!:0 is another arrow in my quiver.
> >
> > @Don - If your point concerning the Church Clock t-table is that
> > estimating the height of the tower to be 75 ft carries the implication
> > that this is really 75 ±1 ft - a greater-than 1% error, which will
> > propagate through the calculations - take it as read. TABULA has a
> > dropdown menu (in the range 0 to 9) to let you choose how many decimal
> > places to show. Upping the setting to "9" will yield extra digits
> > after the decimal point, but not "meaningful" ones – at least not
> > meaningful to someone rebuilding the tower (who is apt to estimate to
> > the nearest course of bricks). But perhaps they're meaningful to a
> > lecturer trying to explain the limits of calculating machines to a
> > class of bricklayers. Or maybe they're engineers, or even accountants,
> > who'd lose faith in the entire model on seeing a column of figures
> > that don't add up. How to construe those extra digits will differ in
> > each case. TABULA is a general-purpose tool, not restricted to a
> > particular trade.
> >
> > I trust you're not saying that it is somehow wrong, e.g. wasteful or
> > misleading, to use a calculation method which delivers a precision
> > that errors of measurement will swamp. Though this is a view I used to
> > hear, more or less veiled, when I graduated in Math/Physics in the
> > 1960s. Back then we used slide-rules, which gave two places of
> > decimals if you had good eyesight, and we generally worked to 10%
> > tolerance, for which 2 places of decimals was adequate. Only a
> > precision-freak would demand more, we told ourselves.
> >
> > Gradually this view got extended to saying that showing data to
> > surplus places of decimals was not just too much of a good thing, but
> > was actually bad because it invited the superstitious to place some
> > meaning on those excess digits. I recall quite serious, quite heated
> > discussions in the scientific press about how if you knew your
> > measurement error to 2 places of decimals, then you ought to reveal
> > it.
> >
> > Like so: 75.00 ±1.08 ft.
> >
> > About that I have no opinion. Except to offer you, as I do, a button
> > to add ±1% to a given reading to get an idea of the sensitivity of
> > your model to errors of measurement.
> >
> > More recently I used to hear scientists saying that 32-bit
> > floating-point numbers were good enough for anyone, and vendors'
> > attempts to kid us all to upgrade to 64-bit were just a ruse to "shift
> > more iron". There's even a proposal being aired to introduce 128-bit,
> > but nobody seems to be taking it seriously. Not even the guys who
> > program GPS satellites (which need to correct for general relativity)
> > – leastways, they're keeping quiet about it.
> >
> > Then along comes this Clark guy who thinks there's a market for a
> > scientific calculator with unlimited precision. I feel your pain.
> >
> > But why should I feel obliged to carry on using lossy methods when
> > I've just discovered I don't need to? Methods such as floating-point
> > arithmetic, plus truncation of infinite series at some arbitrary
> > point. The fact that few practical measurements are made to an
> > accuracy greater than 0.01% doesn't actually justify lossy methods in
> > the calculating machine. It merely condones them, which is something
> > else entirely.
> >
> > No, I'm not a starry-eyed precision-freak. I use iterative methods to
> > calculate functions that solve equations, and I see rounding errors build
> > up exponentially, making my algorithms go unstable. Eliminating rounding
> > errors by performing these algorithms in rational arithmetic is something
> > that needs to be tried. And I've just begun trying it.
> >
> > Yes, at the end of the day computed quantities need to be displayed, and
> > conventionally such displays entail decimal numbers. Which need to be
> > chopped off at the right-hand end somewhere.
> >
> > That's not my department. I'll let my user see as many places of
> > decimals as she has the stomach for. Even if (like Ellie in Carl
> > Sagan's novel "Contact") she's looking for a personal message from a
> > galactic intelligence in the far digits of π.
> >
> > Ian Clark
> >
> > On Thu, 28 Mar 2019 at 02:27, Devon McCormick <devon...@gmail.com>
> > wrote:
> >
> >> Ian - could your display problem be solved by always formatting
> >> displays but retaining arbitrary internal precision? You probably
> >> already do this but thought I'd mention it because I just had to
> >> format a correlation matrix to show only two digits of precision, but
> >> was annoyed that my rounding fnc was showing me things like
> >> "0.0199999999" and "0.20000001" when I rediscovered "8!:0" and looked
> >> up how to format a number properly.
> >>
> >>
> >> On Wed, Mar 27, 2019 at 7:57 PM Don Kelly <d...@shaw.ca> wrote:
> >>
> >>> Probably the main problem, as exemplified in the sine(a) problem, is
> >>> that a and sine a are the same to within 10e-16 and the cosine is
> >>> 1-7.56e-11, but the display is only 6 figures, so the displayed
> >>> result is rounded to 1, as it is closer to 1 than to 0.999999. This
> >>> is grade-8 rounding. Increasing the numerical accuracy won't help,
> >>> but increasing the display to, say, 12 digits might show something
> >>> other than 1 (such as 0.999999999924).
> >>>
> >>> Also, looking at the Tabula "church clock": somehow data which is of
> >>> 2 to 3 figure accuracy is represented at 6-figure print precision. A
> >>> height of 75 ft is of 2-figure accuracy, i.e. 75 ± 0.5 ft, yet is
> >>> being presented in the display as 75.000 and calculated to a greater
> >>> number of digits internally. This doesn't mean that 75 is 75.0 nor
> >>> 75.000 etc. Using 64-bit floating point simply reduces computer
> >>> error (partly due to converting from base 2 to base 10) but doesn't
> >>> give more accuracy than is present in the input data (neither does
> >>> hand calculation). 1r3 will be, in decimal form, 0.3333333 ad
> >>> nauseam no matter how you do it, and somewhere, at the limit of the
> >>> computer and the real world, it is rounded.
> >>>
> >>> Don Kelly
> >>>
> >>> On 2019-03-27 11:32 a.m., Ian Clark wrote:
> >>>> Raul wrote:
> >>>>> So I am curious about the examples driving your concern here.
> >>>> I'm in the throes of converting TABULA
> >>>> ( https://code.jsoftware.com/wiki/TABULA ) to work with rationals
> >>>> instead of floats. Or more accurately, the engines CAL and UU which
> >>>> TABULA uses.
> >>>>
> >>>> No, I don't need to offer the speed of light to more than 10
> >>>> significant digits. My motivation has little to do with performing
> >>>> a particular scientific calculation, and everything to do with
> >>>> offering a general-purpose tool and rubbing down the rough edges as
> >>>> they emerge. It's a game of whack-the-rat.
> >>>>
> >>>> The imprecision of working with floating-point numbers shows up so
> >>>> often with TABULA that I haven't bothered to collect examples. In
> >>>> just about every session I hit an instance or ten. But it's worse
> >>>> when using the tool to demonstrate technical principles to novices,
> >>>> rather than do mundane calculations, because in the latter case it
> >>>> would be used by an engineer well-versed in computers and the funny
> >>>> little ways of floating point, tolerant comparisons and
> >>>> binary-to-decimal conversion. But a novice is apt to suffer a
> >>>> crisis of confidence when she sees (sin A)=1.23E-5 in the same
> >>>> display as (cos A)=1. Even hardened physicists wince.
> >>>>
> >>>> Two areas stand out:
> >>>> • Infinitesimals, i.e. intermediate values which look like zero –
> >>>> and ought to be zero – but aren't. They act like grit in the works.
> >>>> • Backfitting, where the user overtypes a calculated value and CAL
> >>>> backfits suitable input values, for which it mainly uses N-R
> >>>> algorithms – which don't like noisy values.
> >>>>
> >>>> It's too early to say yet – I have to finish the conversion and
> >>>> think up examples to stress-test it before I can be sure the effort
> >>>> is worthwhile. So far I've only upgraded UU (the units-conversion
> >>>> engine), but already some backfitting examples which were rather
> >>>> iffy are hitting the target spot-on: particularly where the slope
> >>>> of the "hill" being climbed is nearly zero. Even I succumb to
> >>>> feelings of pleasure to see (sin A)=0 in the same display as
> >>>> (cos A)=1.
> >>>>
> >>>> But knowing the innards of CAL, I can't understand how it can
> >>>> possibly be showing benefits at this early stage. Perhaps UU's
> >>>> rational values are leaking further down the cascade of
> >>>> calculations than I expected? I'd love to get to the bottom of it,
> >>>> but my systematic "rationalization" of the CAL code will destroy
> >>>> the evidence, just as exploring Mars will destroy the evidence for
> >>>> indigenous life. Too bad: I'm not aiming at CAL working
> >>>> occasionally, but every time.
> >>>>
> >>>> Thanks for reminding me about digits separation. Yes, my numeral
> >>>> converter (I find I'm mainly working with numerals rather than with
> >>>> numeric atoms) can already handle standard scientific notation,
> >>>> like '6.62607015E-34' -- plus a few J-ish forms like '1p1'. I only
> >>>> had to type in π to 50 places of decimals to feel the need for some
> >>>> form of digit separation (…a good tool should support ALL forms!)
> >>>> e.g. '6.626,070,15E-34' but was unconsciously assuming (y -. ',')
> >>>> would handle it.
> >>>>
> >>>> …It won't. SI specifies spaces as digit separators, and Germany
> >>>> uses commas where the UK and USA use dots, e.g. '6,626 070 15E-34'.
> >>>> Okay, fine… but in places I detect the first space in a (string)
> >>>> quantity to show where the numeral stops and the units begin. Ah
> >>>> well… another rat to whack.
> >>>>
> >>>> Ian
> >>>>
> >>>>
> >>>>> On Wed, 27 Mar 2019 at 15:16, Raul Miller <rauldmil...@gmail.com>
> >>>>> wrote:
> >>>>>> On Tue, Mar 26, 2019 at 7:38 PM Ian Clark <earthspo...@gmail.com>
> >>>>>> wrote:
> >>>>>> I will still employ my "mickey-mouse" method, because it's easily
> >>>>>> checked once it's coded. I need built into TABULA a number of
> >>>>>> physical constants which the SI defines exactly, e.g.
> >>>>>> • The thermodynamic temperature of the triple point of water,
> >>>>>> Ttpw, is 273.16 K *exactly*.
> >>>>>> • The speed of light in vacuo is 299792458 m/s *exactly*.
> >>>>>>
> >>>>>> The first I can generate and handle as 27316r100, whereas
> >>>>>> (x: 273.16) is 6829r25. If you multiply top and bottom by 4 you
> >>>>>> get my numeral. But (x:) will round decimal numerals with more
> >>>>>> than 15 sig figs and so lose the exactness.
> >>>>> I was looking at
> >>>>> https://en.m.wikipedia.org/wiki/2019_redefinition_of_SI_base_units
> >>>>> but I did not notice anything with more than 10 significant
> >>>>> digits. So I am curious about the examples driving your concern
> >>>>> here.
> >>>>>
> >>>>> (That said, if you're going to go there, and you have not already
> >>>>> done so, I'd use your approach on strings, and I'd make it so that
> >>>>> it would work properly on values like '6.62607015e-34'. I might
> >>>>> also be tempted to make it accept group-of-three-digit
> >>>>> representations to make obvious typos stand out visually.
> >>>>> Perhaps: '6.626 070 15e-34')
> >>>>>
> >>>>> Thanks,
> >>>>>
> >>>>> --
> >>>>> Raul
> >>>>>
> >>>>> ----------------------------------------------------------------------
> >>>>> For information about J forums see http://www.jsoftware.com/forums.htm
> >>
> >>
> >> --
> >>
> >> Devon McCormick, CFA
> >>
> >> Quantitative Consultant
