Probably the main problem, as exemplified in the "sine(a)" problem, is
that a and sine a are the same to within 10e-16 and the cosine is 1-7.5e-5,
but the display is only 6 figures, so the displayed result is rounded
to 1, as it is closer to 1 than to 0.999999. This is grade-8
rounding. Increasing the numerical accuracy won't help, but increasing the
display to, say, 12 digits might show something other than 1 (such as
0.999999999993).
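A minimal J sketch of that display effect, using the angle from Ian's
(sin A)=1.23E-5 example and assuming print precision starts at J's default
of 6 digits:

   a =. 1.23e_5
   1 o. a          NB. sine of a: indistinguishable from a at this scale
1.23e_5
   2 o. a          NB. cosine of a, shown at the default 6-digit print precision
1
   9!:11 ]12       NB. widen the print precision to 12 digits
   2 o. a          NB. same stored value, but it no longer rounds to 1
0.999999999924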
Also, looking at the TABULA "church clock" example: somehow, data which is of 2
to 3 figure accuracy is represented at 6-figure print precision.
A height of 75 ft is of 2-figure accuracy, i.e. 75 +/- 0.5 ft, yet it
is presented in the display as 75.000 and calculated to a greater
number of digits internally. This doesn't mean that 75 is 75.0 or
75.000, etc. Using 64-bit floating point simply reduces computer error
(partly due to converting from base 2 to base 10) but doesn't give
more accuracy than is present in the input data (neither does hand
calculation). 1r3 will be, in decimal form, 0.3333333 ad nauseam no
matter how you do it, and somewhere, at the limit of the computer and the real
world, it is rounded.
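In a fresh J session, for instance (an illustration only, nothing to do with
TABULA's internals):

   1r3          NB. the exact rational needs no decimal expansion internally
1r3
   1 % 3        NB. the same value as a 64-bit float, at 6-digit print precision
0.333333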
Don Kelly
On 2019-03-27 11:32 a.m., Ian Clark wrote:
Raul wrote:
So I am curious about the examples driving your concern here.
I'm in the throes of converting TABULA (
https://code.jsoftware.com/wiki/TABULA ) to work with rationals instead of
floats. Or more accurately, the engines CAL and UU which TABULA uses.
No, I don't need to offer the speed of light to more than 10 significant
digits. My motivation has little to do with performing a particular
scientific calculation, and everything to do with offering a
general-purpose tool and rubbing down the rough edges as they emerge. It's
a game of whack-the-rat.
The imprecision of working with floating-point numbers shows up so often
with TABULA that I haven't bothered to collect examples. In just about
every session I hit an instance or ten. But it's worse when using the tool
to demonstrate technical principles to novices rather than to do mundane
calculations, because in the latter case it would be used by an engineer
well-versed in computers and the funny little ways of floating point,
tolerant comparisons and binary-to-decimal conversion. But a novice is apt
to suffer a crisis of confidence when she sees (sin A)=1.23E-5 in the same
display as (cos A)=1. Even hardened physicists wince.
Two areas stand out:
• Infinitesimals, i.e. intermediate values which look like zero – and
ought to be zero – but aren't. They act like grit in the works.
• Backfitting, where the user overtypes a calculated value and CAL backfits
suitable input values, for which it mainly uses Newton-Raphson (N-R)
algorithms – which don't like noisy values (see the sketch after this list).
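A toy sketch of why that matters: one generic Newton-Raphson step in J with a
numerically estimated slope (not CAL's actual backfitter; the verb names and
the sample equation are invented for illustration):

NB. One N-R step with a forward-difference slope estimate.  Where the slope
NB. is nearly zero, floating-point noise in u dominates the correction
NB. fx % slope and the iteration wanders instead of converging.
step =: 1 : 0
  fx =. u y
  slope =. 1e_6 %~ (u y + 1e_6) - fx
  y - fx % slope
)
f =: 3 : '0.5 -~ 2 o. y'     NB. sample target: a root of f solves cos y = 0.5
   f step ^: _ ] 1           NB. iterate to a fixed point, roughly pi % 3
1.0472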
It's too early to say yet – I have to finish the conversion and think up
examples to stress-test it before I can be sure the effort is worthwhile.
So far I've only upgraded UU (the units-conversion engine), but already
some backfitting examples which were rather iffy are hitting the target
spot-on: particularly where the slope of the "hill" being climbed is nearly
zero. Even I succumb to feelings of pleasure to see (sin A)=0 in the same
display as (cos A)=1.
But knowing the innards of CAL, I can't understand how it can possibly be
showing benefits at this early stage. Perhaps UU's rational values are
leaking further down the cascade of calculations than I expected? I'd love
to get to the bottom of it, but my systematic "rationalization" of the CAL
code will destroy the evidence, just as exploring Mars will destroy the
evidence for indigenous life. Too bad: I'm not aiming at CAL working
occasionally, but every time.
Thanks for reminding me about digit separation. Yes, my numeral converter
(I find I'm working more with numerals than with numeric atoms) can already
handle standard scientific notation, like '6.62607015E-34' -- plus a few
J-ish forms like '1p1'. I only had to type in π to 50 places of decimals to
feel the need for some form of digit separation (…a good tool should
support ALL forms!) e.g. '6.626,070,15E-34' but was unconsciously assuming
(y -. ',') would handle it.
…It won't. SI specifies spaces as digit separators, and Germany uses commas
where the UK and USA use dots, e.g. '6,626 070 15E-34'. Okay, fine… but in
places I detect the first space in a (string) quantity to show where the
numeral stops and the units begin. Ah well… another rat to whack.
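For what it's worth, a naive J sketch of the separator problem (a hypothetical
helper, not TABULA's converter; it assumes the UK/US convention, so it gets the
German comma-as-decimal-mark wrong and would also eat the space that separates
the numeral from the units):

   stripsep =: -.&' ,'     NB. delete spaces and commas from the numeral string
   stripsep '6.626,070,15E-34'
6.62607015E-34
   stripsep '6.626 070 15E-34'
6.62607015E-34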
Ian
On Wed, 27 Mar 2019 at 15:16, Raul Miller <rauldmil...@gmail.com> wrote:
On Tue, Mar 26, 2019 at 7:38 PM Ian Clark <earthspo...@gmail.com> wrote:
I will still employ my "mickey-mouse" method, because it's easily checked
once it's coded. I need built into TABULA a number of physical constants
which the SI defines exactly, e.g.
• The thermodynamic temperature of the triple point of water, Ttpw , is
273.16 K *exactly*.
• The speed of light in vacuo is 299792458 m/s *exactly*.
The first I can generate and handle as 27316r100, whereas (x: 273.16)
is 6829r25. If you multiply top and bottom by 4 you get my numeral. But
(x:) will round decimal numerals with more than 15 sig figs and so lose
the exactness.
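As a J session transcript of that point (the 6829r25 value is as stated
above; the caveat is that a decimal literal with more than about 15
significant digits has already been rounded to a 64-bit float before x:
ever sees it, which is why the exact constants are built from integers or
parsed from strings instead):

   27316r100       NB. J reduces the exact rational to lowest terms
6829r25
   x: 273.16       NB. the same number: 273.16 survives the trip through a float
6829r25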
I was looking at
https://en.m.wikipedia.org/wiki/2019_redefinition_of_SI_base_units but
I did not notice anything with more than 10 significant digits. So I
am curious about the examples driving your concern here.
(That said, if you're going to go there, and you have not already done
so, I'd use your approach on strings, and I'd make it so that it would
work properly on values like '6.62607015e-34'. I might also be
tempted to make it accept group-of-three-digit representations to make
obvious typos stand out visually. Perhaps: '6.626 070 15e-34')
Thanks,
--
Raul
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm