Re: (default) Real->Rat precision should match what compiler uses for literals

2018-03-04 Thread Solomon Foster
On Sat, Mar 3, 2018 at 3:32 PM, yary  wrote:

> Or instead of 1/2**(32 or 64), re-asking these questions about epsilon:
>
> "  Why so large?
>
>Why not zero?  "
>
> What's the justification for using 1/100,000 vs. something smaller vs. 0 "max
> possible precision?"
>

The problem with using max possible precision is that you will generate a
Rat with a large denominator.  And when you start doing math with those,
the results will usually flip back over to Nums pretty quickly.  But if you
went to the trouble of converting your number from a Num to a Rat,
presumably it's because you wanted to work with Rats, so ending up with a
Num again is a frustrating result.
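
To make the flip concrete -- this is a sketch of my understanding of
Rakudo's behaviour, where Rat arithmetic falls back to Num as soon as a
result's denominator no longer fits in 64 bits.  A max-precision conversion
of a Num typically has a denominator like 2**48 (if I remember right,
that's the exact denominator of the double nearest pi), so a couple of
multiplications are enough to hit the limit:

> ((1/2**20) * (1/2**20)).WHAT   # denominator 2**40 still fits in 64 bits
(Rat)
> ((1/2**40) * (1/2**40)).WHAT   # denominator 2**80 does not fit
(Num)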

I don't remember if I was responsible for 1e-6 as the epsilon or not, but
it's quite possible it was me, because in my experience that number is a
common 3D tolerance in CAD software (I believe it's the default in ACIS)
and I tend to grab it for that purpose.  My intuition is that the number
should probably be between 1e-6 and 1e-8, but it would be great if someone
wanted to put in some research/testing to figure out a better value.

It's important to remember that the purpose of Rat in p6 is to dodge around
the common pitfalls of floating point numbers in casual use, while
degrading to Num to prevent your calculations from crawling to a halt when
the denominator gets too big.  For real-world measures, a double's 15
digits of accuracy is normally overkill.  (If I'm doing the math right, a
CAD model of the second Death Star in meters could carry its dimensions
down to the nanometer within those 15 digits.)
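
Checking my own math there, and guessing the second Death Star at roughly
900 km across: that's 9e5 m of extent at 1e-9 m resolution, about 9e14
distinct positions, which is just shy of 15 significant digits:

> log(9e5 / 1e-9, 10)   # ~14.95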

If you actually need unbounded precision, then you should be using
FatRats.  One thing I think we absolutely should have is a quick and easy
way to convert from Num to FatRat with minimal loss of precision.  I
believe right now the default Num -> FatRat conversion also uses 1e-6 as an
epsilon, which seems wrong to me.
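
Something along these lines would do the lossless conversion, I think --
just a sketch, not anything in core, and num-to-fatrat is a name I made up
here.  It leans on the fact that a double is an integer times a power of
two, so repeated doubling is exact:

sub num-to-fatrat(Num $n --> FatRat) {
    # Doubling a Num never loses bits, so scale until the value is
    # integral, then rebuild the exact fraction.  (No guard for NaN/Inf
    # in this sketch.)
    my $scaled = $n;
    my $denom  = 1;
    until $scaled == $scaled.Int {
        $scaled *= 2;
        $denom  *= 2;
    }
    FatRat.new($scaled.Int, $denom);
}

say num-to-fatrat(pi) == pi;   # should be True, no epsilon involved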

-- 
Solomon Foster: colo...@gmail.com
HarmonyWare, Inc: http://www.harmonyware.com


Re: (default) Real->Rat precision should match what compiler uses for literals

2018-03-04 Thread yary
The point of Rats is making Perl 6 more correct and less surprising in
common cases, such as:
$ perl6
> 1.1+2.2
3.3
> 1.1+2.2 == 3.3
True
> 1.1+2.2 != 3.3
False

vs. any language using binary floating-point arithmetic (here, the Perl 5
debugger):

  DB<1> p 1.1+2.2
3.3
  DB<2> p 1.1+2.2 == 3.3

  DB<3> p 1.1+2.2 != 3.3
1
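
The difference being that decimal literals like 1.1 are Rats in Perl 6, so
the addition above is exact rational arithmetic rather than binary floating
point:

> (1.1).WHAT
(Rat)
> (1.1 + 2.2).nude   # exact numerator and denominator
(33 10)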

In that spirit, I'd expect numeric comparison in general, and epsilon
specifically, to be set so these return True:

> pi == pi.Rat # Does Num to Rat conversion keep its precision?
False
> pi.Str.Num == pi # Does Num survive string round-trip? - Nothing to do with epsilon
False

On the other hand, the original poster Jim and I are both fiddling with the
language to see what it's doing with conversions; I'm not coming across
this in an application.

What's the greatest value of epsilon that guarantees $x == $x.Rat for all
Num $x - and does that epsilon make denominators that are "too big"? My
understanding of floating point suggests an epsilon of 1/2**(mantissa
bits - 1). For a 64-bit double (53 mantissa bits) that's 1/2**52; for a
32-bit float (24 mantissa bits) it's 1/2**23. Using a 64-bit Rakudo:

> pi.Rat(1/2**52) == pi  # Just enough precision to match this platform's Num
True
> pi.Rat(1/2**51) == pi  # Epsilon falling a bit short for precision
False

In terms of correctness, epsilon=1/2**(mantissa bits-1) looks like a
winner. Is that acceptable for size and speed?
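
As a rough check on the size half of that, we can look at what the
conversion actually produces; if I understand the continued-fraction
approach .Rat takes, an epsilon of 1/2**52 should still give a denominator
nowhere near the 64-bit limit that matters for Rat arithmetic:

> pi.Rat(1/2**52).nude   # numerator and denominator of the approximation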

And, digressing: how do we make "pi.Str.Num == pi" True?
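
For that digression, my understanding is that Num.Str only gives about 15
significant digits, while an IEEE double needs 17 to round-trip, so
formatting it explicitly ought to survive the trip (though that's a
workaround, not a fix to .Str itself):

> pi.fmt('%.17g').Num == pi   # should be True: 17 digits round-trip a double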