On Thu, Aug 7, 2014 at 9:02 AM, rjf <fate...@gmail.com> wrote:
>
>
> On Wednesday, August 6, 2014 8:11:21 PM UTC-7, Robert Bradshaw wrote:
>>
>>
>>
>> They are two representations of the same canonical object.
>
>
> The (computer algebra) use of the term, as in "simplified to a canonical
> form", means the representation is canonical.  It doesn't make much sense
> to claim that all these are canonical:   1+1, 2,  2*x^0,
> sin(x)^2+cos(x)^2 + exp(0).

The point was that there's a canonical domain in which to do the computation.
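
For instance, Sage's coercion system picks a canonical common parent
automatically (a quick illustration; outputs from a recent Sage):

sage: (2 + 1/2).parent()
Rational Field
sage: (1/2 + 0.5).parent()
Real Field with 53 bits of precision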

>> > And what structure is that?  Does Sage know about   Z_{nonprime} ?
>>
>> Of course, as illustrated.
>>
>> sage: Integers(13^1024)
>> Ring of integers modulo 4764...1
>
>
> How much does it know? Does it know that it is not a field, but that
> Integers(13) is a field?

sage: Integers(13).is_field()
True
sage: Integers(13^1024).is_field()
False

>> > I'm still confused.  Is the term "Real Field" in Sage the (or some)
>> > real field?
>> >
>> > If it is an approximation to a field, but not a field, why are you
>> > calling it a field?
>>
>> Because it's shorter to type and easier to find/discover than
>> ApproximateRealField or something like that.
>>
>> > If that doesn't get you in trouble, why doesn't it?  Does Real Field
>> > inherit from Field?  Does every non-zero element have an inverse?
>>
>> Of course it suffers from the same issues that (standard) floating
>> point numbers do in any language; user beware (and we at least
>> document that).
>
> And you know that everyone reads the documentation?
> No, it doesn't suffer from the same issues as in other languages, because
> those other languages probably don't refer to it as a field.

The issues of floating point errors and rounding are much larger than
the question of whether every element has an inverse. You seem very
fixated on the name.

We also have an object called the ring of integers, but really it's
the ring of integers that fits into the memory of your computer.
Should we not call it a Ring?
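
For what it's worth, Sage does present it as one:

sage: ZZ
Integer Ring
sage: ZZ in Rings()
True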

>> > Does Sage have other um, approximations, in its nomenclature?
>>
>> Sure. RealField(123)[x]. Power series rings. P-adics.
>
> These approximations are approximations by their nature.  If you are
> computing with a power series, the concept inherently includes an error term
> which you are aware of.  Real Field is (so far as I know) a concept that
> should have the properties of a field.  The version in Sage does not.
> It's like saying someone isn't pregnant.  Well, only a little pregnant.

They're no more approximate by nature than the real numbers.

The p-adic numbers form a field. For any choice of representation some
of them can be represented exactly on a computer, most can't. When
doing computations with p-adic numbers one typically chooses a
precision (e.g. how many digits, not unlike a choice of number of
bits) to use.

Power series (to make things concrete, say the power series in one
variable over the integers) form a ring. For any choice of
representation some of them can be represented exactly on a computer,
most can't. When doing computations with power series one typically
chooses a precision (e.g. how many terms, not unlike a choice of
number of bits) to use.

Real numbers form a field. For any choice of representation some of
them can be represented exactly on a computer, most can't. When doing
computations with real numbers...
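
To make that concrete, here's a short session (outputs from a recent
Sage; exact printing may vary by version):

sage: R.<t> = PowerSeriesRing(ZZ, default_prec=5)
sage: 1/(1 - t)
1 + t + t^2 + t^3 + t^4 + O(t^5)
sage: Qp(5, prec=10)(1/3)
2 + 3*5 + 5^2 + 3*5^3 + 5^4 + 3*5^5 + 5^6 + 3*5^7 + 5^8 + 3*5^9 + O(5^10)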

>> >> It is more conservative to convert operands to the domain with less
>> >> precision.
>> >
>> > Why do you say that?  You can always exactly convert a float number in
>> > radix b to an equal number of higher precision in radix b by appending
>> > zeros.  So it is more conserving (of values) to do so, rather than
>> > clipping off bits from the other.
>>
>> Clipping bits (or digits) is exactly how one is taught to deal with
>> significant figures in grade school, and follows the principle of
>> least surprise (though floating point numbers like to play surprises
>> on you no matter what). It's also what floating point arithmetic does
>> when the exponent is different.
>
>
> It is of course also taught in physics and chemistry labs, and I used this
> myself in the days when slide-rules were used and you could read only
> 3 or so significant figures.  That doesn't make it suitable for a computer
> system.  There are many things you learn along the way that are simplified
> versions of the more fully elaborated systems of higher math.
> What did you know about the branch cuts in the complex logarithm
> or  log(-1)  when you were first introduced to log?

Only being able to store 53 significant bits is completely analogous
to only being able to read 3 significant (decimal) figures. I think
the analogy is very suitable for a computer system. It can clearly be
made much more rigorous and precise.

Or are you seriously proposing that, when adding 3.14159 and 1e-100, it
makes more sense, by default, to pad the left-hand side with zeros
(whether in binary or decimal) and return 3.1415900000...0001 as the
result?
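
And clipping is what Sage's coercion does by default when precisions
are mixed; the sum below lands in the lower-precision parent:

sage: (RealField(100)(pi) + RealField(53)(1)).parent()
Real Field with 53 bits of precision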

>> >> We consider the rationals to have infinite precision; our
>> >> real "fields", as specified, have finite precision. This lower precision
>> >> is represented in the output, similar to how significant figures are
>> >> used in any other scientific endeavor.
>> >
>> > Thanks for distinguishing between "field" and field.  You don't seem
>> > to understand the concept of precision though.
>>
>> That's a bold claim. My Ph.D. thesis depended on understanding issues
>> of precision. I'll admit explaining it to a layman can be difficult.
>
> Is your thesis available online?  I would certainly look at it and see
> how you define precision.

Yes, it's online.

I define precision (e.g. computing a value to a given precision) as
the negated log of the absolute (respectively relative) error, but
most importantly it's something you can have more or less of, and lose
due to rounding, etc. Perhaps I could have used the term "accuracy"
instead. I use the actual errors themselves in my analysis.
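
As a sketch of that definition in Sage (using .exact_rational() to
recover the exact value stored by the 53-bit float):

sage: err = abs(RealField(53)(1.3).exact_rational() - 13/10)
sage: err
1/22517998136852480
sage: log(1/err, 2).n()  # roughly 53-54 bits of absolute precision
54.3219280948874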

Generally, however, precision is something attached to a real "field"
and specifies the number of bits used to represent its elements (and,
consequently, exactly what rounding errors are introduced when doing
arithmetic).
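
For example:

sage: RealField(100).precision()
100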

>> > You seem to think
>> > that a number of low precision has some inaccuracy or uncertainty.
>> > Which it doesn't.   0.5 is the same number as 0.500000.
>> > Unless you believe that 0.5  is  the interval [0.45000000000....01,
>> > 0.549999..........9]
>> > which you COULD believe  -- some people do believe this.
>> > But you probably don't want to ask them about scientific computing.
>>
>> No, I don't think that at all. Sage also has the concept of real
>> intervals distinct from real numbers.
>
>
> I suppose that the issue here is I don't know what you mean by "real
> number".  Sage has something in it called "real".  Mathematics uses that
> term too (e.g. Dedekind cut).  They appear to be different.
>>
>>
>> There's 0.5 the real number equal to one divided by two. There's also
>> 0.5 the IEEE floating point number, which is a representative for an
>> infinite number of real numbers in a small interval.
>
> Can you cite a source for that last statement?  While I suppose you
> can decide anything you wish about your computer, the (canonical?)
> explanation is that the IEEE ordinary floats correspond to a subset
> of the EXACT rational numbers equal to <some integer> X 2^<some integer>
> which is not an interval at all.
> (there are also inf and nan things which are not ordinary in that sense)
>
> So, citation?  (And I don't mean citing a physicist, or someone who
> learned his arithmetic in 4th grade and hasn't re-evaluated it since.
> A legitimate numerical analyst.)

How about the docs on error analysis for LAPACK, which are presumably
written by an expert:
https://software.intel.com/sites/products/documentation/hpc/mkl/mklman/GUID-51614DEE-9DF8-4D45-80F5-3B25CB7FF748.htm

There it says "you often need to solve a system Ax = b, where the data
(the elements of A and b) are not known exactly." So what are the IEEE
floats sitting in your computer when you're doing this computation?
They're representatives for the actual (either unknown or impossible
to represent) elements of your data. And, yes, they're also exact
rational numbers--once again they can be both--but the point is that
the input and output floating point numbers are viewed as
perturbations of the (actual) real field elements you care about.

Here they also talk about "loss of precision," which is similar in
spirit to the precision (of a value) discussed above.
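
In Sage you can see both views side by side; .exact_rational() returns
the exact rational value a floating point number stores:

sage: RR(0.5).exact_rational()
1/2
sage: RR(1.3).exact_rational()
5854679515581645/4503599627370496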

>> >> This is similar to 10 + (5 mod 13), where the right hand side has
>> >> "less precision" (in particular there's a canonical map one way, many
>> >> choices of lift the other).
>> >>
>> >> Also, when a user writes 1.3, they are more likely to mean 13/10 than
>> >> 5854679515581645 / 4503599627370496, but by expressing it in decimal
>> >> form they are asking for a floating point approximation. Note that
>> >> real literals are handled specially to allow immediate conversion to a
>> >> higher-precision parent.
>> >
>> > What do you know when a user writes 1.3, really?
>>
>> Quite a bit. They probably didn't mean pi. If they really cared, they
>> could have been more specific. At least we recognize this ambiguity
>> but we can't let it paralyze us.
>
>
> I think there is, in the community that consumes "scientific computing"
> since 1960 or so, a set of expectations about "1.3".    You can use this,
> or try to alter the expectations for your system.  For example, if you
> displayed it as 13/10, that would be a change.  Or if you displayed it
> as [1.25,1.35].   But then you are using a notation 1.25, and what
> does that mean,  [[1.245,1.255], ....  ]  etc.
>>
>>
>> > You want the user
>> > to believe that Sage uses decimal arithmetic?  Seriously?  How far
>> > are you going to try to carry that illusion?
>>
>> If they don't immediately specify a new domain, we'll treat it as
>> having 53 bits. It's syntactic sugar.
>
>
> So it sounds like you actually read the input as 13/10, because only
> then can you approximate it to higher precision than 53 bits or
> whatever.  Why not just admit this instead of talking about 1.3.

In this case the user gives us a decimal literal. Yes, this literal is
equal to 13/10. We defer interpreting this as a 53-bit binary floating
point number long enough for the user to tell us to interpret it
differently. This prevents surprises like

sage: RealField(100)(float(1.3))
1.3000000000000000444089209850

or, more subtly

sage: sqrt(RealField(100)(float(1.3)))
1.1401754250991379986106491649

instead of

sage: sqrt(RealField(100)(1.3))
1.1401754250991379791360490256

When you write 1.3, do you really think 5854679515581645 /
4503599627370496, or is your head really thinking "the closest thing
to 13/10 that I can get given my choice of floating point
representation?" I bet it's the latter, which is why we do what we do.

- Robert
