On Wednesday, August 6, 2014 8:11:21 PM UTC-7, Robert Bradshaw wrote:
>
>
>
> These are two representations of the same canonical object. 
>

The (computer algebra) use of the term, as in "simplified to a canonical
form", means the representation is canonical.  It doesn't make much sense
to claim that all of these are canonical:  1+1,  2,  2*x^0,
sin(x)^2+cos(x)^2 + exp(0).
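
(They are of course all equal, and Sage can even confirm as much; a check
I'd expect to work, assuming bool() on a symbolic equation attempts a proof:

sage: bool(sin(x)^2 + cos(x)^2 + exp(0) == 2)
True

But equal is not the same as canonical.)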

>
>
> >> 
> > And what structure is that?  Does Sage know about   Z_{nonprime} ? 
>
> Of course, as illustrated. 
>
> sage: Integers(13^1024) 
> Ring of integers modulo 4764...1 
>

How much does it know?  Does it know that Integers(13^1024) is not a
field, but that Integers(13) is?
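Presumably something like this, if the is_field() method does what its
name suggests:

sage: Integers(13^1024).is_field()
False
sage: Integers(13).is_field()
True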


> > I'm still confused.   Is the term "Real Field" in Sage the (or some)
> > real field?
> >
> > If it is an approximation to a field, but not a field, why are you
> > calling it a field?
>
> Because it's shorter to type and easier to find/discover than 
> ApproximateRealField or something like that. 
>
> > If that doesn't get you in trouble, why doesn't it?  Does Real Field
> > inherit from Field?  Does every non-zero element have an inverse?
>
> Of course it suffers from the same issues that (standard) floating 
> point numbers do in any language, user beware (and we at least 
> document that). 
>

And you know that everyone reads the documentation?
No, it doesn't suffer from the same issues as in other languages, because
those other languages probably don't refer to it as a field.
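
For instance, addition in the default 53-bit Real Field is not even
associative, and no field survives that (assuming default 53-bit literals):

sage: (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3)
False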
 

>
> > Does Sage have other um, approximations, in its nomenclature? 
>
> Sure. RealField(123)[x]. Power series rings. P-adics. 
>

These approximations are approximations by their nature.  If you are
computing with a power series, the concept inherently includes an error
term, which you are aware of.  Real Field is (so far as I know) a concept
that should have the properties of a field.  The version in Sage does not.
It's like saying someone isn't pregnant: well, only a little pregnant.
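
A power series ring, for example, carries its error term right in the
printed result (assuming the usual default_prec behavior):

sage: R.<x> = PowerSeriesRing(QQ, default_prec=5)
sage: 1/(1 - x)
1 + x + x^2 + x^3 + x^4 + O(x^5)

The O(x^5) announces the truncation.  A Sage "real" prints no such marker.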

>
> >
>
.... snip.... 

> >> It is more conservative to convert operands to the domain with less
> >> precision.
> >
> > Why do you say that?  You can always exactly convert a float number in
> > radix b to an equal number of higher precision in radix b by appending
> > zeros.  So it is more conserving (of values) to do so, rather than
> > clipping off bits from the other.
>
> Clipping bits (or digits) is exactly how one is taught to deal with 
> significant figures in grade school, and follows the principle of 
> least surprise (though floating point numbers like to play surprises 
> on you no matter what). It's also what floating point arithmetic does 
> when the exponent is different. 
>

It is of course also taught in physics and chemistry labs, and I used this
myself in the days when slide-rules were used and you could read only
3 or so significant figures.  That doesn't make it suitable for a computer
system.  There are many things you learn along the way that are simplified
versions of the more fully elaborated systems of higher math.
What did you know about the branch cuts in the complex logarithm
or  log(-1)  when you were first introduced to log?
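
For the record, this is what mixing precisions does now, if I understand
the coercion system: the result drops to the lower precision.

sage: a = RealField(100)(1/3)
sage: b = RealField(53)(1/3)
sage: parent(a + b)
Real Field with 53 bits of precision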
 

>
> >> We consider the rationals to have infinite precision; our real
> >> "fields" have a specified finite precision. This lower precision
> >> is represented in the output, similar to how significant figures are
> >> used in any other scientific endeavor.
> > 
> > Thanks for distinguishing between "field" and field.  You don't seem 
> > to understand the concept of precision though. 
>
> That's a bold claim. My Ph.D. thesis depended on understanding issues 
> of precision. I'll admit explaining it to a layman can be difficult. 
>

Is your thesis available online?  I would certainly look at it and see
how you define precision.
 

>
> > You seem to think 
> > that a number of low precision has some inaccuracy or uncertainty. 
> > Which it doesn't.   0.5 is the same number as 0.500000. 
> > Unless you believe that 0.5  is  the interval [0.45000000000....01, 
> > 0.549999..........9] 
> > which you COULD believe  -- some people do believe this. 
> > But you probably don't want to ask them about scientific computing. 
>
> No, I don't think that at all. Sage also has the concept of real 
> intervals distinct from real numbers. 
>

I suppose the issue here is that I don't know what you mean by "real
number".  Sage has something in it called "real".  Mathematics uses that
term too (e.g., a Dedekind cut).  They appear to be different. 

>
> There's 0.5 the real number equal to one divided by two. There's also 
> 0.5 the IEEE floating point number, which is a representative for an 
> infinite number of real numbers in a small interval. 
>

Can you cite a source for that last statement?  While I suppose you
can decide anything you wish about your computer, the (canonical?)
explanation is that the ordinary IEEE floats correspond to a subset of
the EXACT rational numbers, each of the form <some integer> * 2^<some integer>,
which is not an interval at all.
(There are also inf and nan things, which are not ordinary in that sense.)

So, citation?  (And I don't mean citing a physicist, or someone who
learned his arithmetic in 4th grade and hasn't re-evaluated it since.
A legitimate numerical analyst.) 
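
Indeed, the exact_rational() method quoted further down makes the point:
each float denotes a single dyadic rational, not an interval.

sage: (0.5).exact_rational()
1/2
sage: (0.1).exact_rational()
3602879701896397/36028797018963968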

>
> >> This is similar to 10 + (5 mod 13), where the right hand side has
> >> "less precision" (in particular, there's a canonical map one way, many
> >> choices of lift the other).
> >> 
> >> Also, when a user writes 1.3, they are more likely to mean 13/10 than 
> >> 5854679515581645 / 4503599627370496, but by expressing it in decimal 
> >> form they are asking for a floating point approximation. Note that 
> >> real literals are handled specially to allow immediate conversion to a 
> >> higher-precision parent. 
> > 
> > What do you know when a user writes 1.3, really? 
>
> Quite a bit. They probably didn't mean pi. If they really cared, they 
> could have been more specific. At least we recognize this ambiguity 
> but we can't let it paralyze us. 
>

I think there is, in the community that has consumed "scientific computing"
since 1960 or so, a set of expectations about "1.3".  You can use this,
or try to alter the expectations for your system.  For example, if you
displayed it as 13/10, that would be a change.  Or if you displayed it
as [1.25, 1.35].  But then you are using the notation 1.25, and what does
that mean: [[1.245, 1.255], ...], etc.?
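
(Sage does have such a notation, for what it's worth, using a trailing
question mark rather than brackets; assuming I read the interval printing
correctly:

sage: RIF(1.3)
1.3000000000000000?
)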

>
> > You want the user 
> > to believe that Sage uses decimal arithmetic?  Seriously?  How far 
> > are you going to try to carry that illusion? 
>
> If they don't immediately specify a new domain, we'll treat it as 
> having 53 bits. It's syntactic sugar. 
>

So it sounds like you actually read the input as 13/10, because only then
can you approximate it to higher precision than 53 bits or whatever.
Why not just admit this instead of talking about 1.3?
 

>
> > You imagine a user who understands rings and fields, and knows that 
> > Real Field is not a field, but knows so little about computers that he 
> > thinks 1.3 is 13/10 exactly?  (By the way, I have no problem if 
> > 1.3 actually produces 13/10, and to get a float, you have to convert 
> > it explicitly, and the conversion might even come up with an interval 
> > or an error bound or something that leads to "reliable" computing, 
> > rather than some guessy-how-many-bits stab in the dark thing that 
> > prints as 1.3.) 
>
> Again, it's syntactic sugar. 
>

For something to be syntactic sugar, you have to specify what it means
underneath.  It seems that 1.3, without further context, is syntactic
sugar for 13/10:

sage: 1.3        is syntactic sugar for
sage: RealField(53)(13/10)

Or something like that.
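
One can in fact ask the preparser directly; the literal is kept as a
string, which is what allows later conversion at higher precision
(assuming current preparser behavior):

sage: preparse('1.3')
"RealNumber('1.3')"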


> >> sage: QQ(1.3) 
> >> 13/10 
> >> sage: (1.3).exact_rational() 
> >> 5854679515581645/4503599627370496 
> >> sage: a = RealField(200)(1.3); a 
> >> 1.3000000000000000000000000000000000000000000000000000000000 
> >> sage: a.exact_rational() 
> >> 
> 522254864384171839551137680010877845819715972979407671472947/401734511064747568885490523085290650630550748445698208825344
>  
>
> > 
> > I assume it is possible to calculate all kinds of things if you
> > carefully specify them, in Sage.  After all, it has all those programs
> > to call, including sympy.  The issue we have been discussing is really
> > what it does "automatically".
>
> I don't think any answer is going to be right for everyone, given the 
> diversity of users we have. I personally think we've found a nice 
> happy medium, but you're free to disagree. 
>

Yes.   
... 

> >> sage: a = 0.1000000000000000000000000000000000 
> >> sage: a.precision() 
> >> 113 
> > 
> > 
> > So 0.1 - 0.10000000000000000000000000000.... 
> > is 0.0?????????????????   where ? = undetermined? 
>
> sage: .1- 0.10000000000000000000000000000 
> 0.000000000000000 
> sage: parent(_) 
> Real Field with 53 bits of precision 
>
> > and anyone who writes x+0.1  is relegating that sum to 1 digit 
> "precision"? 
>
> As I mentioned previously, we default to the (somewhat arbitrary) 
> minimum of 53 bits of precision. 
>

OK, so you are saying that .1 has 53 bits of precision even though it
appears to have, oh, about 3 bits.  Are you familiar with the problems
such a design decision causes in Mathematica? 
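
That is, the literal's precision comes from the count of digits typed,
with a floor of 53 bits (re-running the examples above):

sage: (0.1).precision()
53
sage: (0.1000000000000000000000000000000000).precision()
113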

>
> >> > Inventing fast methods is fun, although (my opinion)  multiplying 
> >> > integers 
> >> > of astronomical size is hardly 
> >> > mainstream scientific computing. 
> >> > 
> >> > Not to say that someone might claim that 
> >> > this problem occurs frequently in many computations in pure and 
> applied 
> >> > mathematics... 
> >> 
> >> I've personally "applied" multiplying astronomically sized integers 
> >> before (though the result itself is squarely in the domain of pure math): 
> >> http://www.nsf.gov/news/news_summ.jsp?cntn_id=115646/ 
> > 
> >  Assume there is an application that involves multiplication of 
> polynomials. 
> > You can multiply polynomials by encoding them as large integers, 
> > multiplying, and decoding.  Sometimes called Kronecker's Trick. 
> > 
> > So there are lots of applications.  Are they stupid tricks? Probably. 
>
> You claim "this idea has practical implications for efficient 
> programs" in http://www.cs.berkeley.edu/~fateman/papers/polysbyGMP.pdf 
>
> Now you claim it's stupid. Maybe it's both. 
>
Yes, it is both.  A stupid trick because (assuming you are grabbing someone
else's library code for huge integer multiplication) you might as well grab
someone else's library code for fast polynomial multiplication, and that
would generally be faster.
The practical implication is this: if your system constrains you to a naive
polynomial multiplication program, but it also has available a super-fast
integer multiplication program, as well as (generally very important) fast
encoding and decoding, then you have a choice of how to do polynomial
multiplication, notably for high-degree polynomials with small coefficients,
especially over finite fields.  (This is not usually a choice a CAS designer
has to make; it is easier to program a non-naive polynomial multiplication
routine.)  A sketch of the encoding appears below.
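
For anyone who hasn't seen the trick, here is a minimal sketch in plain
Python (my own illustration, not any system's implementation; it assumes
nonnegative integer coefficients, and picks the slot width k so that no
convolution sum can spill into the neighboring slot):

def kronecker_multiply(f, g):
    # f, g: coefficient lists, lowest degree first, nonnegative integers.
    # Each coefficient of f*g is a sum of at most min(len(f), len(g))
    # terms, each at most max(f)*max(g); size the slots accordingly.
    bound = max(f) * max(g) * min(len(f), len(g))
    k = bound.bit_length() + 1                       # bits per slot
    F = sum(c << (i * k) for i, c in enumerate(f))   # encode f
    G = sum(c << (i * k) for i, c in enumerate(g))   # encode g
    H = F * G                      # the single big-integer multiplication
    mask = (1 << k) - 1
    return [(H >> (i * k)) & mask                    # decode the product
            for i in range(len(f) + len(g) - 1)]

So kronecker_multiply([1, 2], [3, 4]) returns [3, 10, 8], that is,
(1 + 2x)(3 + 4x) = 3 + 10x + 8x^2, with all the real work landing on the
one integer multiplication.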

RJF