Re: 1-0.95

2014-07-03 Thread Marko Rauhamaa
Steven D'Aprano steve+comp.lang.pyt...@pearwood.info:

 On Wed, 02 Jul 2014 23:00:15 +0300, Marko Rauhamaa wrote:
 Steven D'Aprano steve+comp.lang.pyt...@pearwood.info:
 Rational(2).sqrt() * Rational(2).sqrt() == Rational(2)
False
 Square root of 2 is not a rational number.
 Nobody said it was. 

 Your comment can be read as implying it. You stated:

 [quote]
 Even arbitrary-precision RATIONALS [emphasis added] would 
 suffer from the same problem 
 [end quote]

 and then showed an invented example where you squared a NON-RATIONAL.

While √2 is irrational, the hypothetical Rational(2).sqrt() probably
would be another arbitrary-precision Rational number and thus,
necessarily an approximation.

That's completely analogous to Decimal(1) / Decimal(3) being an
approximation.

And the point: when dealing with real numbers on a computer, there's no
way to avoid approximations. If an aspiring programmer is dismayed at
the imprecision of 0.1, that probably wouldn't be the right moment to
talk about Decimal().

 By the way, there's no need to use an invented example. Here is an
 actual example:

 py> import math
 py> from fractions import Fraction
 py> math.sqrt(Fraction(2))**2
 2.0000000000000004

Sure, although you were invoking arbitrary-precision rational numbers,
which Fraction() is not.

 I'm sorry Marko, have you not been paying attention? Have you ever
 done any numeric programming?

Your style is consistent and impeccable.

 Floating-point is *hard*, not perfect.

It can be both. The point is, regular floating point numbers will likely be
the optimal choice for your numeric calculation needs. They are compact,
fast and readily supported by hardware and numeric software. Switching to
Decimal might give you a false sense of security.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


OT: speeds (physical, not computing) [was Re: 1-0.95]

2014-07-03 Thread Steven D'Aprano
On Wed, 02 Jul 2014 21:06:52 -0700, Rustom Mody wrote:

 On Thursday, July 3, 2014 7:49:30 AM UTC+5:30, Steven D'Aprano wrote:
 On Wed, 02 Jul 2014 23:00:15 +0300, Marko Rauhamaa wrote:
 
  On the other hand, floating-point numbers are perfect whenever you
  deal with science and measurement.
 
 /head-desk
 
 <wink>
 
 Just as there are even some esteemed members of this list who think that
 c - a is a meaningful operation
   where
 c is speed of light
 a is speed of an automobile
 
 
 </wink>


You seem to be having some sort of nervous tic.

Subtracting two numbers a and c *is* a meaningful operation, even if they 
are speeds, and even in special relativity.

Consider an observer O in an inertial frame of reference. A car x is 
driving towards the observer at v metres per second, while a photon p 
travels away from the observer at c m/s:


x --> v        O        p --> c


According to the observer, the difference in speeds between x and p is 
just (c - v), the same as in classical mechanics. The technical term for 
it is "closing speed" (or "opening speed", as the case may be) as seen by O.

Note that this is *not* the difference in speeds as observed by x, but I 
never said it was.


You don't have to believe me. You can believe the Physics FAQs, 
maintained by John Baez:

http://math.ucr.edu/home/baez/physics/Relativity/SR/velocity.html


The important part is the paragraph titled "How can that be right?" and 
ending "In this sense velocities add according to ordinary vector 
addition."

As I wanted to confirm my understanding of the situation:

https://groups.google.com/forum/#!topic/sci.physics/BqT0p_7tHYg




-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-03 Thread Steven D'Aprano
On Thu, 03 Jul 2014 09:51:35 +0300, Marko Rauhamaa wrote:

 Steven D'Aprano steve+comp.lang.pyt...@pearwood.info:
[...]
 By the way, there's no need to use an invented example. Here is an
 actual example:

 py> import math
 py> from fractions import Fraction
 py> math.sqrt(Fraction(2))**2
 2.0000000000000004
 
 Sure, although you were invoking arbitrary-precision rational numbers,
 which Fraction() is not.

In what way is Fraction not an arbitrary precision rational number? It's 
a rational number, and it can store numbers to any arbitrary precision 
you like (up to the limit of RAM) in any base you like. 

How about the smallest non-zero number representable in base 17 to 13004 
significant figures? I can represent that as a Fraction with no 
difficulty at all:

py> x = 1/(Fraction(17)**13004)
py> str(x)[:20] + "..." + str(x)[-5:]
'1/572497511269282241...23521'


And it is calculated *exactly*.


Now, I admit that I have misgivings about using the term "precision" when 
it comes to discussing rational numbers, since the idea of significant 
figures doesn't really work very well with fraction notation. It's not 
clear to me how many significant figures x above should be described as 
having. The number of digits in its decimal expansion perhaps? But you 
started using the term, not me, so I'm just following your lead.

If you don't think Fraction counts as arbitrary precision rational 
number, what do you think does?



[...]
 Floating-point is *hard*, not perfect.
 
 It can be both. 

Perfect requires that it be flawless. It certainly is not flawless. As 
I have repeatedly stated, there are mathematical properties which 
floating point numbers do not obey. Given that they are supposed to model 
real numbers, the fact that they do not obey the mathematical laws 
applicable to real numbers is a pretty big flaw.


 The point is, regular floating point numbers will likely be
 the optimal choice for your numeric calculation needs. They are compact,
 fast and readily supported by hardware and numeric software. Switching
 to Decimal might give you a false sense of security.

Ah, now this is a much more reasonable thing to say. Why didn't you say 
so in the first place? :-)




-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-03 Thread Marko Rauhamaa
Steven D'Aprano st...@pearwood.info:

 If you don't think Fraction counts as arbitrary precision rational 
 number, what do you think does?

I was assuming you were referring to an idealized datatype.

Fraction() doesn't have a square root method. Let's make one:

   def newton(x, n):
       guess = Fraction(1)
       for i in range(n):
           guess = (guess + x / guess) / 2
       return guess

>>> newton(Fraction(2), 3)
Fraction(577, 408)
>>> newton(Fraction(2), 8)
Fraction(489266466344238819545868088398566945584921822586685371455477\
00898547222910968507268117381704646657,
345963636159190997653185453890148615173898600719883426481871047662465\
65694525469768325292176831232)
>>> newton(Fraction(2), 18)

   ... keeps going and going and going ...

Point being, if you have trouble with floats, you will likely have it
with Decimal(), Fraction(), super-duper Rational(), Algebraic(),
Expressible(), you name it. You'll just have to develop an understanding
of numeric computation.

BTW, the same thing applies to integers, also. While Python has
abstracted out many of the 2's-complement arithmetic details, the bits
shine through.
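For instance (my illustration, not Marko's): Python integers are exact at any
size, but the moment one passes through a 64-bit float, only 53 significant
bits survive.

```python
# Python integers are exact at any size, but a round trip through a
# 64-bit C double keeps only 53 significant bits.
n = 10**16 + 1            # needs 54 bits to represent exactly
assert n - 10**16 == 1    # integer arithmetic: exact
m = int(float(n))         # squeeze n through a double and back
print(m == n)             # False: the trailing +1 was rounded away
```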

 The point is, regular floating point numbers will likely be the optimal
 choice for your numeric calculation needs. They are compact, fast and
 readily supported by hardware and numeric software. Switching to
 Decimal might give you a false sense of security.

 Ah, now this is a much more reasonable thing to say. Why didn't you
 say so in the first place? :-)

That's all I've been saying all along.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-02 Thread Steven D'Aprano
On Tue, 01 Jul 2014 14:17:14 -0700, Pedro Izecksohn wrote:

 pedro@microboard:~$ /usr/bin/python3
 Python 3.3.2+ (default, Feb 28 2014, 00:52:16) [GCC 4.8.1] on linux
 Type "help", "copyright", "credits" or "license" for more information.
 >>> 1-0.95
 0.050000000000000044
 
 
   How to get 0.05 as result?

Oh, this is a fantastic example of the trouble with floating point! Thank 
you for finding it!


py> 0.05
0.05
py> 1 - 0.95
0.050000000000000044
py> 1 - 0.9 - 0.05
0.049999999999999975
py> 1 - 0.5 - 0.45
0.04999999999999999


*This is not a Python problem*

This is a problem with the underlying C double floating point format. 
Actually, it is not even a problem with the C format, since this problem 
applies to ANY floating point format, consequently this sort of thing 
plagues *every* programming language (unless they use arbitrary-precision 
rationals, but they have their own problems).

In this *specific* case, you can get better (but slower) results by using 
the Decimal format:

py> from decimal import Decimal
py> 1 - Decimal("0.95")
Decimal('0.05')


This works because the Decimal type stores numbers in base 10, like you 
learned about in school, and so numbers that are exact in base 10 are 
(usually) exact in Decimal. However, the built-in float stores numbers in 
base 2, for speed and accuracy. Unfortunately many numbers which are 
exact in base 10 are not exact in base 2. Let's look at a simple number 
like 0.1 (in decimal), and try to calculate it in base 2:

0.1 in binary (0.1b) equals 1/2, which is too big.

0.01b equals 1/4, which is too big.

0.001b equals 1/8, which is too big.

0.0001b equals 1/16, which is too small, so the answer lies somewhere 
between 0.0001b and 0.001b.

0.00011b equals 1/16 + 1/32 = 3/32, which is too small.

0.000111b equals 1/16 + 1/32 + 1/64 = 7/64, which is too big, so the 
answer lies somewhere between 0.000111b and 0.00011b.

If you keep going, you will eventually get that the decimal 0.1 written 
in base 2 is 0.000110011001100110011001100110011... where the "0011" 
repeats forever. (Just like in decimal, where 1/3 = 0.333... repeating 
forever.) Since floats don't use an infinite amount of memory, this 
infinite sequence has to be rounded off somewhere. And so it is that 0.1 
stored as a float is a *tiny* bit larger than the true decimal value 0.1:


py> Decimal.from_float(0.1)
Decimal('0.1000000000000000055511151231257827021181583404541015625')


All the troubles with floating point numbers start with this harsh 
reality, numbers have to be rounded off somewhere lest they use infinite 
memory, and that rounding introduces errors into the calculation. 
Sometimes those errors cancel, and sometimes they reinforce.

To understand what is going on in more detail, you can start with these 
links:

https://docs.python.org/2/faq/design.html#why-am-i-getting-strange-results-with-simple-arithmetic-operations

http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

http://effbot.org/pyfaq/why-are-floating-point-calculations-so-inaccurate.htm

http://blog.codinghorror.com/why-do-computers-suck-at-math/

https://randomascii.wordpress.com/category/floating-point/

https://www.gnu.org/software/gawk/manual/html_node/Floating-Point-Issues.html


Good luck!


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-02 Thread Marko Rauhamaa
Steven D'Aprano steve+comp.lang.pyt...@pearwood.info:

 This is a problem with the underlying C double floating point format.
 Actually, it is not even a problem with the C format, since this
 problem applies to ANY floating point format, consequently this sort
 of thing plagues *every* programming language (unless they use
 arbitrary-precision rationals, but they have their own problems).

Actually, it is not a problem at all. Floating-point numbers are a
wonderful thing.

 This works because the Decimal type stores numbers in base 10, like you 
 learned about in school, and so numbers that are exact in base 10 are 
 (usually) exact in Decimal.

Exactly, the problem is in our base 10 mind. Note, however:

>>> Decimal(1) / Decimal(3) * Decimal(3)
Decimal('0.9999999999999999999999999999')

Even arbitrary-precision rationals would suffer from the same problem:

>>> Rational(2).sqrt() * Rational(2).sqrt() == Rational(2)
False

Yes, I'm making it up, but it's still true.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-02 Thread Skip Montanaro
On Wed, Jul 2, 2014 at 11:59 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 Yes, I'm making it up, but it's still true.

I don't think there's any reason to be hypothetical:

In [149]: d
Out[149]: Decimal('2')

In [150]: d.sqrt() * d.sqrt() == d
Out[150]: False

:-)

Skip
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-02 Thread Steven D'Aprano
On Wed, 02 Jul 2014 19:59:25 +0300, Marko Rauhamaa wrote:

 Steven D'Aprano steve+comp.lang.pyt...@pearwood.info:
 
 This is a problem with the underlying C double floating point format.
 Actually, it is not even a problem with the C format, since this
 problem applies to ANY floating point format, consequently this sort of
 thing plagues *every* programming language (unless they use
 arbitrary-precision rationals, but they have their own problems).
 
 Actually, it is not a problem at all. Floating-point numbers are a
 wonderful thing.

No, *numbers* in the abstract mathematical sense are a wonderful thing. 
Concrete floating point numbers are a *useful approximation* to 
mathematical numbers. But they're messy, inexact, and fail to have the 
properties we expect real numbers to have, e.g. any of these can fail 
with IEEE-754 floating point numbers:

1/(1/x) == x

x*(y+z) == x*y + x*z

x + y - z == x - z + y

x + y == x implies y == 0

You think maths is hard? That's *nothing* compared to reasoning about 
floating point numbers, where you cannot even expect x+1 to be different 
from x.
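A concrete instance of that last claim (my example): near 1e16, adjacent 
doubles are spaced 2.0 apart, so adding 1 is a no-op.

```python
x = 1e16           # at this magnitude adjacent doubles are 2.0 apart
print(x + 1 == x)  # True: adding 1 rounds straight back to x
print(x + 2 == x)  # False: 2 is exactly one spacing
```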

In the Bad Old Days before IEEE-754, things were even worse! I've heard 
of CPUs where it was impossible to guard against DivideByZero errors:

if x != 0:     # succeeds
    print 1/x  # divide by zero

because the test for inequality tested more digits than the divider used. 
Ouch.


 This works because the Decimal type stores numbers in base 10, like you
 learned about in school, and so numbers that are exact in base 10 are
 (usually) exact in Decimal.
 
 Exactly, the problem is in our base 10 mind.

No no no no! The problem is that *no matter what base you pick* some 
exact rational numbers cannot be represented in a finite number of digits.

(Not to mention the irrationals.)



 Note, however:
 
 Decimal(1) / Decimal(3) * Decimal(3)
 Decimal('0.9999999999999999999999999999')

Yes! Because Decimal has a finite (albeit configurable) precision, while 
1/3 requires infinite number of decimal places. Consequently, 
1/Decimal(3) is a little bit smaller than 1/3, and multiplying by 3 gives 
you something a little bit smaller than 1.

Ironically, in base 2, the errors in that calculation cancel out:

py> 1/3*3 == 1
True


and of course in base 3 the calculation would be exact.


 Even arbitrary-precision rationals would suffer from the same problem:

Not so.

py> from fractions import Fraction
py> Fraction(1, 3)*3 == 1
True

Arbitrary precision rationals like Fraction are capable of representing 
*every rational number* exactly (provided you have enough memory).
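For example (my demonstration of the same point): ten exact tenths make 
exactly one, whereas ten float tenths do not.

```python
from fractions import Fraction

# Fraction(1, 10) is the exact rational 1/10, so the sum is exact.
print(sum(Fraction(1, 10) for _ in range(10)) == 1)  # True
# The float 0.1 is slightly too large, and the errors accumulate.
print(sum(0.1 for _ in range(10)) == 1.0)            # False
```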


 Rational(2).sqrt() * Rational(2).sqrt() == Rational(2)
False

Square root of 2 is not a rational number.



-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-02 Thread Marko Rauhamaa
Steven D'Aprano steve+comp.lang.pyt...@pearwood.info:

 Rational(2).sqrt() * Rational(2).sqrt() == Rational(2)
False

 Square root of 2 is not a rational number.

Nobody said it was. It's just that even arbitrary-precision rational
numbers wouldn't free you from the issues of floating-point numbers. The
Decimal number class won't do it, either, of course.

On the other hand, floating-point numbers are perfect whenever you deal
with science and measurement. And when you deal with business (= money),
integers are the obvious choice.

I would venture to say that the real applications for Decimal are very
rare. In practice, I'm afraid, people with rather a weak understanding
of numbers and computation might gravitate toward Decimal unnecessarily.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-02 Thread Chris Angelico
On Thu, Jul 3, 2014 at 6:00 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 Steven D'Aprano steve+comp.lang.pyt...@pearwood.info:

 Rational(2).sqrt() * Rational(2).sqrt() == Rational(2)
False

 Square root of 2 is not a rational number.

 Nobody said it was. It's just that even arbitrary-precision rational
 numbers wouldn't free you from the issues of floating-point numbers. The
 Decimal number class won't do it, either, of course.

They do free you from the issues of floating point. In exchange, they
give you the problems of rationals. (Most notably, addition becomes
very slow. Remember grade school and learning to add/subtract vulgar
fractions?)
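A quick sketch of why rational addition gets slow (my example, not from the
thread): the running denominator grows roughly like the lcm of every
denominator seen so far, so each addition does gcd work on ever-larger
bignums.

```python
from fractions import Fraction

# Sum the first 30 terms of the harmonic series exactly.
total = Fraction(0)
for n in range(1, 31):
    total += Fraction(1, n)  # each += normalises growing bignums
print(total.denominator)     # already well over a million after 30 terms
```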

 On the other hand, floating-point numbers are perfect whenever you deal
 with science and measurement. And when you deal with business (= money),
 integers are the obvious choice.

Why are floats perfect for science, but not for other situations?

Integers are great if you can guarantee you can fit within them -
which, when you're talking about money, presumably means you're
working in a fixed-point system (eg with the popular
something-and-cents notation (including the GBP with pounds and
pence), you store currency in cents, which is fixed-point two places
after the decimal). What about when you have to work with fractions of
a cent? Ah! I know! Let's have two integers - one for the number of
dollars/euro/pounds/etc, and then another one that says how much out
of 2**32 of another one we have!

book = (29, 2147483648) # $29.50
airfare = (2468, 2920577761) # $2468.68
interest = (1, 616212701) # $1.1434732

See, integers are the obvious choice for money!

 I would venture to say that the real applications for Decimal are very
 rare. In practice, I'm afraid, people with rather a weak understanding
 of numbers and computation might gravitate toward Decimal unnecessarily.

Your second part? Possibly. There was some discussion about an import
hook that would turn all unmarked non-integer literals into Decimals
rather than floats, and it was decided that it wouldn't be worth it.
But there definitely are real uses for Decimal, and quite a lot of
them - just as there were very solid reasons for REXX's numeric
implementation having been fairly similar. (Although - unsurprisingly
given that Python has had another couple of decades of development -
not as sophisticated. For instance, REXX doesn't have numeric
contexts, so all changes to precision etc are global.)

Numbers can't be represented in a computer in any way that doesn't
potentially demand infinite storage. There are two basic techniques
for storing numbers: ratios, possibly where the denominator is
selected from a very restricted set (IEEE floating point is (usually)
this - the denominator must be a power of two), and algebraic symbols,
where you represent sqrt(2) as √2 and evaluate to an actual
number only at the very end, if ever (which gets around the problems
of intermediate rounding, and allows perfect cancelling out -
pow(√2, 8) == 16). No matter what system you use, you're
eventually going to get down to a choice: retain all the precision you
possibly can, and maybe use infinite or near-infinite storage; or
throw away the bits that aren't going to affect the calculation
significantly, and keep the object size down to something reasonable.
I do seem to recall, back in maths class, being allowed to use either
22/7 or 3.14 for π, because the difference between either of those and
the true value was not significant :) It's the same in computing,
except that it's common to go as far as 3.141592653589793 (a number I
memorized out of the GW-BASIC manual, back when I first started
programming with floating point). Short of actually running on a
Turing machine, your program is always going to be bound by these
restrictions.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-02 Thread Steven D'Aprano
On Wed, 02 Jul 2014 23:00:15 +0300, Marko Rauhamaa wrote:

 Steven D'Aprano steve+comp.lang.pyt...@pearwood.info:
 
 Rational(2).sqrt() * Rational(2).sqrt() == Rational(2)
False

 Square root of 2 is not a rational number.
 
 Nobody said it was. 

Your comment can be read as implying it. You stated:

[quote]
Even arbitrary-precision RATIONALS [emphasis added] would 
suffer from the same problem 
[end quote]

and then showed an invented example where you squared a NON-RATIONAL. By 
the way, there's no need to use an invented example. Here is an actual 
example:

py> import math
py> from fractions import Fraction
py> math.sqrt(Fraction(2))**2
2.0000000000000004


 It's just that even arbitrary-precision rational
 numbers wouldn't free you from the issues of floating-point numbers.

Hmmm, well, I wouldn't put it that way. I would put it that a rational 
number class has problems of its own. A correct, non-buggy rational 
number class does not suffer from most of the problems of floating point 
numbers. For example, apart from x = 0, the following:

1/(1/x) == x

is always true in mathematics, and always true with a rational class, but 
not always true with floating point:

py> x = 49.0
py> 1/(1/x) == x
False

py> x = Fraction(49)
py> 1/(1/x) == x
True


The specific problem you show, with sqrt, comes about because it takes 
you outside of the rationals and into the floating point numbers.


 The Decimal number class won't do it, either, of course.

Decimals are floating point numbers, so they suffer from the same kinds of 
failures as other floating point numbers.


 On the other hand, floating-point numbers are perfect whenever you deal
 with science and measurement. 

/head-desk

I'm sorry Marko, have you not been paying attention? Have you ever done 
any numeric programming? Floating-point is *hard*, not perfect. Even 
*trivially simple* arithmetic problems can burn you, badly. Have you not 
heard of catastrophic cancellation, or the Table Maker's Dilemma, or 
ill-conditioned equations? If scientists don't have to worry about these 
things (and they actually do), it's because the people writing the 
scientific libraries have worried about them.
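One tiny illustration of catastrophic cancellation (my own example): a 
quantity smaller than half an ulp of 1.0 disappears before a subtraction 
can reveal it.

```python
x = 1e-16               # smaller than half the spacing of doubles near 1.0
print((1.0 + x) - 1.0)  # 0.0 -- the true answer, 1e-16, is lost entirely
print(x)                # 1e-16, perfectly representable on its own
```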

Almost everything that most non-experts believe is true about doing 
calculations on a computer is false -- and that likely includes me. I'm 
sure I've got some wrong ideas too. Or at least incomplete ones.

This page gives some examples:

http://introcs.cs.princeton.edu/java/91float/

including a calculation of the harmonic sum 1/1 + 1/2 + 1/3 + 1/4 + ... 
Mathematically that sum diverges to infinity; numerically, it converges 
to a fixed, finite value.


 And when you deal with business (= money),
 integers are the obvious choice.

Unfortunately there is nothing obvious about using integers. If you want 
to represent $1.01, what could be more obvious than using 1.01? But 
that's the *wrong solution*.

Unfortunately using integers for money is trickier than you may think. If 
all you're doing is multiplying, adding and subtracting, then using 101 
for $1.01 is fine. But as soon as you start doing division, percentages, 
sales tax calculations, interest calculations, currency conversions, 
etc., you've got a problem. How do you divide 107 cents by 3? If you just 
truncate:

py> (107//3)*3
105

you've just lost two cents. If you round to the nearest whole number:

py> (round(107/3))*3
108

you've just invented a cent from thin air. Both answers are wrong. 
Depending on your application, you may pick one or the other, but either 
way, you have to care about rounding, and that's neither obvious nor easy.
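One common convention (a sketch of mine, not something from the thread) is 
to hand out the remainder cent by cent, so the shares still sum back to the 
original total:

```python
def split_cents(total, n):
    """Split an integer number of cents into n shares summing to total."""
    base, extra = divmod(total, n)
    # the first `extra` shares carry one extra cent each
    return [base + 1] * extra + [base] * (n - extra)

print(split_cents(107, 3))  # [36, 36, 35] -- no cent lost, none invented
```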


 I would venture to say that the real applications for Decimal are very
 rare. In practice, I'm afraid, people with rather a weak understanding
 of numbers and computation might gravitate toward Decimal unnecessarily.

Financial applications are one of the real applications for Decimal. You 
can think of Decimal numbers as an easy way to fake the use of integers, 
without having to worry about moving the decimal point around or come up 
with your own rounding modes.

py> from decimal import *
py> setcontext(Context(prec=3))
py> (Decimal(1.07)/3)*3
Decimal('1.07')
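In production financial code one would usually pin the rounding down 
explicitly with quantize() rather than lean on context precision; a minimal 
sketch of that style:

```python
from decimal import Decimal, ROUND_HALF_UP

price = Decimal("1.07")
# Round to whole cents, with the rounding mode stated explicitly.
share = (price / 3).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
print(share)  # 0.36
```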


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-02 Thread Rustom Mody
On Thursday, July 3, 2014 7:49:30 AM UTC+5:30, Steven D'Aprano wrote:
 On Wed, 02 Jul 2014 23:00:15 +0300, Marko Rauhamaa wrote:

  On the other hand, floating-point numbers are perfect whenever you deal
  with science and measurement. 

 /head-desk

<wink>

Just as there are even some esteemed members of this list who think
that c - a is a meaningful operation
  where
c is speed of light
a is speed of an automobile


</wink>
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-02 Thread Gregory Ewing

Rustom Mody wrote:

Just as there are even some esteemed members of this list who think
that c - a is a meaningful operation
  where
c is speed of light
a is speed of an automobile


Indeed, it should be (c - a) / (1 - (c*a)/c**2).
Although loss of precision might give you the
right answer anyway. :-)
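Greg's formula, transcribed as a sketch (the helper name and SI units are my 
assumptions -- u and v in metres per second):

```python
C = 299792458.0  # speed of light, m/s

def closing_speed(u, v):
    # relativistic velocity subtraction: (u - v) / (1 - u*v/c**2)
    return (u - v) / (1 - u * v / C**2)

# A photon recedes from a car at c no matter how fast the car drives;
# floating point reproduces that to within rounding error.
print(abs(closing_speed(C, 30.0) - C) < 1e-3)  # True
```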

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-02 Thread Rustom Mody
On Thursday, July 3, 2014 10:25:17 AM UTC+5:30, Gregory Ewing wrote:
 Rustom Mody wrote:
  Just as there are even some esteemed members of this list who think
  that c - a is a meaningful operation
where
  c is speed of light
  a is speed of an automobile

 Indeed, it should be (c - a) / (1 - (c*a)/c**2).
 Although loss of precision might give you the
 right answer anyway. :-)

:-)

Surprising how unfamiliar familiar equations like the Lorentz
transformation look when converted from its usual mathematicerese and
put into programmerese

I like to think unicode would help.
But I find it does not help much:

(c-a)/(1 - ca/c²)

[And I would have sworn there should be a √ somewhere?
Don't remember any of this…]
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-02 Thread Ian Kelly
On Wed, Jul 2, 2014 at 10:55 PM, Gregory Ewing
greg.ew...@canterbury.ac.nz wrote:
 Although loss of precision might give you the
 right answer anyway. :-)

There aren't that many digits in the speed of light.  Unless we're
talking about a very, very slow-moving automobile.
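Indeed (my check, not Ian's): both speeds are small integers by double 
standards, so the naive subtraction is exact.

```python
c = 299792458.0  # nine significant digits -- exactly representable
a = 30.0         # an automobile at 108 km/h
print(c - a)     # 299792428.0, with no rounding error at all
```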
-- 
https://mail.python.org/mailman/listinfo/python-list


1-0.95

2014-07-01 Thread Pedro Izecksohn
pedro@microboard:~$ /usr/bin/python3
Python 3.3.2+ (default, Feb 28 2014, 00:52:16) 
[GCC 4.8.1] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> 1-0.95
0.050000000000000044
 

  How to get 0.05 as result?

  bc has scale=2 . Has Python some similar feature?

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-01 Thread Mark Lawrence

On 01/07/2014 22:17, Pedro Izecksohn wrote:

pedro@microboard:~$ /usr/bin/python3
Python 3.3.2+ (default, Feb 28 2014, 00:52:16)
[GCC 4.8.1] on linux
Type "help", "copyright", "credits" or "license" for more information.

1-0.95

0.050000000000000044




   How to get 0.05 as result?

   bc has scale=2 . Has Python some similar feature?



Asked and answered roughly one trillion times.  Try searching for "python 
floating point", not that this is specific to python.


--
My fellow Pythonistas, ask not what our language can do for you, ask 
what you can do for our language.


Mark Lawrence



--
https://mail.python.org/mailman/listinfo/python-list


Re: 1-0.95

2014-07-01 Thread pecore
Pedro Izecksohn izecks...@yahoo.com writes:

 pedro@microboard:~$ /usr/bin/python3
 Python 3.3.2+ (default, Feb 28 2014, 00:52:16) 
 [GCC 4.8.1] on linux
 Type "help", "copyright", "credits" or "license" for more information.
 1-0.95
 0.050000000000000044
 

   How to get 0.05 as result?

print("%4.2f" % (1-0.95))

i.e., you can change how a result is displayed, not its internal
representation
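The same idea with newer spellings (format() and, from Python 3.6 on, 
f-strings -- later additions, shown for completeness):

```python
result = 1 - 0.95
print("%4.2f" % result)       # 0.05  (old-style formatting, as above)
print(format(result, ".2f"))  # 0.05
print(f"{result:.2f}")        # 0.05  (f-string, Python 3.6+)
```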
-- 
https://mail.python.org/mailman/listinfo/python-list