Re: Working with the set of real numbers

2014-03-06 Thread Oscar Benjamin
On 5 March 2014 12:57, Dave Angel da...@davea.name wrote:
  Oscar Benjamin oscar.j.benja...@gmail.com Wrote in message:
 On 4 March 2014 23:20, Dave Angel da...@davea.name wrote:

 On the assumption that division by 2 is very fast,  and that a
  general multiply isn't too bad, you could improve on Newton by
  observing that the slope is 2.

err = n - guess * guess
 guess += err/2

 I guess you mean like this:

 def sqrt(n):
 err = guess = 1
 while err > 1e-10:
 err = n - guess * guess
 guess += err/2
 return guess

 This requires log2(10)*N iterations to get N digits.

 No idea how you came up with that,  but I see an error in my
  stated algorithm,  which does surely penalize it. The slope isn't
  2, but 2x.  So the line should have been
 guess += err/(2*guess)

Ah, now I understand. This is the same algorithm that Chris posted
which in turn is the same as the one that I had previously posted: all
three are just using Newton's method.

Newton's method uses the slope argument to find a root of f(x) i.e. an
x0 such that f(x0) = 0. We start with a guess x[n]. We then find the
value of the function f(x[n]) and slope fp(x[n]) and extrapolating to
the point where f(x) hits the x-axis we find an improved estimate with

x[n+1] = x[n] - f(x[n]) / fp(x[n])

In our case f(x) = x**2 - y and so fp(x) = 2*x which gives

x[n+1] = x[n] - (x[n]**2 - y) / (2*x[n])

and after rearranging we can express this as

x[n+1] = (x[n] + y/x[n]) / 2.

So my loop

while x ** 2 - y > x * eps:
    x = (x + y/x) / 2

and Chris' loop:

while abs(guess1-guess2) > epsilon:
    guess1 = n/guess2
    guess2 = (guess1 + guess2)/2

and now your loop

while err > 1e-10:
    err = n - guess * guess
    guess += err/(2 * guess)

are all performing the same basic algorithm. The only significant
difference is that mine tests for a relative error condition whereas
the other two test for absolute error. This means that it will still
converge to an answer with the correct precision even when the root is
large e.g. sqrt(1e100). The other two are susceptible to infinite
loops in this case.

 Now if you stop the loop after 3 iterations (or use some other
  approach to get a low-precision estimate,  then you can calculate
  scale = 1/(2*estimate)

 and then for remaining iterations,
 guess += err *scale

Fair enough. That's a reasonable suggestion for improvement. This is
often known as the modified Newton's method. For lack of a better
reference see here:
http://math.stackexchange.com/questions/645088/modified-newtons-method

Using an approximate slope to avoid division can be a significant
optimisation. This is especially true when using the multidimensional
generalisation of Newton's method since in this case the division is
replaced with solving a system of linear equations (vastly more
expensive). However as noted in that stackexchange answer the use of
an approximate slope prevents quadratic convergence so in the
asymptotic case where we want large numbers of digits it can be much
slower.

Actually it's worse than that since we are guaranteed to converge to
an incorrect answer. The update step x = (x + y/x) / 2 will converge
when x^2 = y but we replace it with x = (x + y/x0)/2 where x0 is near
to the true root. This will converge when x == (x + y/x0)/2 i.e. when
x = y/x0 which is not the correct answer (and can be computed without
a loop in any case). It's possible to fix that by alternating between
steps where we improve our estimate of the slope and steps where we
use that estimate for refinement, but I don't think the advantage of
avoiding division outweighs the loss of quadratic convergence once we
get beyond tens of digits.

You can compare them yourself with this code:

from decimal import Decimal as D, localcontext

def sqrt1(y, prec=1000):
    '''Solve x**2 = y'''
    assert y > 0
    eps = D(10) ** -(prec + 5)
    x = D(y)
    with localcontext() as ctx:
        ctx.prec = prec + 10
        while abs(x ** 2 - y) > x * eps:
            x = (x + y/x) / 2
    return x

def sqrt2(y, prec=1000):
    '''Solve x**2 = y'''
    assert y > 0
    eps = D(10) ** -(prec + 5)
    x = D(y)
    with localcontext() as ctx:
        ctx.prec = prec + 10
        for n in range(7):
            x = (x + y/x) / 2
        x0r = 1 / x
        while abs(x ** 2 - y) > x * eps:
            err = x**2 - y
            err2 = x - x0r * y
            x = (x + x0r * y) / 2
    return x

sqrt2(2) goes into an infinite loop at an error level of ~1e-100 (the
exact point where this happens depends on how many warmup iterations
it uses).

[snip]

 This method works out much slower than Newton with division at 1
 digits: 40s (based on a single trial) vs 80ms (timeit result).

 Well considering you did not special-case the divide by 2, I'm not
  surprised it's slower.

Multiplying by D('0.5') instead of dividing by 2 makes no measurable
difference to the timings. I don't know whether that's because it's
already 

Re: Working with the set of real numbers

2014-03-06 Thread Chris Angelico
On Thu, Mar 6, 2014 at 11:27 PM, Oscar Benjamin
oscar.j.benja...@gmail.com wrote:
 So my loop

 while x ** 2 - y > x * eps:
 x = (x + y/x) / 2

 and Chris' loop:

 while abs(guess1-guess2) > epsilon:
 guess1 = n/guess2
 guess2 = (guess1 + guess2)/2

 and now your loop

 while err > 1e-10:
 err = n - guess * guess
 guess += err/(2 * guess)

 are all performing the same basic algorithm. The only significant
 difference is that mine tests for a relative error condition whereas
 the other two test for absolute error. This means that it will still
 converge to an answer with the correct precision even when the root is
 large e.g. sqrt(1e100). The other two are susceptible to infinite
 loops in this case.

This is one place where I find the REXX 'numeric fuzz' setting to be a
useful trick. The definition is that, whenever two numbers are
compared for equality, the 'numeric digits' setting is effectively
reduced by the fuzz for that comparison. So let's say we're
calculating to a precision of 100 digits ('numeric digits 100'). You
could then code the loop as do until guess1=guess2, with a numeric
fuzz of, say, 2. That'll give you 98 digits of accuracy, *regardless
of scale*. It's like setting epsilon to whatever would be 1e-98 if we
were working with numbers between 0 and 1, more or less. A sliding
epsilon.
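
A rough Python sketch of that idea (my approximation, not how REXX
actually implements fuzz): treat the comparison as if only
(digits - fuzz) significant digits were carried, i.e. use a relative
tolerance that slides with the magnitude of the values:

def fuzzy_equal(a, b, digits=100, fuzz=2):
    # compare as though only (digits - fuzz) significant digits matter
    tol = abs(b) * 10 ** -(digits - fuzz)
    return abs(a - b) <= tol

The Newton loop would then run until fuzzy_equal(guess1, guess2) is
true, terminating at 98 digits of agreement whatever the scale of n.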

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread Steven D'Aprano
Following up on my own post.

On Wed, 05 Mar 2014 07:52:01 +, Steven D'Aprano wrote:

 On Tue, 04 Mar 2014 23:25:37 -0500, Roy Smith wrote:
 
 I stopped paying attention to mathematicians when they tried to
 convince me that the sum of all natural numbers is -1/12.
[...]
 In effect, the author Mark Chu-Carroll in the GoodMath blog above wants
 to make the claim that the divergent sum is not equal to ζ(-1), but
 everywhere you find that divergent sum in your calculations you can rub
 it out and replace it with ζ(-1), which is -1/12. In other words, he's
 accepting that the divergent sum behaves *as if* it were equal to -1/12,
 he just doesn't want to say that it *is* equal to -1/12.
 
 Is this a mere semantic trick, or a difference of deep and fundamental
 importance? Mark C-C thinks it's an important difference. Mathematicians
 who actually work on this stuff all the time think he's making a
 semantic trick to avoid facing up to the fact that sums of infinite
 sequences don't always behave like sums of finite sequences.

Here's another mathematician who is even more explicit about what she's 
complaining about:

http://blogs.scientificamerican.com/roots-of-unity/2014/01/20/is-the-sum-of-positive-integers-negative/

[quote]
There is a meaningful way to associate the number -1/12 to the 
series 1+2+3+4…, but in my opinion, it is misleading to call 
it the sum of the series.
[end quote]

Evelyn Lamb's objection isn't about the mathematics that leads to the 
conclusion that the sum of natural numbers is equivalent to -1/12. That 
conclusion is pretty much bulletproof. Her objection is over the use of 
the word "equals" to describe that association. Or possibly the use of 
the word "sum" to describe what we're doing when we replace the infinite 
series with -1/12.

Whatever it is that we're doing, it doesn't seem to have the same 
behavioural properties as summing finitely many finite numbers. So 
perhaps she is right, and we shouldn't call the sum of a divergent series 
a sum?
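
For anyone who wants to poke at the association numerically rather than
take it on faith, the mpmath library (assumed installed) evaluates the
analytic continuation directly; note that zeta(-1) is computed by
continuation, not by literally adding 1+2+3+...:

from mpmath import mp, zeta

mp.dps = 30              # 30 decimal digits of working precision
print(zeta(-1))          # -0.0833333..., i.e. -1/12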


-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread wxjmfauth
Mathematics?
The Flexible String Representation is a very nice example
of a mathematical absurdity.

jmf

PS Do not even think to expect to contradict me. Hint:
sheet of paper and pencil.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-05 Thread Ned Batchelder

On 3/5/14 4:00 AM, wxjmfa...@gmail.com wrote:

Mathematics?
The Flexible String Representation is a very nice example
of a mathematical absurdity.

jmf

PS Do not even think to expect to contradict me. Hint:
sheet of paper and pencil.



Reminder to everyone: JMF makes no sense when he talks about the FSR, 
and absurdly seems to think hinting at paper and pencil will convince us 
he is right.  Don't engage with him on this topic.


--
Ned Batchelder, http://nedbatchelder.com

--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-05 Thread Oscar Benjamin
On 4 March 2014 23:20, Dave Angel da...@davea.name wrote:

 One problem with complexity claims is that it's easy to miss some
  contributing time eaters. I haven't done any measuring on modern
  machines nor in python, but I'd assume that multiplies take
  *much* longer for large integers, and that divides are much
  worse. So counting iterations isn't the whole story.

Agreed but there's a big difference between log(N) iterations and N iterations!

 On the assumption that division by 2 is very fast,  and that a
  general multiply isn't too bad, you could improve on Newton by
  observing that the slope is 2.

err = n - guess * guess
 guess += err/2

I guess you mean like this:

def sqrt(n):
    err = guess = 1
    while err > 1e-10:
        err = n - guess * guess
        guess += err/2
    return guess

This requires log2(10)*N iterations to get N digits. So the penalty
for using division would have to be extreme in order for this to
be better. Using Decimal to get many digits we can write that as:

from decimal import Decimal as D, localcontext

def sqrt2(n, prec=1000):
    '''Solve x**2 = y'''
    eps = D(10) ** -(prec + 5)
    err = guess = D(1)
    with localcontext() as ctx:
        ctx.prec = prec + 10
        while abs(err) > eps:
            err = n - guess*guess
            guess += err/2
    return guess

This method works out much slower than Newton with division at 1
digits: 40s (based on a single trial) vs 80ms (timeit result).

 Some 37 years ago I microcoded  a math package which included
  square root.  All the math was decimal, and there was no hardware
  multiply or divide. The algorithm I came up with generated the
  answer one digit at a time, with no subsequent rounding needed.
  And it took just a little less time than two divides. For that
  architecture,  Newton's method would've been too
  slow.

If you're working with a fixed small precision then it might be.

 Incidentally, the algorithm did no divides, not even by 2. No
  multiplies either.  Just repeated subtraction,  sorta like divide
  was done.

 If anyone is curious,  I'll be glad to describe the algorithm;
  I've never seen it published, before or since.  I got my
  inspiration from a method used in mechanical,  non-motorized,
  adding machines.  My father had shown me that approach in the
  50's.

I'm curious.


Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread Oscar Benjamin
On 5 March 2014 07:52, Steven D'Aprano st...@pearwood.info wrote:
 On Tue, 04 Mar 2014 23:25:37 -0500, Roy Smith wrote:

 I stopped paying attention to mathematicians when they tried to convince
 me that the sum of all natural numbers is -1/12.

 I'm pretty sure they did not. Possibly a physicist may have tried to tell
 you that, but most mathematicians consider physicists to be lousy
 mathematicians, and the mere fact that their results seem to actually
 work in practice is an embarrassment for the entire universe. A
 mathematician would probably have said that the sum of all natural
 numbers is divergent and therefore there is no finite answer.

Why the dig at physicists? I think most physicists would be able to
tell you that the sum of all natural numbers is not -1/12. In fact
most people with very little background in mathematics can tell you
that.

The argument that the sum of all natural numbers comes to -1/12 is
just some kind of hoax. I don't think *anyone* seriously believes it.

 Well, that is, apart from mathematicians like Euler and Ramanujan. When
 people like them tell you something, you better pay attention.

Really? Euler didn't even know about absolutely convergent series (the
point in question) and would quite happily combine infinite series to
obtain a formula.

snip
 Normally mathematicians will tell you that divergent series don't have a
 total. That's because often the total you get can vary depending on how
 you add them up. The classic example is summing the infinite series:

 1 - 1 + 1 - 1 + 1 - ...

There is a distinction between absolute convergence and convergence.
Rearranging the order of the terms in the above infinite sum is
invalid because the series is not absolutely convergent. For this
particular series there is no sense in which its sum converges on an
answer but there are other series that cannot be rearranged while
still being convergent:
http://en.wikipedia.org/wiki/Harmonic_series_(mathematics)#Alternating_harmonic_series
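
As a quick numerical illustration (my sketch, not part of the original
post) of why rearranging a conditionally convergent series is invalid:
the alternating harmonic series sums to log(2), but the standard
rearrangement that takes two positive terms for every negative one
converges to 1.5*log(2) instead:

from math import log

def alternating(n):
    # partial sum of 1 - 1/2 + 1/3 - 1/4 + ...
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def rearranged(n):
    # same terms reordered: 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...
    total, pos, neg = 0.0, 1, 2
    for _ in range(n):
        total += 1 / pos + 1 / (pos + 2) - 1 / neg
        pos += 4
        neg += 2
    return total

print(alternating(100000), log(2))        # both about 0.6931
print(rearranged(100000), 1.5 * log(2))   # both about 1.0397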

Personally I think it's reasonable to just say that the sum of the
natural numbers is infinite rather than messing around with terms like
"undefined", "divergent", or "existence". There is a clear difference
between a series (or any limit) that fails to converge asymptotically
and another that just goes to +-infinity. The difference is usually
also relevant to any practical application of this kind of maths.


Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-05 Thread Mark Lawrence

On 05/03/2014 12:21, Oscar Benjamin wrote:


Why the dig at physicists? I think most physicists would be able to
tell you that the sum of all natural numbers is not -1/12. In fact
most people with very little background in mathematics can tell you
that.



I'll put that one to the test tomorrow morning when the bin men come 
round.  I fully expect them to dial 999 and ask that the paramedics are 
armed with plenty of sedatives.


--
My fellow Pythonistas, ask not what our language can do for you, ask 
what you can do for our language.


Mark Lawrence



--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-05 Thread Dave Angel
 Oscar Benjamin oscar.j.benja...@gmail.com Wrote in message:
 On 4 March 2014 23:20, Dave Angel da...@davea.name wrote:

 One problem with complexity claims is that it's easy to miss some
  contributing time eaters. I haven't done any measuring on modern
  machines nor in python, but I'd assume that multiplies take
  *much* longer for large integers, and that divides are much
  worse. So counting iterations isn't the whole story.
 
 Agreed but there's a big difference between log(N) iterations and N 
 iterations!
 
 On the assumption that division by 2 is very fast,  and that a
  general multiply isn't too bad, you could improve on Newton by
  observing that the slope is 2.

err = n - guess * guess
 guess += err/2
 
 I guess you mean like this:
 
 def sqrt(n):
 err = guess = 1
 while err > 1e-10:
 err = n - guess * guess
 guess += err/2
 return guess
 
 This requires log2(10)*N iterations to get N digits. 

No idea how you came up with that,  but I see an error in my
 stated algorithm,  which does surely penalize it. The slope isn't
 2, but 2x.  So the line should have been
guess += err/(2*guess)

Now if you stop the loop after 3 iterations (or use some other
 approach to get a low-precision estimate,  then you can calculate
 scale = 1/(2*estimate)

and then for remaining iterations, 
guess += err *scale

 So the penalty
  for using division would have to be extreme in order for this to
  be better. Using Decimal to get many digits we can write that as:
 
 def sqrt2(n, prec=1000):
 '''Solve x**2 = y'''
 eps = D(10) ** -(prec + 5)
 err = guess = D(1)
 with localcontext() as ctx:
 ctx.prec = prec + 10
 while abs(err) > eps:
 err = n - guess*guess
 guess += err/2
 return guess
 
 This method works out much slower than Newton with division at 1
 digits: 40s (based on a single trial) vs 80ms (timeit result).

Well considering you did not special-case the divide by 2, I'm not
 surprised it's slower.

 
 Some 37 years ago I microcoded  a math package which included
  square root.  All the math was decimal, and there was no hardware
  multiply or divide. The algorithm I came up with generated the
  answer one digit at a time, with no subsequent rounding needed.
  And it took just a little less time than two divides. For that
  architecture,  Newton's method would've been too
  slow.
 
 If you're working with a fixed small precision then it might be.
 
 Incidentally, the algorithm did no divides, not even by 2. No
  multiplies either.  Just repeated subtraction,  sorta like divide
  was done.

 If anyone is curious,  I'll be glad to describe the algorithm;
  I've never seen it published, before or since.  I got my
  inspiration from a method used in mechanical,  non-motorized,
  adding machines.  My father had shown me that approach in the
  50's.
 
 I'm curious.
 

A later message,  I guess.  I can't write that much on the tablet.


-- 
DaveA

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-05 Thread Dave Angel
 Dave Angel da...@davea.name Wrote in message:
  Oscar Benjamin oscar.j.benja...@gmail.com Wrote in message:
 On 4 March 2014 23:20, Dave Angel da...@davea.name wrote:


 If anyone is curious,  I'll be glad to describe the algorithm;
  I've never seen it published, before or since.  I got my
  inspiration from a method used in mechanical,  non-motorized,
  adding machines.  My father had shown me that approach in the
  50's.
 
 I'm curious.
 
 
 A later message,  I guess.  I can't write that much on the tablet.
 
 
 Given a microcodable architecture with no hardware support for multiply or 
divide, clearly multiply will be several times as fast as divide (at least).  
There was a BCD ALU, so add and subtract of decimal values was quite 
reasonable.  All floating point logic, however, is just microcode.

Divide is implemented via repeated subtraction of the divisor from
 the dividend.  The count of how many subtracts is done is the
 quotient. Naturally, this is combined with digit shifts, so you
 find one quotient digit at a time.  For a 13 digit result, the
 maximum subtracts are 13*10.
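
In modern Python terms the divide step might look roughly like this (my
reconstruction of the idea, not Dave's microcode; the *10 shift stands
in for the decimal digit shift, and the operands are assumed normalised
so that dividend < 10 * divisor):

def divide_by_subtraction(dividend, divisor, columns=13):
    # one quotient digit per column, found by counting subtractions
    quotient = 0
    for _ in range(columns):
        digit = 0
        while dividend >= divisor:
            dividend -= divisor
            digit += 1          # at most 9 subtractions per column
        quotient = quotient * 10 + digit
        dividend *= 10          # shift to the next column
    return quotient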

Multiply is much faster, as you know ahead of time for each column
 how many adds you're supposed to do.  So you can have
 precalculated multiples of the multiplicand on hand, and you can
 subtract instead of add when appropriate.

Square root is implemented as a kind of variable division, where
 the divisor is changing constantly.  Everyone knows that the
 sum of the first n odd numbers is n squared.  So if you started
 with a square, you could repeatedly subtract odd numbers from it
 till you reached zero, and the square root will be roughly half
 the last odd number subtracted.

So to make this work across multiple columns it turns out you can
 accumulate these odd numbers, doing the appropriate shifts after
 each column, and if you take the last number shifted, you can
 just add 1 and divide by 2.
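
Here is a small, simplified Python sketch of that digit-by-digit idea
as I understand it (the names and the explicit *10/*100 shifts are
mine, and it counts the subtractions per column directly instead of
recovering the digit from the last odd number; the original microcode
worked in BCD with digit shifts and no multiplies):

def sqrt_digits(n, frac_digits=10):
    # Decimal square root of a non-negative integer n, one digit per
    # column, using only subtraction of successive odd numbers.
    s = str(n)
    if len(s) % 2:
        s = "0" + s
    pairs = [int(s[i:i+2]) for i in range(0, len(s), 2)]
    pairs += [0] * frac_digits      # extra columns for fractional digits
    root = 0                        # root digits found so far
    remainder = 0
    digits = []
    for pair in pairs:
        remainder = remainder * 100 + pair
        digit = 0
        odd = 20 * root + 1         # odd numbers for this column
        while remainder >= odd:
            remainder -= odd
            odd += 2
            digit += 1
        root = root * 10 + digit
        digits.append(digit)
    return digits

print(sqrt_digits(2))   # [1, 4, 1, 4, 2, 1, 3, 5, 6, 2, 3]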

In many architectures, that would be as far as you can go, but in
 the particular one I was using, generating those pesky odd
 numbers was more expensive than you'd expect.  So it turned out
 to be quicker to just do twice as many subtracts.

Instead of subtracting 1,3, 5, etc., till the value went negative,
 we subtract 0 and 1, 1 and 2, 2 and 3, etc.  You have twice as
 many subtracts, but no divide by 2 at the end.  And for each
 column, you need not go beyond 8 + 9, since if it were more than
 that, we would have picked it up in the previous column.  So you
 do not have to propagate the carry across the trial
 divisor.

Supposing the correct result will be 7.1234567, you will at one
 stage of operations, be subtracting
       71230
       71231
       71231
       71232
       71232
       71233
       71233
       71234
The next subtract will make the result go negative, so you either
 do it, detect negative and undo it, or you do some compare
 operation.


I am here glossing over all the details of normalizing the
 dividend so the exponent is even, and calculating the final
 exponent, which at first approximation is half the original
 one.


-- 
DaveA

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread Steven D'Aprano
On Wed, 05 Mar 2014 12:21:37 +, Oscar Benjamin wrote:

 On 5 March 2014 07:52, Steven D'Aprano st...@pearwood.info wrote:
 On Tue, 04 Mar 2014 23:25:37 -0500, Roy Smith wrote:

 I stopped paying attention to mathematicians when they tried to
 convince me that the sum of all natural numbers is -1/12.

 I'm pretty sure they did not. Possibly a physicist may have tried to
 tell you that, but most mathematicians consider physicists to be lousy
 mathematicians, and the mere fact that their results seem to actually
 work in practice is an embarrassment for the entire universe. A
 mathematician would probably have said that the sum of all natural
 numbers is divergent and therefore there is no finite answer.
 
 Why the dig at physicists? 

There is considerable professional rivalry between the branches of 
science. Physicists tend to look at themselves as the paragon of 
scientific hardness, and look down at mere chemists, who look down at 
biologists. (Which is ironic really, since the actual difficulty in doing 
good science is in the opposite order. Hundreds of years ago, using quite 
primitive techniques, people were able to predict the path of comets 
accurately. I'd like to see them predict the path of a house fly.) 
According to this greedy reductionist viewpoint, since all living 
creatures are made up of chemicals, biology is just a subset of 
chemistry, and since chemicals are made up of atoms, chemistry is 
likewise just a subset of physics.

Physics is the fundamental science, at least according to the physicists, 
and Real Soon Now they'll have a Theory Of Everything, something small 
enough to print on a tee-shirt, which will explain everything. At least 
in principle.

Theoretical physicists who work on the deep, fundamental questions of 
Space and Time tend to be the worst for this reductionist streak. They 
have a tendency to think of themselves as elites in an elite field of 
science. Mathematicians, possibly out of professional jealousy, like to 
look down at physics as mere applied maths.

They also get annoyed that physicists often aren't as rigorous with their 
maths as they should be. The controversy over renormalisation in Quantum 
Electrodynamics (QED) is a good example. When you use QED to try to 
calculate the strength of the electron's electric field, you end up 
trying to sum a lot of infinities. Basically, the interaction of the 
electron's charge with its own electric field gets larger the more 
closely you look. The sum of all those interactions is a divergent 
series. So the physicists basically cancelled out all the infinities, and 
lo and behold just like magic what's left over gives you the right 
answer. Richard Feynman even described it as hocus-pocus.

The mathematicians *hated* this, and possibly still do, because it looks 
like cheating. It's certainly not rigorous, at least it wasn't back in 
the 1940s. The mathematicians were appalled, and loudly said "You can't 
do that!" and the physicists basically said "Oh yeah, watch us!" and 
ignored them, and then the Universe had the terribly bad manners to side 
with the physicists. QED has turned out to be *astonishingly* accurate, 
the most accurate physical theory of all time. The hocus-pocus worked.


 I think most physicists would be able to tell
 you that the sum of all natural numbers is not -1/12. In fact most
 people with very little background in mathematics can tell you that.

Ah, but there's the rub. People with *very little* background in 
mathematics will tell you that. People with *a very deep and solid* 
background in mathematics will tell you different, particularly if their 
background is complex analysis. (That's *complex numbers*, not 
complicated -- although it is complicated too.)


 The argument that the sum of all natural numbers comes to -1/12 is just
 some kind of hoax. I don't think *anyone* seriously believes it.

You would be wrong. I suggest you read the links I gave earlier. Even the 
mathematicians who complain about describing this using the word "equals" 
don't try to dispute the fact that you can identify the sum of natural 
numbers with ζ(-1), or that ζ(-1) = -1/12. They simply dispute that we 
should describe this association as "equals".

What nobody believes is that the sum of natural numbers is a convergent 
series that sums to -1/12, because it is provably not.

In other words, this is not an argument about the maths. Everyone who 
looks at the maths has to admit that it is sound. It's an argument about 
the words we use to describe this. Is it legitimate to say that the 
infinite sum *equals* -1/12? Or only that the series has the value -1/12? 
Or that we can associate (talk about a sloppy, non-rigorous term!) the 
series with -1/12?


 Well, that is, apart from mathematicians like Euler and Ramanujan. When
 people like them tell you something, you better pay attention.
 
 Really? Euler didn't even know about absolutely convergent series (the
 point in question) and would quite happily 

Re: Working with the set of real numbers

2014-03-05 Thread Steven D'Aprano
On Wed, 05 Mar 2014 12:50:06 +, Mark Lawrence wrote:

 On 05/03/2014 12:21, Oscar Benjamin wrote:

 Why the dig at physicists? I think most physicists would be able to
 tell you that the sum of all natural numbers is not -1/12. In fact most
 people with very little background in mathematics can tell you that.


 I'll put that one to the test tomorrow morning when the bin men come
 round.

Do you seriously think that garbos (bin men) know more about mathematics 
than mathematicians?


 I fully expect them to dial 999 and ask that the paramedics are
 armed with plenty of sedatives.

You know that rather large piece of machinery in Europe called the Large 
Hadron Collider? The one which is generating some rather extraordinary 
proofs of fundamental physics, such as the Higgs Boson? A lot of that 
physics is based on theory which uses the same logic and mathematics that 
you are mocking.

Laugh away, but the universe behaves as if the sum of the natural numbers 
is -1/12.



-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread Chris Angelico
On Thu, Mar 6, 2014 at 4:43 AM, Steven D'Aprano
steve+comp.lang.pyt...@pearwood.info wrote:
 Physics is the fundamental science, at least according to the physicists,
 and Real Soon Now they'll have a Theory Of Everything, something small
 enough to print on a tee-shirt, which will explain everything. At least
 in principle.

Everything is, except what isn't.

That's my theory, and I'm sticking to it!

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread Chris Kaynor
On Wed, Mar 5, 2014 at 9:43 AM, Steven D'Aprano 
steve+comp.lang.pyt...@pearwood.info wrote:

 At one time, Euler summed an infinite series and got -1, from which he
 concluded that -1 was (in some sense) larger than infinity. I don't know
 what justification he gave, but the way I think of it is to take the
 number line from -∞ to +∞ and then bend it back upon itself so that there
 is a single infinity, rather like the projective plane only in a single
 dimension. If you start at zero and move towards increasingly large
 numbers, then like Buzz Lightyear you can go to infinity and beyond:

 0 - 1 - 10 - 1 - ... ∞ - ... -1 - -10 - -1 - 0


This makes me think that maybe the universe is using ones' or two's complement
math (is there a negative zero?)...

Chris
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread Grant Edwards
On 2014-03-05, Chris Kaynor ckay...@zindagigames.com wrote:
 On Wed, Mar 5, 2014 at 9:43 AM, Steven D'Aprano 
 steve+comp.lang.pyt...@pearwood.info wrote:

 At one time, Euler summed an infinite series and got -1, from which he
 concluded that -1 was (in some sense) larger than infinity. I don't know
 what justification he gave, but the way I think of it is to take the
 number line from -∞ to +∞ and then bend it back upon itself so that there
 is a single infinity, rather like the projective plane only in a single
 dimension. If you start at zero and move towards increasingly large
 numbers, then like Buzz Lightyear you can go to infinity and beyond:

 0 - 1 - 10 - 1 - ... ∞ - ... -1 - -10 - -1 - 0


 This makes me think that maybe the universe is using ones' or two's complement
 math (is there a negative zero?)...

If the Universe (like most all Python implementations) is using
IEEE-754 floating point, there is.
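
Python floats will happily show it (nothing deep, just the sign bit):

import math

x = -0.0
print(x == 0.0)               # True: they compare equal
print(math.copysign(1.0, x))  # -1.0: but the sign bit survives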

-- 
Grant Edwards   grant.b.edwardsYow! This PIZZA symbolizes
  at   my COMPLETE EMOTIONAL
  gmail.comRECOVERY!!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread Oscar Benjamin
On 5 March 2014 17:43, Steven D'Aprano
steve+comp.lang.pyt...@pearwood.info wrote:
 On Wed, 05 Mar 2014 12:21:37 +, Oscar Benjamin wrote:

 The argument that the sum of all natural numbers comes to -1/12 is just
 some kind of hoax. I don't think *anyone* seriously believes it.

 You would be wrong. I suggest you read the links I gave earlier. Even the
 mathematicians who complain about describing this using the word equals
 don't try to dispute the fact that you can identify the sum of natural
 numbers with ζ(-1), or that ζ(-1) = -1/12. They simply dispute that we
 should describe this association as equals.

 What nobody believes is that the sum of natural numbers is a convergent
 series that sums to -1/12, because it is provably not.

 In other words, this is not an argument about the maths. Everyone who
 looks at the maths has to admit that it is sound. It's an argument about
 the words we use to describe this. Is it legitimate to say that the
 infinite sum *equals* -1/12? Or only that the series has the value -1/12?
 Or that we can associate (talk about a sloppy, non-vigorous term!) the
 series with -1/12?

This is the point. You can identify numbers with many different
things. It does not mean to say that the thing is equal to that
number. I can associate the number 2 with my bike since it has 2
wheels. That doesn't mean that the bike is equal to 2.

So the problem with saying that the sum of the natural numbers equals
-1/12 is precisely as you say with the word equals because they're
not equal!

If you restate the conclusion in a more accurate (but technical and less
accessible) way -- that the analytic continuation of a related set of
convergent series has the value -1/12 at the point that would
correspond to this divergent series -- then it becomes less mysterious.
Do I really have to associate the finite negative value found in the
analytic continuation with the sum of the series that is provably
greater than any finite number?

snip

 At one time, Euler summed an infinite series and got -1, from which he
 concluded that -1 was (in some sense) larger than infinity. I don't know
 what justification he gave, but the way I think of it is to take the
 number line from -∞ to +∞ and then bend it back upon itself so that there
 is a single infinity, rather like the projective plane only in a single
 dimension. If you start at zero and move towards increasingly large
 numbers, then like Buzz Lightyear you can go to infinity and beyond:

 0 - 1 - 10 - 1 - ... ∞ - ... -1 - -10 - -1 - 0

 In this sense, -1/12 is larger than infinity.

There are many examples that appear to show wrapping round from
+infinity to -infinity e.g. the tan function. The thing is that it is
not really physical (or meaningful in any direct sense).

So for example I might consider the forces on a particle, apply
Newton's 2nd law and arrive at a differential equation for the
acceleration of the particle, solve the equation and find that the
position of the particle at time t is given by tan(t). This would seem
to imply that as t increases toward pi/2 the particle heads off
infinity miles West but at the exact time pi/2 it wraps around to
reappear at infinity miles East and starts heading back toward its
starting point. The truth is less interesting: the solution tan(t)
becomes invalid at pi/2 and mathematics can tell us nothing about what
happens after that even if all the physics we used was exactly true.

 Now of course this is an ad hoc sloppy argument, but I'm not a
 professional mathematician. However I can tell you that it's pretty close
 to what the professional mathematicians and physicists do with negative
 absolute temperatures, and that is rigorous.

 http://en.wikipedia.org/wiki/Negative_temperature

The key point from that page is the sentence "A definition of
temperature can be based on the relationship ...". It is clear that
temperature is a theoretical abstraction. We have intuitive
understandings of what it means but in order for the current body of
thermodynamic theory to be consistent it is necessary to sometimes
give negative values to the temperature. There's nothing unintuitive
about negative temperatures if you understand the usual thermodynamic
definitions of temperature.

 Personally I think it's reasonable to just say that the sum of the
 natural numbers is infinite rather than messing around with terms like
 undefined, divergent, or existence. There is a clear difference between
 a series (or any limit) that fails to converge  asymptotically and
 another that just goes to +-infinity. The difference is usually also
 relevant to any practical application of this kind of maths.

 And this is where you get it exactly backwards. The *practical
 application* comes from physics, where they do exactly what you argue
 against: they associate ζ(-1) with the sum of the natural numbers (see, I
 too can avoid the word equals too), and *it works*.

I don't know all the details of what they do there and whether or not

Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread Roy Smith
In article 53176225$0$29987$c3e8da3$54964...@news.astraweb.com,
 Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:

 Physics is the fundamental science, at least according to the physicists, 
 and Real Soon Now they'll have a Theory Of Everything, something small 
 enough to print on a tee-shirt, which will explain everything. At least 
 in principle.

A mathematician, a chemist, and a physicist are arguing the nature of 
prime numbers.  The chemist says, "All odd numbers are prime.  Look, I 
can prove it.  Three is prime.  Five is prime.  Seven is prime."  The 
mathematician says, "That's nonsense.  Nine is not prime."  The 
physicist looks at him and says, "Hmmm, you may be right, but eleven 
is prime, and thirteen is prime.  It appears that within the limits of 
experimental error, all odd numbers are indeed prime!"
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread Steven D'Aprano
On Wed, 05 Mar 2014 21:31:51 -0500, Roy Smith wrote:

 In article 53176225$0$29987$c3e8da3$54964...@news.astraweb.com,
  Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:
 
 Physics is the fundamental science, at least according to the
 physicists, and Real Soon Now they'll have a Theory Of Everything,
 something small enough to print on a tee-shirt, which will explain
 everything. At least in principle.
 
 A mathematician, a chemist, and a physicist are arguing the nature of
 prime numbers.  The chemist says, All odd numbers are prime.  Look, I
 can prove it.  Three is prime.  Five is prime.  Seven is prime.  The
 mathematician says, That's nonsense.  Nine is not prime.  The
 physicist looks at him and says, Hmmm, you may be right, but eleven is
 prime, and thirteen is prime.  It appears that within the limits of
 experimental error, all odd number are indeed prime!

They ask a computer programmer to adjudicate who is right, so he writes a 
program to print out all the primes:

1 is prime
1 is prime
1 is prime
1 is prime
1 is prime
...



-- 
Steven D'Aprano
http://import-that.dreamwidth.org/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread Chris Angelico
On Thu, Mar 6, 2014 at 2:06 PM, Steven D'Aprano
steve+comp.lang.pyt...@pearwood.info wrote:
 They ask a computer programmer to adjudicate who is right, so he writes a
 program to print out all the primes:

 1 is prime
 1 is prime
 1 is prime
 1 is prime
 1 is prime

And he claimed that he was correct, because he had - as is known to be
true in reality - a countably infinite number of primes.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread Grant Edwards
On 2014-03-06, Roy Smith r...@panix.com wrote:
 In article 53176225$0$29987$c3e8da3$54964...@news.astraweb.com,
  Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:

 Physics is the fundamental science, at least according to the
 physicists, and Real Soon Now they'll have a Theory Of Everything,
 something small enough to print on a tee-shirt, which will explain
 everything. At least in principle.

 A mathematician, a chemist, and a physicist are arguing the nature of 
 prime numbers.  The chemist says, All odd numbers are prime.  Look, I 
 can prove it.  Three is prime.  Five is prime.  Seven is prime.  The 
 mathematician says, That's nonsense.  Nine is not prime.  The 
 physicist looks at him and says, Hmmm, you may be right, but eleven 
 is prime, and thirteen is prime.  It appears that within the limits of 
 experimental error, all odd number are indeed prime!

Assuming spherical odd numbers in a vacuum on a frictionless surface,
of course.

-- 
Grant


-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-05 Thread Roy Smith
In article 5317e640$0$29985$c3e8da3$54964...@news.astraweb.com,
 Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:

 On Wed, 05 Mar 2014 21:31:51 -0500, Roy Smith wrote:
 
  In article 53176225$0$29987$c3e8da3$54964...@news.astraweb.com,
   Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:
  
  Physics is the fundamental science, at least according to the
  physicists, and Real Soon Now they'll have a Theory Of Everything,
  something small enough to print on a tee-shirt, which will explain
  everything. At least in principle.
  
  A mathematician, a chemist, and a physicist are arguing the nature of
  prime numbers.  The chemist says, All odd numbers are prime.  Look, I
  can prove it.  Three is prime.  Five is prime.  Seven is prime.  The
  mathematician says, That's nonsense.  Nine is not prime.  The
  physicist looks at him and says, Hmmm, you may be right, but eleven is
  prime, and thirteen is prime.  It appears that within the limits of
  experimental error, all odd number are indeed prime!
 
 They ask a computer programmer to adjudicate who is right, so he writes a 
 program to print out all the primes:
 
 1 is prime
 1 is prime
 1 is prime
 1 is prime
 1 is prime
 ...

So, a mathematician, a biologist, and a physicist are watching a house.  
The physicist says, "It appears to be empty."  Sometime later, a man and 
a woman go into the house.  Shortly after that, the man and the woman 
come back out, with a child.  The biologist says, "They must have 
reproduced."  The mathematician says, "If one more person goes into the 
house, it'll be empty again."
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Gregory Ewing

Chris Angelico wrote:

In constant space, that will produce the sum of two infinite sequences
of digits.


It's not constant space, because the nines counter
can grow infinitely large.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-04 Thread Ian Kelly
On Mon, Mar 3, 2014 at 11:35 PM, Chris Angelico ros...@gmail.com wrote:
 In constant space, that will produce the sum of two infinite sequences
 of digits. (And it's constant time, too, except when it gets a stream
 of nines. Adding three thirds together will produce an infinite loop
 as it waits to see if there'll be anything that triggers an infinite
 cascade of carries.) Now, if there's a way to do that for square
 rooting a number, then the CF notation has a distinct benefit over the
 decimal expansion used here. As far as I know, there's no simple way,
 in constant space and/or time, to progressively yield more digits of a
 number's square root, working in decimal.

The code for that looks like this:

import math

def cf_sqrt(n):
    """Yield the terms of the square root of n as a continued fraction."""
    m = 0
    d = 1
    a = a0 = floor_sqrt(n)
    while True:
        yield a
        next_m = d * a - m
        next_d = (n - next_m * next_m) // d
        if next_d == 0:
            break
        next_a = (a0 + next_m) // next_d
        m, d, a = next_m, next_d, next_a


def floor_sqrt(n):
    """Return the integer part of the square root of n."""
    n = int(n)
    if n == 0: return 0
    lower = 2 ** int(math.log(n, 2) // 2)
    upper = lower * 2
    while upper - lower > 1:
        mid = (upper + lower) // 2
        if n < mid * mid:
            upper = mid
        else:
            lower = mid
    return lower


The floor_sqrt function is merely doing a simple binary search and
could probably be optimized, but then it's only called once during
initialization anyway.  The meat of the loop, as you can see, is just
a constant amount of integer arithmetic.  If it were desired to halt
once the continued fraction starts to repeat, that would just be a
matter of checking whether the triple (m, d, a) has been seen already.
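
For example, with the two functions above, the familiar continued
fraction for the square root of 2, [1; 2, 2, 2, ...], drops straight
out:

from itertools import islice

print(list(islice(cf_sqrt(2), 8)))   # [1, 2, 2, 2, 2, 2, 2, 2]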

Going back to your example of adding generated digits though, I don't
know how to add two continued fractions together without evaluating
them.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-04 Thread Ian Kelly
On Tue, Mar 4, 2014 at 4:19 AM, Ian Kelly ian.g.ke...@gmail.com wrote:
 def cf_sqrt(n):
 Yield the terms of the square root of n as a continued fraction.
m = 0
 d = 1
 a = a0 = floor_sqrt(n)
 while True:
 yield a
 next_m = d * a - m
 next_d = (n - next_m * next_m) // d
 if next_d == 0:
 break
 next_a = (a0 + next_m) // next_d
 m, d, a = next_m, next_d, next_a

Sorry, all that next business is totally unnecessary.  More simply:

def cf_sqrt(n):
    """Yield the terms of the square root of n as a continued fraction."""
    m = 0
    d = 1
    a = a0 = floor_sqrt(n)
    while True:
        yield a
        m = d * a - m
        d = (n - m * m) // d
        if d == 0:
            break
        a = (a0 + m) // d
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Chris Angelico
On Tue, Mar 4, 2014 at 10:05 PM, Gregory Ewing
greg.ew...@canterbury.ac.nz wrote:
 Chris Angelico wrote:

 In constant space, that will produce the sum of two infinite sequences
 of digits.


 It's not constant space, because the nines counter
 can grow infinitely large.

Okay, okay, technically yes. But the counter can go a long way up
before it takes up any additional space, so all that's really saying
is that this has a major flaw with anything that produces a long
stream of nines. It can't tell the difference between
.8 and .[11] where the 11
suddenly carries and it has to flip it all back.

Anyway, that was like ten minutes' knocking-together work, you can't
expect it to be perfect. I'm amazed it even worked. :)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Marko Rauhamaa
Chris Angelico ros...@gmail.com:

 As far as I know, there's no simple way, in constant space and/or
 time, to progressively yield more digits of a number's square root,
 working in decimal.

I don't know why the constant space/time requirement is crucial. Anyway,
producing more digits is simple: http://nrich.maths.org/5955.

I believe producing the nth digit is O(n) in time and space.

Still, there's more to arithmetics than that. For example, if you have
two generated decimal expansions, you don't have an effective algorithm
to generate the decimal expansion of their ratio. That's because there's
no effective algorithm to decide if a < b.
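
(A sketch of why, mine rather than Marko's: the obvious comparator reads
digits until the two streams differ, so it can never return when the two
numbers happen to be equal.)

from itertools import repeat

def less_than(a_digits, b_digits):
    # compare two lazily generated decimal expansions digit by digit;
    # terminates only when the streams first differ
    for da, db in zip(a_digits, b_digits):
        if da != db:
            return da < db

# less_than(repeat(3), repeat(3)) would loop forever: 0.333... vs 0.333...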


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Chris Angelico
On Wed, Mar 5, 2014 at 6:49 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 Chris Angelico ros...@gmail.com:

 As far as I know, there's no simple way, in constant space and/or
 time, to progressively yield more digits of a number's square root,
 working in decimal.

 I don't know why the constant space/time requirement is crucial. Anyway,
 producing more digits simple: URL: http://nrich.maths.org/5955.

 I believe producing the nth digit is O(n) in time and space.

The reason for striving for constant space/time is because the obvious
method (cut-and-try) is already O(n) for the nth digit, which means
it's quadratic on the number of digits desired. That gets pretty
nasty. So what I was asking was: By representing values as continued
fractions rather than as decimal digits, are you able to perform a
straight-forward transformation that produces the square root, in
constant time (which means linear in the length)? And I guess the
answer's no. CF representation doesn't have the advantage I was
wondering about.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Oscar Benjamin
On 4 March 2014 19:58, Chris Angelico ros...@gmail.com wrote:
 On Wed, Mar 5, 2014 at 6:49 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 Chris Angelico ros...@gmail.com:

 As far as I know, there's no simple way, in constant space and/or
 time, to progressively yield more digits of a number's square root,
 working in decimal.

 I don't know why the constant space/time requirement is crucial. Anyway,
 producing more digits simple: URL: http://nrich.maths.org/5955.

 I believe producing the nth digit is O(n) in time and space.

 The reason for striving for constant space/time is because the obvious
 method (cut-and-try) is already O(n) for the nth digit, which means
 it's quadratic on the number of digits desired. That gets pretty
 nasty.

I don't quite follow your reasoning here. By cut-and-try do you mean
bisection? If so it gives the first N decimal digits in N*log2(10)
iterations. However each iteration requires a multiply and when the
number of digits N becomes large the multiplication is worse than
linear. So the result is something like N**2 log(N)log(log(N)),

To me the obvious method is Newton iteration which takes O(sqrt(N))
iterations to obtain N digits of precision. This brings the above
complexity below quadratic:

#!/usr/bin/env python

from decimal import Decimal as D, localcontext

def sqrt(y, prec=1000):
    '''Solve x**2 = y'''
    assert y > 0
    eps = D(10) ** -(prec + 5)
    x = D(y)
    with localcontext() as ctx:
        ctx.prec = prec + 10
        while x ** 2 - y > x * eps:
            x = (x + y/x) / 2
    return x

print(sqrt(2))

Some modification would be required to handle a situation where it
ends in a run of nines or zeros if you really care about the exact
digits rather than having a bounded error.


Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Marko Rauhamaa
Oscar Benjamin oscar.j.benja...@gmail.com:

 To me the obvious method is Newton iteration which takes O(sqrt(N))
 iterations to obtain N digits of precision. This brings the above
 complexity below quadratic:

 #!/usr/bin/env python

 from decimal import Decimal as D, localcontext

 def sqrt(y, prec=1000):
 '''Solve x**2 = y'''
 assert y > 0
 eps = D(10) ** -(prec + 5)
 x = D(y)
 with localcontext() as ctx:
 ctx.prec = prec + 10
 while x ** 2 - y > x * eps:
 x = (x + y/x) / 2
 return x

 print(sqrt(2))

At a quick glance, I believe x ** 2 is O(N²) and so the total complexity
should be O(N ** 2.5).


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Chris Angelico
On Wed, Mar 5, 2014 at 7:55 AM, Oscar Benjamin
oscar.j.benja...@gmail.com wrote:
 I don't quite follow your reasoning here. By cut-and-try do you mean
 bisection? If so it gives the first N decimal digits in N*log2(10)
 iterations. However each iteration requires a multiply and when the
 number of digits N becomes large the multiplication is worse than
 linear. So the result is something like N**2 log(N)log(log(N)),

By cut and try I'm talking about the really REALLY simple algorithm
for calculating square roots. It's basically brute force.

epsilon = 0.0001
def sqrt(n):
    guess1, guess2 = 1, n
    while abs(guess1-guess2) > epsilon:
        guess1 = n/guess2
        guess2 = (guess1 + guess2)/2
    return guess1

It's generally going to take roughly O(n*n) time to generate n digits,
give or take. That's the baseline against which anything else can be
compared. There are plenty of better ways to calculate them.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Oscar Benjamin
On 4 March 2014 21:18, Chris Angelico ros...@gmail.com wrote:
 On Wed, Mar 5, 2014 at 7:55 AM, Oscar Benjamin
 oscar.j.benja...@gmail.com wrote:
 I don't quite follow your reasoning here. By cut-and-try do you mean
 bisection? If so it gives the first N decimal digits in N*log2(10)
 iterations. However each iteration requires a multiply and when the
 number of digits N becomes large the multiplication is worse than
 linear. So the result is something like N**2 log(N)log(log(N)),

 By cut and try I'm talking about the really REALLY simple algorithm
 for calculating square roots. It's basically brute force.

 epsilon = 0.0001
 def sqrt(n):
 guess1, guess2 = 1, n
 while abs(guess1-guess2) > epsilon:
 guess1 = n/guess2
 guess2 = (guess1 + guess2)/2
return guess1

That's the exact same algorithm I showed! How on earth would you call
that brute force?

 It's generally going to take roughly O(n*n) time to generate n digits,
 give or take.

It does not take O(n*n) time. This is Newton iteration and for
well-behaved problems such as this it generates more than n digits
after n iterations. I modified my code to show the error (x**2 - y) at
each iteration:

$ python3.3 root.py
2
0.2
0.007
0.06
5E-12
3E-24
8E-49
8E-98
8E-196
9E-392
1E-783

The number of correct digits doubles at each iteration so after n
iterations you have 2**n digits (I misstated this as n**2 before).
This means that it takes log(N) iterations to get N digits. See here
for more:
http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Babylonian_method

See also the section below that:
http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Digit-by-digit_calculation

 That's the baseline against which anything else can be
 compared. There are plenty of better ways to calculate them.

Such as?


Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Oscar Benjamin
On 4 March 2014 21:05, Marko Rauhamaa ma...@pacujo.net wrote:
 Oscar Benjamin oscar.j.benja...@gmail.com:

 To me the obvious method is Newton iteration which takes O(sqrt(N))
 iterations to obtain N digits of precision. This brings the above
 complexity below quadratic:

 #!/usr/bin/env python

 from decimal import Decimal as D, localcontext

 def sqrt(y, prec=1000):
 '''Solve x**2 = y'''
 assert y > 0
 eps = D(10) ** -(prec + 5)
 x = D(y)
 with localcontext() as ctx:
 ctx.prec = prec + 10
 while x ** 2 - y > x * eps:
 x = (x + y/x) / 2
 return x

 print(sqrt(2))

 At a quick glance, I believe x ** 2 is O(N²) and so the total complexity
 should be O(N ** 2.5).

x**2 is just a multiplication which can be done in better than O(N**2):
http://en.wikipedia.org/wiki/Multiplication_algorithm#Fast_multiplication_algorithms_for_large_inputs


Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Chris Angelico
On Wed, Mar 5, 2014 at 9:02 AM, Oscar Benjamin
oscar.j.benja...@gmail.com wrote:
 On 4 March 2014 21:18, Chris Angelico ros...@gmail.com wrote:
 On Wed, Mar 5, 2014 at 7:55 AM, Oscar Benjamin
 oscar.j.benja...@gmail.com wrote:
 I don't quite follow your reasoning here. By cut-and-try do you mean
 bisection? If so it gives the first N decimal digits in N*log2(10)
 iterations. However each iteration requires a multiply and when the
 number of digits N becomes large the multiplication is worse than
 linear. So the result is something like N**2 log(N)log(log(N)),

 By cut and try I'm talking about the really REALLY simple algorithm
 for calculating square roots. It's basically brute force.

 epsilon = 0.0001
 def sqrt(n):
 guess1, guess2 = 1, n
 while abs(guess1-guess2) > epsilon:
 guess1 = n/guess2
 guess2 = (guess1 + guess2)/2
return guess1

 That's the exact same algorithm I showed! How on earth would you call
 that brute force?

It uses a lot of division. There are various refinements that can be
done that replace some of that division with multiplication, but I'd
have to go do some research to figure it out.

This is the purest form of attempted-division algorithm. If you're
describing it on a blackboard, you would write it pretty much like
this. At each iteration, you have to divide by a number that's n
digits long, and then do some additional arithmetic.

 It's generally going to take roughly O(n*n) time to generate n digits,
 give or take.

 It does not take O(n*n) time. This is Newton iteration and for
 well-behaved problems such as this it generates more than n digits
 after n iterations. I modified my code to show the error (x**2 - y) at
 each iteration:

 $ python3.3 root.py
 2
 0.2
 0.007
 0.06
 5E-12
 3E-24
 8E-49
 8E-98
 8E-196
 9E-392
 1E-783

 The number of correct digits doubles at each iteration so after n
 iterations you have 2**n digits (I misstated this as n**2 before).
 This means that it takes log(N) iterations to get N digits.

It seems I'm partly mistaken, though not entirely. Let's compare two
versions. In the first, you set the precision (I'm talking in terms of
REXX's NUMERIC DIGITS statement - anything beyond this many digits
will be rounded (and represented exponentially, if necessary); I'm not
sure if decimal.Decimal precision works this way) such that you get 10
digits. Each iteration requires division by a 10-digit number, which
is an operation that takes a certain amount of time; and it's going to
take some number of iterations to get to the final answer.

Second version, you set the precision so you get 20 digits. Now, it's
going to take you approximately one more iteration to get to the final
answer. (This bit I was mistaken on. I thought it would take something
like 25% more or 50% more iterations.) But each iteration will take
longer. The complexity of division depends on the algorithm - grade
school long division would be O(n) with a fixed-length dividend, I
think, but you could probably pull that down a bit.

So that's why I said it'd be very roughly O(n*n) - because the
division in each step is O(n), and I thought it'd take O(n) steps.
Turns out it's O(n*log(n)), which is a lot better.

 That's the baseline against which anything else can be
 compared. There are plenty of better ways to calculate them.

 Such as?

Improved versions of the above, and I was under the impression that
there were some completely different techniques that converged a lot
more quickly. But it may be that I was mistaken, as I'd been expecting
this to converge in O(n) steps. Reducing the amount of division would
speed things up significantly, although it probably won't change the
algorithmic complexity.

So, put it down to misremembering and thinking the simple algorithm
was worse than it actually is :)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Oscar Benjamin
On 4 March 2014 22:18, Chris Angelico ros...@gmail.com wrote:
 On Wed, Mar 5, 2014 at 9:02 AM, Oscar Benjamin
 oscar.j.benja...@gmail.com wrote:
 On 4 March 2014 21:18, Chris Angelico ros...@gmail.com wrote:
 On Wed, Mar 5, 2014 at 7:55 AM, Oscar Benjamin
 oscar.j.benja...@gmail.com wrote:

 epsilon = 0.0001
 def sqrt(n):
     guess1, guess2 = 1, n
     while abs(guess1-guess2) > epsilon:
         guess1 = n/guess2
         guess2 = (guess1 + guess2)/2
     return guess1

 That's the exact same algorithm I showed! How on earth would you call
 that brute force?

 It uses a lot of division. There are various refinements that can be
 done that replace some of that division with multiplication, but I'd
 have to go do some research to figure it out.

There's a description of such a method here:
http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Iterative_methods_for_reciprocal_square_roots

I don't know whether that would work out faster (when using decimal -
for float it would probably be slower).
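
A sketch of what that reciprocal-square-root iteration looks like with decimal
(my own illustration of the linked method's core update x <- x*(3 - y*x*x)/2,
which needs only multiplications and a halving; no claim that it actually beats
the plain loop):

from decimal import Decimal, getcontext

def sqrt_no_division(y, digits=50):
    # Newton iteration for the *reciprocal* square root:
    #     x <- x * (3 - y*x*x) / 2
    # done as a multiply by 0.5, then sqrt(y) = y * x.
    getcontext().prec = digits + 10          # working precision with headroom
    y = Decimal(y)
    half = Decimal("0.5")
    x = Decimal(1.0 / float(y) ** 0.5)       # crude float seed, ~16 digits
    for _ in range(digits.bit_length() + 3): # quadratic convergence
        x = x * (3 - y * x * x) * half
    return +(y * x)                          # sqrt(y), rounded to context precision

print(sqrt_no_division(2, 50))   # 1.414213562373095048801688724209698...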

 Let's compare two
 versions. In the first, you set the precision (I'm talking in terms of
 REXX's NUMERIC DIGITS statement

I have no idea what that is.

- anything beyond this many digits
 will be rounded (and represented exponentially, if necessary); I'm not
 sure if decimal.Decimal precision works this way) such that you get 10
 digits.

With the decimal module if you set the precision to 5 digits then it
basically represents the number in standard form with 5 digits, e.g.:
1.2345 x 10**21.
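
For example (default context, default rounding):

>>> from decimal import Decimal, getcontext
>>> getcontext().prec = 5
>>> Decimal(1) / Decimal(7)
Decimal('0.14286')
>>> Decimal('1234567890') + 1
Decimal('1.2346E+9')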

 Each iteration requires division by a 10-digit number, which
 is an operation that takes a certain amount of time; and it's going to
 take some number of iterations to get to the final answer.

 Second version, you set the precision so you get 20 digits.

If we're talking about 10-20 digits then the decimal module is
overkill: just use float. The speed up from hardware arithmetic will
massively out-weigh any other performance considerations. My version
was intended to produce large numbers of digits which is when the
big-O comes in:

$ python3.3 -m timeit -s 'from root import sqrt' 'sqrt(2, 10)'
10000 loops, best of 3: 22.4 usec per loop
$ python3.3 -m timeit -s 'from root import sqrt' 'sqrt(2, 100)'
10000 loops, best of 3: 59.1 usec per loop
$ python3.3 -m timeit -s 'from root import sqrt' 'sqrt(2, 1000)'
1000 loops, best of 3: 1.15 msec per loop
$ python3.3 -m timeit -s 'from root import sqrt' 'sqrt(2, 10000)'
10 loops, best of 3: 85.9 msec per loop
$ python3.3 -m timeit -s 'from root import sqrt' 'sqrt(2, 100000)'
1 loops, best of 3: 1.59 sec per loop
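
(For context: the sqrt being timed is a Newton iteration done with the decimal
module, apparently taking the number of digits as its second argument. A sketch
of that kind of function -- my reconstruction, not the actual root.py -- would be:)

from decimal import Decimal, localcontext

def sqrt(y, digits):
    # Plain Newton iteration x <- (x + y/x)/2 with the decimal module.
    # A sketch only: assumes y > 0 and that float(y) doesn't overflow.
    with localcontext() as ctx:
        ctx.prec = digits + 5                    # a few guard digits
        y = Decimal(y)
        x = Decimal(float(y) ** 0.5)             # float seed: ~16 correct digits
        for _ in range(digits.bit_length() + 2): # correct digits double each step
            x = (x + y / x) / 2
        return +x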


Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Chris Angelico
On Wed, Mar 5, 2014 at 9:54 AM, Oscar Benjamin
oscar.j.benja...@gmail.com wrote:
 Let's compare two
 versions. In the first, you set the precision (I'm talking in terms of
 REXX's NUMERIC DIGITS statement

 I have no idea what that is.

- anything beyond this many digits
 will be rounded (and represented exponentially, if necessary); I'm not
 sure if decimal.Decimal precision works this way) such that you get 10
 digits.

 With the decimal module if you set the precision to 5 digits then it
 basically represents the number in standard form with 5 digits .e.g:
 1.2345 x 10**21.

That's how NUMERIC DIGITS works, so we're on the same page. I'm not
familiar enough with decimal.Decimal and how precision is configured,
but it seems to function the same way.

 Each iteration requires division by a 10-digit number, which
 is an operation that takes a certain amount of time; and it's going to
 take some number of iterations to get to the final answer.

 Second version, you set the precision so you get 20 digits.

 If we're talking about 10-20 digits then the decimal module is
 overkill: just use float. The speed up from hardware arithmetic will
 massively out-weigh any other performance considerations.

Yeah, I'm just digging into the algorithm. The same concept applies
when going from 100 to 200 digits, or 1000 to 2000, and in each case,
the division will get way slower, but the number of iterations won't
go up as fast as I thought it would.

In theory, it should be possible to do the first few divisions at
lower precision, and scale up as you have need. In practice, would the
churning of precisions mean that you lose all the benefit?
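
A sketch of that idea (the names are mine; whether the context churning costs
more than it saves is exactly the open question):

from decimal import Decimal, localcontext

def sqrt_scaled(y, digits):
    # Newton's method, doubling the working precision before each step,
    # so the early (inaccurate) iterations are done on short numbers.
    # Sketch only -- no claim about whether the switching pays off.
    y = Decimal(y)
    x = Decimal(float(y) ** 0.5)        # float guess: ~16 good digits
    prec = 16
    while prec < digits + 5:
        prec = min(2 * prec, digits + 5)
        with localcontext() as ctx:
            ctx.prec = prec
            x = (x + y / x) / 2         # each step roughly doubles correct digits
    with localcontext() as ctx:
        ctx.prec = digits
        return +x                       # round to the digits asked for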

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Dave Angel
 Oscar Benjamin oscar.j.benja...@gmail.com Wrote in message:
 On 4 March 2014 21:18, Chris Angelico ros...@gmail.com wrote:

 
 It does not take O(n*n) time. This is Newton iteration and for
 well-behaved problems such as this it generates more than n digits
 after n iterations. I modified my code to show the error (x**2 - y) at
 each iteration:
 
 $ python3.3 root.py
 2
 0.2
 0.007
 0.06
 5E-12
 3E-24
 8E-49
 8E-98
 8E-196
 9E-392
 1E-783
 
 The number of correct digits doubles at each iteration so after n
 iterations you have 2**n digits (I misstated this as n**2 before).
 This means that it takes log(N) iterations to get N digits. See here
 for more:
 http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Babylonian_method
 
 See also the section below that:
 http://en.wikipedia.org/wiki/Methods_of_computing_square_roots#Digit-by-digit_calculation
 
 That's the baseline against which anything else can be
 compared. There are plenty of better ways to calculate them.
 
 Such as?
 

One problem with complexity claims is that it's easy to miss some
 contributing time eaters. I haven't done any measuring on modern
 machines nor in python, but I'd assume that multiplies take
 *much* longer for large integers, and that divides are much
 worse. So counting iterations isn't the whole story.
 

On the assumption that division by 2 is very fast,  and that a
 general multiply isn't too bad, you could improve on Newton by
 observing that the slope is 2.

    err = n - guess * guess
    guess += err/2

Some 37 years ago I microcoded  a math package which included
 square root.  All the math was decimal, and there was no hardware
 multiply or divide. The algorithm I came up with generated the
 answer one digit at a time, with no subsequent rounding needed.
 And it took just a little less time than two divides. For that
 architecture,  Newton's method would've been too
 slow.

Incidentally, the algorithm did no divides, not even by 2. No
 multiplies either.  Just repeated subtraction,  sorta like divide
 was done.

If anyone is curious,  I'll be glad to describe the algorithm; 
 I've never seen it published, before or since.  I got my
 inspiration from a method used in mechanical,  non-motorized, 
 adding machines.  My father had shown me that approach in the
 50's. 
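
For the curious, here is one classical subtraction-only, digit-at-a-time scheme:
the "keep subtracting odd numbers" trick that mechanical calculators used.
Whether it matches Dave's microcoded algorithm I can't say; this is just the
textbook form of the idea:

def sqrt_digits(n, ndigits):
    # Digit-by-digit decimal square root of a non-negative integer n, using
    # only additions, subtractions, doublings and decimal shifts.  Yields the
    # digits of the integer part first, then ndigits more (fractional) digits.
    s = str(n)
    if len(s) % 2:
        s = "0" + s
    pairs = [int(s[i:i+2]) for i in range(0, len(s), 2)]
    pairs += [0] * ndigits              # extra pairs -> fractional digits

    remainder = 0
    root20 = 0                          # 20 * (the root found so far)
    for pair in pairs:
        remainder = remainder * 100 + pair
        digit = 0
        trial = root20 + 1              # subtrahends: 20p+1, 20p+3, 20p+5, ...
        while remainder >= trial:
            remainder -= trial
            trial += 2
            digit += 1
        root20 = (root20 + 2 * digit) * 10
        yield digit

print(''.join(map(str, sqrt_digits(2, 9))))    # 1414213562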



-- 
DaveA

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-04 Thread Albert van der Horst
In article mailman.7702.1393932047.18130.python-l...@python.org,
Ian Kelly  ian.g.ke...@gmail.com wrote:
On Mon, Mar 3, 2014 at 11:35 PM, Chris Angelico ros...@gmail.com wrote:
 In constant space, that will produce the sum of two infinite sequences
 of digits. (And it's constant time, too, except when it gets a stream
 of nines. Adding three thirds together will produce an infinite loop
 as it waits to see if there'll be anything that triggers an infinite
 cascade of carries.) Now, if there's a way to do that for square
 rooting a number, then the CF notation has a distinct benefit over the
 decimal expansion used here. As far as I know, there's no simple way,
 in constant space and/or time, to progressively yield more digits of a
 number's square root, working in decimal.

The code for that looks like this:

import math

def cf_sqrt(n):
    """Yield the terms of the square root of n as a continued fraction."""
    m = 0
    d = 1
    a = a0 = floor_sqrt(n)
    while True:
        yield a
        next_m = d * a - m
        next_d = (n - next_m * next_m) // d
        if next_d == 0:
            break
        next_a = (a0 + next_m) // next_d
        m, d, a = next_m, next_d, next_a


def floor_sqrt(n):
    """Return the integer part of the square root of n."""
    n = int(n)
    if n == 0: return 0
    lower = 2 ** int(math.log(n, 2) // 2)
    upper = lower * 2
    while upper - lower > 1:
        mid = (upper + lower) // 2
        if n < mid * mid:
            upper = mid
        else:
            lower = mid
    return lower


The floor_sqrt function is merely doing a simple binary search and
could probably be optimized, but then it's only called once during
initialization anyway.  The meat of the loop, as you can see, is just
a constant amount of integer arithmetic.  If it were desired to halt
once the continued fraction starts to repeat, that would just be a
matter of checking whether the triple (m, d, a) has been seen already.
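
For illustration, a sketch of that idea (reusing floor_sqrt above; the function
name and the head/period split are mine, not from the original post):

def cf_sqrt_period(n):
    # Same loop as cf_sqrt above, but remember each (m, d, a) triple and
    # stop as soon as one comes around again.  For square roots the cycle
    # starts right after the first term, so the terms split neatly into
    # the leading term and the repeating block.
    m, d = 0, 1
    a = a0 = floor_sqrt(n)
    seen = set()
    terms = []
    while (m, d, a) not in seen:
        seen.add((m, d, a))
        terms.append(a)
        next_m = d * a - m
        next_d = (n - next_m * next_m) // d
        if next_d == 0:                 # n was a perfect square: no period
            return terms, []
        a = (a0 + next_m) // next_d
        m, d = next_m, next_d
    return terms[:1], terms[1:]

print(cf_sqrt_period(14))   # ([3], [1, 2, 1, 6])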

Going back to your example of adding generated digits though, I don't
know how to add two continued fractions together without evaluating
them.

That is highly non-trivial indeed. See the gosper.txt reference
I gave in another post.

Groetjes Albert
-- 
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spearc.xs4all.nl =n http://home.hccnet.nl/a.w.m.van.der.horst

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-04 Thread Albert van der Horst
In article mailman.7687.1393902132.18130.python-l...@python.org,
Chris Angelico  ros...@gmail.com wrote:
On Tue, Mar 4, 2014 at 1:45 PM, Albert van der Horst
alb...@spenarnc.xs4all.nl wrote:
No, the Python built-in float type works with a subset of real numbers:

 To be more precise: a subset of the rational numbers, those with a 
 denominator
 that is a power of two.

And no more than N bits (53 in a 64-bit float) in the numerator, and
the denominator between the limits of the exponent. (Unless it's
subnormal. That adds another set of small numbers.) It's a pretty
tight set of restrictions, and yet good enough for so many purposes.

But it's a far cry from all real numbers. Even allowing for
continued fractions adds only some more; I don't think you can
represent surds that way.

Adding cf's adds all computable numbers in infinite precision.
However that is not even a drop in the ocean, as the computable
numbers have measure zero.
A cf object yielding its coefficients amounts to a program that generates
an infinite amount of data (in infinite time), so it is not
very surprising it can represent any computable number.

Pretty humbling really.


ChrisA

Groetjes Albert
-- 
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spearc.xs4all.nl =n http://home.hccnet.nl/a.w.m.van.der.horst

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Albert van der Horst
In article 87fvnm7q1n@elektro.pacujo.net,
Marko Rauhamaa  ma...@pacujo.net wrote:
Chris Angelico ros...@gmail.com:

 On Fri, Feb 14, 2014 at 1:00 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 Well, if your idealized, infinite, digital computer had ℵ₁ bytes of RAM
 and ran at ℵ₁ hertz and Python supported transfinite iteration, you
 could easily do reals:

 for x in continuum(0, max(1, y)):

 How exactly do you iterate over a continuum, with a digital computer?

How digital our idealized computers are is a matter for a debate.
However, iterating over the continuum is provably possible:

  http://en.wikipedia.org/wiki/Transfinite_induction

 it would take a finite amount of time to assign to x the next
 number, ergo your algorithm can't guarantee to finish in finite time.

My assumption was you could execute ℵ₁ statements per second. That
doesn't guarantee a finite finish time but would make it possible. That
is because

   ℵ₁ * ℵ₁ = ℵ₁ = ℵ₁ * 1

This computer is definitely more powerful than a Turing machine, which
only has ℵ₀ bytes of RAM and thus can't even store an arbitrary real
value in memory.

You're very much off the track here. A Turing machine is an abstraction
for a computer where the limitations of size are gone.
The most obvious feature of a Turing machine is an infinite tape.
A Turing machine happily calculates Ackermann functions long after
a real machine runs out of memory to represent it, with as a result
a number of ones on that tape.
But it only happens in the mathematician's mind.



Marko
-- 
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spearc.xs4all.nl =n http://home.hccnet.nl/a.w.m.van.der.horst

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-04 Thread Steven D'Aprano
On Wed, 05 Mar 2014 02:15:14 +, Albert van der Horst wrote:

 Adding cf's adds all computable numbers in infinite precision. However
 that is not even a drop in the ocean, as the computable numbers have
 measure zero.

On the other hand, it's not really clear that the non-computable numbers 
are useful or necessary for anything. They exist as mathematical 
abstractions, but they'll never be the result of any calculation or 
measurement that anyone might do.



-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-04 Thread Rustom Mody
On Wednesday, March 5, 2014 9:11:13 AM UTC+5:30, Steven D'Aprano wrote:
 On Wed, 05 Mar 2014 02:15:14 +, Albert van der Horst wrote:

  Adding cf's adds all computable numbers in infinite precision. However
  that is not even a drop in the ocean, as the computable numbers have
  measure zero.

 On the other hand, it's not really clear that the non-computable numbers 
 are useful or necessary for anything. They exist as mathematical 
 abstractions, but they'll never be the result of any calculation or 
 measurement that anyone might do.

There are even more extreme versions of this amounting to roughly this view:
Any infinity supposedly 'larger' than the natural numbers is a nonsensical 
notion.

See eg
http://en.wikipedia.org/wiki/Controversy_over_Cantor%27s_theory

and Weyl/Polya bet (pg 10 of 
http://research.microsoft.com/en-us/um/people/gurevich/Opera/123.pdf )

I cannot find the exact quote so from memory Weyl says something to this effect:

Cantor's diagonalization PROOF is not in question.
Its CONCLUSION very much is.
The classical/platonic mathematician (subject to wooly thinking) concludes that 
the real numbers are a superset of the integers

The constructivist mathematician (who supposedly thinks clearly) only concludes
the obvious, viz. that real numbers cannot be enumerated.

To go from 'cannot be enumerated' to 'is a proper superset of' requires the 
assumption of 'completed infinities' and that is not math but theology
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-04 Thread Roy Smith
In article c39d5b44-6c7b-40d1-bbb5-791a36af6...@googlegroups.com,
 Rustom Mody rustompm...@gmail.com wrote:

 I cannot find the exact quote so from memory Weyl says something to this 
 effect:
 
 Cantor's diagonalization PROOF is not in question.
 Its CONCLUSION very much is.
 The classical/platonic mathematician (subject to wooly thinking) concludes 
 that 
 the real numbers are a superset of the integers
 
 The constructvist mathematician (who supposedly thinks clearly) only 
 concludes
 the obvious, viz that real numbers cannot be enumerated
 
 To go from 'cannot be enumerated' to 'is a proper superset of' requires the 
 assumption of 'completed infinities' and that is not math but theology

I stopped paying attention to mathematicians when they tried to convince 
me that the sum of all natural numbers is -1/12.  Sure, you can 
manipulate the symbols in a way which is consistent with some set of 
rules that we believe govern the legal manipulation of symbols, but it 
just plain doesn't make sense.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Ben Finney
Roy Smith r...@panix.com writes:

 I stopped paying attention to mathematicians when they tried to convince 
 me that the sum of all natural numbers is -1/12.

I stopped paying attention to a particular person when they said “I
stopped paying attention to an entire field of study because one
position expressed by some practitioners was disagreeable to me”.

Would you think “I stopped listening to logicians when some of them
expressed Zeno's paradox of the impossibility of motion” to be a good
justification for ignoring the entire field of logic?

Rather, a more honest response is to say why that position is incorrect,
and not dismiss the entire field of study merely for a disagreement with
that position.

-- 
 \  “Life does not cease to be funny when people die any more than |
  `\  it ceases to be serious when people laugh.” —George Bernard Shaw |
_o__)  |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Rustom Mody
On Wednesday, March 5, 2014 10:07:44 AM UTC+5:30, Ben Finney wrote:
 Roy Smith writes:

  I stopped paying attention to mathematicians when they tried to convince 
  me that the sum of all natural numbers is -1/12.

 I stopped paying attention to a particular person when they said I
 stopped paying attention to an entire field of study because one
 position expressed by some practicioners was disagreeable to me.

In general this is a correct response
In this particular case (apart from Roy speaking tongue-in-cheek)
it (Roy's viewpoint) is more appropriate and central to our field than you
perhaps realize:

Nonsensical results believed in by a small minority (Cantor's time)
became full scale war between platonists (Hilbert) and constructivists
(Brouwer) a generation later.

Gödel staunchly in Hilbert camp made his incompleteness theorem to rebut the
constructivists

Turing unable to disagree with Gödel's result but disagreeing with platonic
philosophy made his 'machine'. The negative result that he did not like but
had to admit was uncomputability/undecidability. However he trumped Gödel in
making a 'universal' machine

And so we are here :-)

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-03-04 Thread Roy Smith
In article mailman.7792.1393994283.18130.python-l...@python.org,
 Ben Finney ben+pyt...@benfinney.id.au wrote:

 Roy Smith r...@panix.com writes:
 
  I stopped paying attention to mathematicians when they tried to convince 
  me that the sum of all natural numbers is -1/12.
 
 I stopped paying attention to a particular person when they said “I
 stopped paying attention to an entire field of study because one
 position expressed by some practicioners was disagreeable to me”.
 
 Would you think “I stopped listening to logicians when some of them
 expressed Zeno's paradox of the impossibility of motion” to be a good
 justification for ignoring the entire field of logic?
 
 Rather, a more honest response is to say why that position is incorrect,
 and not dismiss the entire field of study merely for a disagreement with
 that position.

I *was* partly joking (but only partly).

Still, there's lots of stuff mathematicians do which I don't understand.  
I cannot understand, for example, Andrew Wiles's proof of Fermat's 
Last Theorem.  I can't even get past the first few paragraphs of the 
Wikipedia article.  But, that doesn't sour me on the proof.  I can 
accept that there are things I don't understand.  I don't know how to 
speak Chinese.  I don't know how to paint a flower.  I don't know how to 
run a mile in 4 minutes.  But I accept that there are people who do know 
how to do those things.

I can watch a friend pick up a piece of paper, a brush, and some 
watercolors and 5 minutes later, she's got a painting of a flower.  I 
watched her hands hold the brush and move it over the paper.  There's 
nothing mystical about what she did.  Her hands made no motions which 
are fundamentally impossible for my hands to make, yet I know that my 
attempt at reproducing her work would not result in a painting of a 
flower.

But, as I watch the -1/12 proof unfold, I don't get the same feeling.  I 
understand every step.  I wouldn't have thought to manipulate the 
symbols that way, but once I've seen it done, I can reproduce the steps 
myself.  It's all completely understandable.  The only problem is, it 
results in a conclusion which makes no sense.  I can *prove* that it 
makes no sense, by manipulating the symbols in different ways.  The sum 
of any two positive numbers must be positive.  I can group them and add 
them up any way I want and that's still true.

But, here I've got some guy telling me it's not true.  If you just slide 
this over that way, and add these parts up this way, it's -1/12.  That 
does not compute.  But it doesn't not compute in the sense of, that's 
so complicated, I have no idea what you did, but in the sense of that's 
so simple, I know exactly what you did, and it's bullshit :-)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-04 Thread Steven D'Aprano
On Tue, 04 Mar 2014 23:25:37 -0500, Roy Smith wrote:

 I stopped paying attention to mathematicians when they tried to convince
 me that the sum of all natural numbers is -1/12.  

I'm pretty sure they did not. Possibly a physicist may have tried to tell 
you that, but most mathematicians consider physicists to be lousy 
mathematicians, and the mere fact that their results seem to actually 
work in practice is an embarrassment for the entire universe. A 
mathematician would probably have said that the sum of all natural 
numbers is divergent and therefore there is no finite answer.

Well, that is, apart from mathematicians like Euler and Ramanujan. When 
people like them tell you something, you better pay attention.

We have an intuitive understanding of the properties of addition. You 
can't add 1000 positive whole numbers and get a negative fraction, that's 
obvious. But that intuition only applies to *finite* sums. They don't 
even apply to infinite *convergent* series, and they're *easy*. Remember 
Zeno's Paradoxes? People doubted that the convergent series:

1/2 + 1/4 + 1/8 + 1/16 + ... 

added up to 1 for the longest time, even though they could see with their 
own eyes that it had to. Until they worked out what *infinite* sums 
actually meant, their intuitions were completely wrong. This is a good 
lesson for us all.

The sum of all the natural numbers is a divergent infinite series, so we 
shouldn't expect that our intuitions hold. We can't add it up as if it 
were a convergent series, because it's not convergent. Nobody disputes 
that. But perhaps there's another way?

Normally mathematicians will tell you that divergent series don't have a 
total. That's because often the total you get can vary depending on how 
you add them up. The classic example is summing the infinite series:

1 - 1 + 1 - 1 + 1 - ... 

Depending on how you group them, you can get:

(1 - 1) + (1 - 1) + (1 - 1) ...  
= 0 + 0 + 0 + ... = 0

or you can get:

1 - (1 - 1 + 1 - 1 + ... ) 
= 1 - (1 - 1) - (1 - 1) - ...
= 1 - 0 - 0 - 0 ... 
= 1

Or you can do a neat little trick where we define the sum as x:

x = 1 - 1 + 1 - 1 + 1 - ... 
x = 1 - (1 - 1 + 1 - 1 + ... )
x = 1 - x
2x = 1
x = 1/2


So at first glance, summing a divergent series is like dividing by zero. 
You get contradictory results, at least in this case.

But that's not necessarily always the case. You do have to be careful 
when summing divergent series, but that doesn't always mean you can't do 
it and get a meaningful answer. Sometimes you can, sometimes you can't, 
it depends on the specific series. With the sum of the natural numbers, 
rather than getting three different results from three different methods, 
mathematicians keep getting the same -1/12 result using various methods. 
That's a good hint that there is something logically sound going on here, 
even if it seems unintuitive.

Remember Zeno's Paradoxes? Our intuitions about equality and plus and 
sums of numbers don't apply to infinite series. We should be at least 
open to the possibility that while all the *finite* sums:

1 + 2
1 + 2 + 3
1 + 2 + 3 + 4
...

and so on sum to positive whole numbers, that doesn't mean that the 
*infinite* sum has to total to a positive whole number. Maybe that's not 
how addition works. I don't know about you, but I've never personally 
added up an infinite number of every-increasing quantities to see what 
the result is. Maybe it is a negative fraction. (I'd say try it and 
see, but I don't have an infinite amount of time to spend on it.)

And in fact that's exactly what seems to be case here. Mathematicians can 
demonstrate an identity (that is, equality) between the divergent sum of 
the natural numbers with the zeta function ζ(-1), and *that* can be 
worked out independently, and equals -1/12.
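
(If you want to poke at that value numerically, mpmath will evaluate the
continuation directly -- assuming you have mpmath installed:)

# Quick numerical check that the analytically continued zeta gives -1/12 at s = -1.
from mpmath import mp, zeta

mp.dps = 30                 # work with 30 significant digits
print(zeta(-1))             # -> -0.08333..., i.e. -1/12
print(mp.mpf(-1) / 12)      # the same value, for comparison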

So there are a bunch of different ways to show that the divergent sum 
adds up to -1/12, some of them more rigorous than others. The zeta 
function method is about as rigorous as they come. The addition of an 
infinite number of things behaves differently than the addition of finite 
numbers of things.

More here:

http://scitation.aip.org/content/aip/magazine/physicstoday/news/10.1063/PT.5.8029

http://math.ucr.edu/home/baez/week126.html

http://en.wikipedia.org/wiki/1_+_2_+_3_+_4_+_%E2%8B%AF

and even here:

http://scientopia.org/blogs/goodmath/2014/01/20/oy-veh-power-series-analytic-continuations-and-riemann-zeta/

where a mathematician tries *really hard* to discredit the idea that the 
sum equals -1/12, but ends up proving that it does. So he simply plays a 
linguistic sleight of hand and claims that despite the series and the zeta 
function being equal, they're not *actually* equal.

In effect, the author Mark Chu-Carroll in the GoodMath blog above wants 
to make the claim that the divergent sum is not equal to ζ(-1), but 
everywhere you find that divergent sum in your calculations you can rub 
it out and replace it with ζ(-1), which is -1/12. In other words, he's 

Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-03 Thread Albert van der Horst
In article mailman.6735.1392194885.18130.python-l...@python.org,
Chris Angelico  ros...@gmail.com wrote:
On Wed, Feb 12, 2014 at 7:17 PM, Ben Finney ben+pyt...@benfinney.id.au wrote:
 Chris Angelico ros...@gmail.com writes:

 I have yet to find any computer that works with the set of real
 numbers in any way. Never mind optimization, they simply cannot work
 with real numbers.

 Not *any* computer? Not in *any* way? The Python built-in ‘float’ type
 “works with the set of real numbers”, in a way.

No, the Python built-in float type works with a subset of real numbers:

To be more precise: a subset of the rational numbers, those with a denominator
that is a power of two.

>>> float("pi")
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    float("pi")
ValueError: could not convert string to float: 'pi'
>>> float("π")
Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    float("π")
ValueError: could not convert string to float: 'π'

Same goes for fractions.Fraction and [c]decimal.Decimal. All of them
are restricted to some subset of rational numbers, not all reals.

 The URL:http://docs.python.org/2/library/numbers.html#numbers.Real ABC
 defines behaviours for types implementing the set of real numbers.

 What specific behaviour would, for you, qualify as “works with the set
 of real numbers in any way”?

Being able to represent surds, pi, e, etc, for a start. It'd
theoretically be possible with an algebraic notation (eg by carrying
through some representation like 2*pi rather than 6.28), but
otherwise, irrationals can't be represented with finite storage and a
digit-based system.

An interesting possibility is working with rules that generate the
continued fraction sequence of a real number. Say yield() gives the
next coefficient (or the next hex digit).
It was generally believed that summing two numbers in their cf representation
was totally impractical because it required conversion to a rational number.
OTOH if we consider a cf as an ongoing process, the situation is much better.
Summing would be a process that yields coefficients of the sum, and you could
just stop when you've  enough precision. Fascinating stuff.

It is described in a self-contained, typewriter-style document gosper.txt
that is found on the web in several places e.g.

http://home.strw.leidenuniv.nl/~gurkan/gosper.pdf
I have a gosper.txt, don't know from where.

It really is a cookbook; one could build a Python implementation from
there without being overly math savvy. I'd love to hear if
someone does it.

(In principle a coefficient of a cf can overflow machine precision, but
that has never been observed in the wild. A considerable percentage
of the coefficients for a random number are ones or otherwise small.
The golden ratio has all ones.)
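
(As a toy illustration of the "cf as an ongoing process" idea -- nothing
Gosper-ish, just the Euclidean algorithm applied to a rational:)

from fractions import Fraction

def cf_terms(x):
    # Continued-fraction coefficients of a rational number, produced one at
    # a time -- the Euclidean algorithm in disguise.
    x = Fraction(x)
    while True:
        a = x.numerator // x.denominator
        yield a
        frac = x - a
        if frac == 0:
            return
        x = 1 / frac

print(list(cf_terms(Fraction(415, 93))))   # [4, 2, 6, 7]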

ChrisA

Groetjes Albert
-- 
Albert van der Horst, UTRECHT,THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
albert@spearc.xs4all.nl =n http://home.hccnet.nl/a.w.m.van.der.horst

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-03 Thread Chris Angelico
On Tue, Mar 4, 2014 at 1:45 PM, Albert van der Horst
alb...@spenarnc.xs4all.nl wrote:
No, the Python built-in float type works with a subset of real numbers:

 To be more precise: a subset of the rational numbers, those with a denominator
 that is a power of two.

And no more than N bits (53 in a 64-bit float) in the numerator, and
the denominator between the limits of the exponent. (Unless it's
subnormal. That adds another set of small numbers.) It's a pretty
tight set of restrictions, and yet good enough for so many purposes.

But it's a far cry from all real numbers. Even allowing for
continued fractions adds only some more; I don't think you can
represent surds that way.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-03 Thread Rustom Mody
On Tuesday, March 4, 2014 8:32:01 AM UTC+5:30, Chris Angelico wrote:
 On Tue, Mar 4, 2014 at 1:45 PM, Albert van der Horst wrote:
 No, the Python built-in float type works with a subset of real numbers:
  To be more precise: a subset of the rational numbers, those with a 
  denominator
  that is a power of two.

 And no more than N bits (53 in a 64-bit float) in the numerator, and
 the denominator between the limits of the exponent. (Unless it's
 subnormal. That adds another set of small numbers.) It's a pretty
 tight set of restrictions, and yet good enough for so many purposes.

 But it's a far cry from all real numbers. Even allowing for
 continued fractions adds only some more; I don't think you can
 represent surds that way.

See

http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/cfINTRO.html#sqrts

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-03 Thread Chris Angelico
On Tue, Mar 4, 2014 at 2:13 PM, Rustom Mody rustompm...@gmail.com wrote:
 But it's a far cry from all real numbers. Even allowing for
 continued fractions adds only some more; I don't think you can
 represent surds that way.

 See

 http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/cfINTRO.html#sqrts

That's neat, didn't know that. Is there an efficient way to figure
out, for any integer N, what its sqrt's CF sequence is? And what about
the square roots of non-integers - can you represent √π that way? I
suspect, though I can't prove, that there will be numbers that can't
be represented even with an infinite series - or at least numbers
whose series can't be easily calculated.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-03 Thread Rustom Mody
On Tuesday, March 4, 2014 9:16:25 AM UTC+5:30, Chris Angelico wrote:
 On Tue, Mar 4, 2014 at 2:13 PM, Rustom Mody  wrote:
  But it's a far cry from all real numbers. Even allowing for
  continued fractions adds only some more; I don't think you can
  represent surds that way.
  See
  http://www.maths.surrey.ac.uk/hosted-sites/R.Knott/Fibonacci/cfINTRO.html#sqrts

 That's neat, didn't know that. Is there an efficient way to figure
 out, for any integer N, what its sqrt's CF sequence is? And what about
 the square roots of non-integers - can you represent √π that way? I
 suspect, though I can't prove, that there will be numbers that can't
 be represented even with an infinite series - or at least numbers
 whose series can't be easily calculated.

You are now asking questions that are really (real-ly?) outside my capacities.

What I know (which may be quite off the mark :-) )

Just as all real numbers almost by definition have a decimal form (may
be infinite eg 1/3 becomes 0.3...) all real numbers likewise have a CF form

For some mathematical (aka arcane) reasons the CF form is actually better.

Furthermore:

1. Transcendental numbers like e and pi have non-repeating infinite CF forms
2. Algebraic numbers (aka surds) have repeating maybe finite(?) forms
3. For some numbers its not known whether they are transcendental or not
(vague recollection pi^sqrt(pi) is one such)
4. Since e^(i*pi) is very much an integer, the above question is surprisingly 
non-trivial
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-03 Thread Steven D'Aprano
On Tue, 04 Mar 2014 14:46:25 +1100, Chris Angelico wrote:

 That's neat, didn't know that. Is there an efficient way to figure out,
 for any integer N, what its sqrt's CF sequence is? And what about the
 square roots of non-integers - can you represent √π that way? I suspect,
 though I can't prove, that there will be numbers that can't be
 represented even with an infinite series - or at least numbers whose
 series can't be easily calculated.

Every rational number can be written as a continued fraction with a 
finite number of terms[1]. Every irrational number can be written as a 
continued fraction with an infinite number of terms, just as every 
irrational number can be written as a decimal number with an infinite 
number of digits. Most of them (to be precise: an uncountably infinite 
number of them) will have no simple or obvious pattern.


[1] To be pedantic, written as *two* continued fractions, one ending with 
the term 1, and one with one less term which isn't 1. That is:

[a; b, c, d, ..., z, 1] == [a; b, c, d, ..., z+1]


Any *finite* CF ending with one can be simplified to use one fewer term. 
Infinite CFs of course don't have a last term.
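
(A quick sanity check of that identity with exact rationals:)

from fractions import Fraction

def cf_value(terms):
    # Evaluate a finite continued fraction [a0; a1, ..., an] exactly.
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

print(cf_value([3, 7, 15, 1]))   # 355/113
print(cf_value([3, 7, 16]))      # 355/113 again, as the identity says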



-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-03-03 Thread Chris Angelico
On Tue, Mar 4, 2014 at 4:53 PM, Steven D'Aprano st...@pearwood.info wrote:
 On Tue, 04 Mar 2014 14:46:25 +1100, Chris Angelico wrote:

 That's neat, didn't know that. Is there an efficient way to figure out,
 for any integer N, what its sqrt's CF sequence is? And what about the
 square roots of non-integers - can you represent √π that way? I suspect,
 though I can't prove, that there will be numbers that can't be
 represented even with an infinite series - or at least numbers whose
 series can't be easily calculated.

 Every irrational number can be written as a
 continued fraction with an infinite number of terms, just as every
 irrational number can be written as a decimal number with an infinite
 number of digits.

It's easy enough to have that kind of expansion, I'm wondering if it's
possible to identify it directly. To render the decimal expansion of a
square root by the cut-and-try method, you effectively keep dividing
until you find that you're close enough; that means you (a) have to
keep the entire number around for each step, and (b) need to do a few
steps to find that the digits aren't changing. But if you can take a
CF (finite or infinite) and do an O(n) transformation on it to produce
that number's square root, then you have an effective means of
representing square roots. Suppose I make a generator function that
represents a fraction:

def one_third():
    while True:
        yield 3

def one_seventh():
    while True:
        yield 1; yield 4; yield 2; yield 8; yield 5; yield 7

I could then make a generator that returns the sum of those two:

def add_without_carry(x, y):
    while True:
        yield next(x)+next(y)

Okay, that's broken for nearly any case, but with a bit more sophistication:

import itertools

def add(x, y):
    prev=None
    nines=0
    while True:
        xx,yy=next(x),next(y)
        tot=xx+yy
        if tot==9:
            nines+=1
            continue
        if tot>9:
            if prev is None: raise OverflowError("exceeds 1.0")
            yield prev+1
            tot-=10
            for _ in range(nines): yield 0
            nines=0
        else:
            if prev is not None: yield prev
        prev=tot

def show(n):
    return ''.join(str(_) for _ in itertools.islice(n,20))

>>> show(add(one_third(),one_seventh()))
'47619047619047619047'
>>> show(add(add(add(one_seventh(),one_seventh()),add(one_seventh(),one_seventh())),add(one_seventh(),one_seventh())))
'85714285714285714285'

In constant space, that will produce the sum of two infinite sequences
of digits. (And it's constant time, too, except when it gets a stream
of nines. Adding three thirds together will produce an infinite loop
as it waits to see if there'll be anything that triggers an infinite
cascade of carries.) Now, if there's a way to do that for square
rooting a number, then the CF notation has a distinct benefit over the
decimal expansion used here. As far as I know, there's no simple way,
in constant space and/or time, to progressively yield more digits of a
number's square root, working in decimal.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-14 Thread Gregory Ewing

Devin Jeanpierre wrote:

There is no way to iterate over all the reals one at a time, no matter
how fast you execute instructions. If you could, it would be trivial
to show that the reals have the same cardinality as the positive
integers: correspond n with the whatever is returned by the nth call
to it.next.


You're assuming that the calls to it.next are discrete
events separated by some nonzero time interval.

A decent transfinite processor would make a continuum
of calls, and execute uncountably many of them in any
finite period of time.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-14 Thread Dave Angel
 Chris Angelico ros...@gmail.com Wrote in message:
 On Fri, Feb 14, 2014 at 5:37 PM, Gregory Ewing



 If it's a quantum computer, it may be able to execute
 all branches of the iteration in parallel. But it
 would only have a probability of returning the right
 answer (in other cases it would kill your cat).
 
 Oh, that's fine, he's not my cat anyway. Go ahead, build it.
 

That cat has got to be at least 79 by now. Probably starved by now.

-- 
DaveA

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-14 Thread Rustom Mody
On Friday, February 14, 2014 12:14:31 PM UTC+5:30, Chris Angelico wrote:

 Oh, that's fine, he's not my cat anyway. Go ahead, build it.

Now Now! I figured you were the cat out here!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-14 Thread Grant Edwards
On 2014-02-14, Gregory Ewing greg.ew...@canterbury.ac.nz wrote:

 If it's a quantum computer, it may be able to execute
 all branches of the iteration in parallel. But it
 would only have a probability of returning the right
 answer (in other cases it would kill your cat).

I know somebody who would claim that _is_ the right answer.

-- 
Grant Edwards   grant.b.edwardsYow! Hello, GORRY-O!!
  at   I'm a GENIUS from HARVARD!!
  gmail.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-14 Thread Devin Jeanpierre
On Fri, Feb 14, 2014 at 3:30 AM, Gregory Ewing
greg.ew...@canterbury.ac.nz wrote:
 Devin Jeanpierre wrote:
 There is no way to iterate over all the reals one at a time, no matter
 how fast you execute instructions. If you could, it would be trivial
 to show that the reals have the same cardinality as the positive
 integers: correspond n with the whatever is returned by the nth call
 to it.next.


 You're assuming that the calls to it.next are discrete
 events separated by some nonzero time interval.

I'm not. If you want to imagine infinitely fast computers, you must
allow for things to precede each other in the program's execution,
or else you can't execute any program at all.

In such an infinitely fast computer, if iteration works by calling a
.next() method repeatedly, it can't iterate uncountably many times, by
construction. If you're executing uncountably many instructions per
second, the loop would terminate immediately, having executed
countably infinitely many iterations.

 A decent transfinite processor would make a continuum
 of calls, and execute uncountably many of them in any
 finite period of time.

Yes, you could imagine a computer that does a thing for every real. My
issue is that the cited thing is transfinite induction, for which the
induction covers countably many values; handling of the rest of the
values is a second step. This is also the implication of such a word
as iteration.

I suppose this was accidental on the part of the poster, and I
shouldn't have disagreed so strongly. I suspect they meant what you
are getting at, instead, which is that there is no successor to any
iteration, and between any two iterations there are uncountably many
iterations that happened. Operations occur in an order, but not
sequentially. (Of course, such a machine would be alien and absurd.)

-- Devin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Oscar Benjamin
On 12 February 2014 10:07, Ben Finney ben+pyt...@benfinney.id.au wrote:
 Chris Angelico ros...@gmail.com writes:

 On Wed, Feb 12, 2014 at 7:56 PM, Ben Finney ben+pyt...@benfinney.id.au 
 wrote:
  So, if I understand you right, you want to say that you've not found
  a computer that works with the *complete* set of real numbers. Yes?

 Correct. [...] My point is that computers *do not* work with real
 numbers, but only ever with some subset thereof [...]

 You've done it again: by saying that computers *do not* work with real
 numbers, that if I find a real number - e.g. the number 4 - your
 position is that, since it's a real number, computers don't work with
 that number.

 That's why I think you need to be clear that your point isn't computers
 don't work with real numbers, but rather computers work only with a
 limited subset of real numbers.

I think Chris' statement above is pretty clear. Also I didn't find the
original statement confusing and it is a reasonable point to make.

While computers can (with some limitations) do a pretty good job of
integers and rational numbers they cannot truly represent real
computation. Other people have mentioned that there are computer
algebra systems that can handle surds and other algebraic numbers or
some transcendental numbers but none of these comes close to the set
of reals.

This isn't even a question of resource constraints: a digital computer
with infinite memory and computing power would still be limited to
working with countable sets, and the real numbers are just not
countable. The fundamentally discrete nature of digital computers
prevents them from being able to truly handle real numbers and real
computation.

A hypothetical idealised analogue computer would be able to truly do
real arithmetic (but I think in practice the errors would be worse
than single precision floating point).


Oscar
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Marko Rauhamaa
Oscar Benjamin oscar.j.benja...@gmail.com:

 This isn't even a question of resource constraints: a digital computer
 with infinite memory and computing power would still be limited to
 working with countable sets, and the real numbers are just not
 countable. The fundamentally discrete nature of digital computers
 prevents them from being able to truly handle real numbers and real
 computation.

Well, if your idealized, infinite, digital computer had ℵ₁ bytes of RAM
and ran at ℵ₁ hertz and Python supported transfinite iteration, you
could easily do reals:

def real_sqrt(y):
    for x in continuum(0, max(1, y)):
        # Note: x is not traversed in the < order but some other
        # well-ordering, which has been proved to exist.
        if x * x == y:
            return x
    assert False

The function could well return in finite time with a precise result for
any given nonnegative real argument.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Ben Finney
Oscar Benjamin oscar.j.benja...@gmail.com writes:

 I think Chris' statement above is pretty clear.

I disagree, as explained.

 Also I didn't find the original statement confusing

I'm happy for you.

 and it is a reasonable point to make.

Yes, and I was not addressing that.

-- 
 \  “It is well to remember that the entire universe, with one |
  `\   trifling exception, is composed of others.” —John Andrew Holmes |
_o__)  |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Chris Angelico
On Fri, Feb 14, 2014 at 1:00 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 Well, if your idealized, infinite, digital computer had ℵ₁ bytes of RAM
 and ran at ℵ₁ hertz and Python supported transfinite iteration, you
 could easily do reals:

 def real_sqrt(y):
     for x in continuum(0, max(1, y)):
         # Note: x is not traversed in the < order but some other
         # well-ordering, which has been proved to exist.
         if x * x == y:
             return x
     assert False

How exactly do you iterate over a continuum, with a digital computer?
Even adding to your requirements that it have an ℵ₁ Hz bus (which, by
the way, I *totally* want - the uses are endless), it would take a
finite amount of time to assign to x the next number, ergo your
algorithm can't guarantee to finish in finite time.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Marko Rauhamaa
Chris Angelico ros...@gmail.com:

 On Fri, Feb 14, 2014 at 1:00 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 Well, if your idealized, infinite, digital computer had ℵ₁ bytes of RAM
 and ran at ℵ₁ hertz and Python supported transfinite iteration, you
 could easily do reals:

 for x in continuum(0, max(1, y)):

 How exactly do you iterate over a continuum, with a digital computer?

How digital our idealized computers are is a matter for a debate.
However, iterating over the continuum is provably possible:

  http://en.wikipedia.org/wiki/Transfinite_induction

 it would take a finite amount of time to assign to x the next
 number, ergo your algorithm can't guarantee to finish in finite time.

My assumption was you could execute ℵ₁ statements per second. That
doesn't guarantee a finite finish time but would make it possible. That
is because

   ℵ₁ * ℵ₁ = ℵ₁ = ℵ₁ * 1

This computer is definitely more powerful than a Turing machine, which
only has ℵ₀ bytes of RAM and thus can't even store an arbitrary real
value in memory.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Chris Angelico
On Fri, Feb 14, 2014 at 6:47 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 My assumption was you could execute ℵ₁ statements per second. That
 doesn't guarantee a finite finish time but would make it possible. That
 is because

ℵ₁ * ℵ₁ = ℵ₁ = ℵ₁ * 1

Hmm. I never actually covered this stuff in grade school - the
construction of infinite computing power didn't exactly come up. But
as I understand it, just calculating the next number requires ℵ₁ RAM
operations, and you have to do that ℵ₁ times per second. So what
you're saying is that it's possible to do that? You can execute ℵ₁
operations where each operation has to wait for ℵ₁ bus actions before
continuing?

This would do my head in if I hadn't already thoroughly broken my
brain on other insanities.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Rotwang
What's this? A discussion about angels dancing on the head of a pin? 
Great, I'm in.


On 13/02/2014 14:00, Marko Rauhamaa wrote:

Oscar Benjamin oscar.j.benja...@gmail.com:


This isn't even a question of resource constraints: a digital computer
with infinite memory and computing power would still be limited to
working with countable sets, and the real numbers are just not
countable. The fundamentally discrete nature of digital computers
prevents them from being able to truly handle real numbers and real
computation.


Well, if your idealized, infinite, digital computer had ℵ₁ bytes of RAM
and ran at ℵ₁ hertz and Python supported transfinite iteration, you
could easily do reals:

 def real_sqrt(y):
     for x in continuum(0, max(1, y)):
         # Note: x is not traversed in the < order but some other
         # well-ordering, which has been proved to exist.
         if x * x == y:
             return x
     assert False

The function could well return in finite time with a precise result for
any given nonnegative real argument.



Minor point: ℵ₁ does not mean the cardinality c of the continuum, it 
means the smallest cardinal larger than ℵ₀. It has been proved that the 
question of whether ℵ₁ == c is independent of ZFC, so it is in a sense 
unanswerable.


More importantly, though, such a computer could not complete the above 
iteration in finite time unless time itself is not real-valued. That's 
because if k is an uncountable ordinal then there is no strictly 
order-preserving function from k to the unit interval [0, 1]. For 
suppose otherwise, and let f be such a function. Let S denote the set of 
successor ordinals in k, and let L denote the set of limit ordinals in 
k. Then lambda x: x + 1 is an injective function from L (or L with a 
single point removed if k is the successor of a limit ordinal) to S, so 
that S is at least as large as L and since k == S | L it follows that S 
is uncountable.


For each x + 1 in S, let g(x + 1) = f(x + 1) - f(x) > 0. Let F be any 
finite subset of S and let y = max(F). It is clear that f(y) >= sum(g(x) 
for x in F). Since also f(y) <= 1, we have sum(g(x) for x in F) <= 1 
for all finite F. In particular, for any integer n > 0, the set S_n = {x 
for x in S if g(x) > 1/n} has len(S_n) < n. But then S is the union of 
the countable collection {S_n for n in N} of finite sets, so is 
countable; a contradiction.



On 13/02/2014 19:47, Marko Rauhamaa wrote:

My assumption was you could execute ℵ₁ statements per second. That
doesn't guarantee a finite finish time but would make it possible. That
is because

ℵ₁ * ℵ₁ = ℵ₁ = ℵ₁ * 1


I don't think that's enough - assuming the operations of your processor 
during a second can be indexed by some ordinal k with len(k) == c, if 
each of the c operations per iteration must be complete before the next 
step of the for loop is complete then you need an injective function 
from c * c to k that preserves the lexicographic ordering. I don't know 
whether such a function exists for arbitrary such k, but k can be chosen 
in advance so that it does.

--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Marko Rauhamaa
Rotwang sg...@hotmail.co.uk:

  for x in continuum(0, max(1, y)):
  # Note: x is not traversed in the < order but some other
  # well-ordering, which has been proved to exist.
  if x * x == y:
  return x

 [...]

 More importantly, though, such a computer could not complete the above
 iteration in finite time unless time itself is not real-valued. That's
 because if k is an uncountable ordinal then there is no strictly
 order-preserving function from k to the unit interval [0, 1].

If you read the code comment above, the transfinite iterator yields the
whole continuum, not in the < order (which is impossible), but in some
other well-ordering (which is known to exist). Thus, we can exhaust the
continuum in ℵ₁ discrete steps.

(Yes, the continuum hypothesis was used to make the notation easier to
read.)


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Rotwang

On 13/02/2014 22:00, Marko Rauhamaa wrote:

Rotwang sg...@hotmail.co.uk:


  for x in continuum(0, max(1, y)):
  # Note: x is not traversed in the < order but some other
  # well-ordering, which has been proved to exist.
  if x * x == y:
  return x


[...]


Restoring for context:


The function could well return in finite time with a precise result
for any given nonnegative real argument.




More importantly, though, such a computer could not complete the above
iteration in finite time unless time itself is not real-valued. That's
because if k is an uncountable ordinal then there is no strictly
order-preserving function from k to the unit interval [0, 1].


If you read the code comment above, the transfinite iterator yields the
whole continuum, not in the < order (which is impossible), but in some
other well-ordering (which is known to exist). Thus, we can exhaust the
continuum in ℵ₁ discrete steps.


Yes, I understood that. But my point was that it can't carry out those 
ℵ₁ discrete steps in finite time (assuming that time is real-valued), 
because there's no way to embed them in any time interval without 
changing their order. Note that this is different to the case of 
iterating over a countable set, since the unit interval does have 
countable well-ordered subsets.

--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Marko Rauhamaa
Rotwang sg...@hotmail.co.uk:

 But my point was that it can't carry out those ℵ₁ discrete steps in
 finite time (assuming that time is real-valued), because there's no
 way to embed them in any time interval without changing their order.

I'd have to think so I take your word for it.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Gregory Ewing

Dave Angel wrote:

Actually, the particular example you use can be done.  When
 printing the infinite sum of two infinite decimal streams,  you
 can simply hold back whenever you get one or more nines.


But you only have a finite amount of space for keeping
track of how many nines you've seen, so there will be
some inputs with more nines than you can handle.
(Infinitely many such inputs, in fact!)

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Devin Jeanpierre
On Thu, Feb 13, 2014 at 11:47 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 Chris Angelico ros...@gmail.com:

 On Fri, Feb 14, 2014 at 1:00 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 Well, if your idealized, infinite, digital computer had ℵ₁ bytes of RAM
 and ran at ℵ₁ hertz and Python supported transfinite iteration, you
 could easily do reals:

 for x in continuum(0, max(1, y)):

 How exactly do you iterate over a continuum, with a digital computer?

 How digital our idealized computers are is a matter for a debate.
 However, iterating over the continuum is provably possible:

   http://en.wikipedia.org/wiki/Transfinite_induction

You missed the most important point on that page, which is the limit case.

There is no way to iterate over all the reals one at a time, no matter
how fast you execute instructions. If you could, it would be trivial
to show that the reals have the same cardinality as the positive
integers: correspond n with whatever is returned by the nth call
to it.next.

It doesn't matter if you call your magical iterator transfinite,
that doesn't make it so.

-- Devin
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Gregory Ewing

Chris Angelico wrote:

Even adding to your requirements that it have an ℵ₁ Hz bus (which, by
the way, I *totally* want - the uses are endless), it would take a
finite amount of time to assign to x the next number, ergo your
algorithm can't guarantee to finish in finite time.


If it's a quantum computer, it may be able to execute
all branches of the iteration in parallel. But it
would only have a probability of returning the right
answer (in other cases it would kill your cat).

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-13 Thread Chris Angelico
On Fri, Feb 14, 2014 at 5:37 PM, Gregory Ewing
greg.ew...@canterbury.ac.nz wrote:
 Chris Angelico wrote:

 Even adding to your requirements that it have an ℵ₁ Hz bus (which, by
 the way, I *totally* want - the uses are endless), it would take a

 finite amount of time to assign to x the next number, ergo your
 algorithm can't guarantee to finish in finite time.


 If it's a quantum computer, it may be able to execute
 all branches of the iteration in parallel. But it
 would only have a probability of returning the right
 answer (in other cases it would kill your cat).

Oh, that's fine, he's not my cat anyway. Go ahead, build it.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-02-12 Thread wxjmfauth
Integers are integers. (1)
Characters are characters. (2)

(1) is a unique natural set.

(2) is an artificial construct working
with 3 sets (unicode).

jmf
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-02-12 Thread Chris Angelico
On Wed, Feb 12, 2014 at 7:17 PM, Ben Finney ben+pyt...@benfinney.id.au wrote:
 Chris Angelico ros...@gmail.com writes:

 I have yet to find any computer that works with the set of real
 numbers in any way. Never mind optimization, they simply cannot work
 with real numbers.

 Not *any* computer? Not in *any* way? The Python built-in ‘float’ type
 “works with the set of real numbers”, in a way.

No, the Python built-in float type works with a subset of real numbers:

>>> float("pi")
Traceback (most recent call last):
  File "<pyshell#1>", line 1, in <module>
    float("pi")
ValueError: could not convert string to float: 'pi'
>>> float("π")
Traceback (most recent call last):
  File "<pyshell#2>", line 1, in <module>
    float("π")
ValueError: could not convert string to float: 'π'

Same goes for fractions.Fraction and [c]decimal.Decimal. All of them
are restricted to some subset of rational numbers, not all reals.

 The <URL:http://docs.python.org/2/library/numbers.html#numbers.Real> ABC
 defines behaviours for types implementing the set of real numbers.

 What specific behaviour would, for you, qualify as “works with the set
 of real numbers in any way”?

Being able to represent surds, pi, e, etc, for a start. It'd
theoretically be possible with an algebraic notation (eg by carrying
through some representation like 2*pi rather than 6.28), but
otherwise, irrationals can't be represented with finite storage and a
digit-based system.
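
For what it's worth, the algebraic-notation idea is roughly what symbolic
packages do today. A small sketch using SymPy (a third-party library, not
the stdlib; the particular calls are only illustrative):

    import sympy

    # Exact symbolic objects instead of decimal approximations.
    x = 2 * sympy.pi                     # stays as 2*pi, never 6.28...
    r = sympy.sqrt(2)                    # an exact surd

    print(sympy.simplify(x / sympy.pi))  # 2, exactly
    print(r ** 2)                        # 2, exactly
    print(sympy.N(x, 30))                # decimal digits only on request

Of course this only pushes the problem around: the moment you ask for
digits you are back to a finite approximation.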

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-02-12 Thread wxjmfauth
On Wednesday, 12 February 2014 09:35:38 UTC+1, wxjm...@gmail.com wrote:
 Integers are integers. (1)
 Characters are characters. (2)

 (1) is a unique natural set.

 (2) is an artificial construct working
 with 3 sets (unicode).

 jmf

Addendum: One should not confuse unicode and the implementation
of unicode.

jmf
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Ben Finney
wxjmfa...@gmail.com writes:

 (2) is an artificial construct working
 with 3 sets (unicode).

jmf, you are being exceedingly disruptive: attempting to derail
unrelated discussions for your favourite hobby-horse topic. Please stop.

Everyone else: Please don't engage these attempts; instead, avoid
engaging with this trollish behaviour.

-- 
 \   “… one of the main causes of the fall of the Roman Empire was |
  `\that, lacking zero, they had no way to indicate successful |
_o__)  termination of their C programs.” —Robert Firth |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Ben Finney
Chris Angelico ros...@gmail.com writes:

 On Wed, Feb 12, 2014 at 7:17 PM, Ben Finney ben+pyt...@benfinney.id.au 
 wrote:
  Chris Angelico ros...@gmail.com writes:
 
  I have yet to find any computer that works with the set of real
  numbers in any way. Never mind optimization, they simply cannot
  work with real numbers.
 
  Not *any* computer? Not in *any* way? The Python built-in ‘float’
  type “works with the set of real numbers”, in a way.

 No, the Python built-in float type works with a subset of real numbers

So, “works with a subset of real numbers” is not satisfactory, then. Okay.

 Same goes for fractions.Fraction and [c]decimal.Decimal. All of them
 are restricted to some subset of rational numbers, not all reals.

  What specific behaviour would, for you, qualify as “works with the
  set of real numbers in any way”?

 Being able to represent surds, pi, e, etc, for a start.

So, if I understand you right, you want to say that you've not found a
computer that works with the *complete* set of real numbers. Yes?

-- 
 \   “To have the choice between proprietary software packages, is |
  `\  being able to choose your master. Freedom means not having a |
_o__)master.” —Richard M. Stallman, 2007-05-16 |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Chris Angelico
On Wed, Feb 12, 2014 at 7:56 PM, Ben Finney ben+pyt...@benfinney.id.au wrote:
 So, if I understand you right, you want to say that you've not found a
 computer that works with the *complete* set of real numbers. Yes?

Correct. When jmf referred to real numbers, he implied that there are
no optimizations done for natural numbers, that everything's just as
efficient for any real number as for any other. My point is that
computers *do not* work with real numbers, but only ever with some
subset thereof, and that certain subsets (integers usually) are
optimized for in ways that other subsets aren't.

A true real number type might be useful in a few extremely narrow
situations, but for the most part, I'd much rather have the optimized
implementation that works with a subset thereof, and actually runs
within reasonable time/space complexity. (Though, that said, I think a
lot of programmers could do with some education on exactly _what_
subset of real numbers they're working with. The classic IEEE
double-precision floating point type is good enough with low numbers
that lots of people seem to think it stores reals, which it doesn't.)

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-02-12 Thread Jussi Piitulainen
Chris Angelico writes:
 On Wed, Feb 12, 2014 at 7:17 PM, Ben Finney wrote:
  What specific behaviour would, for you, qualify as “works with the
  set of real numbers in any way”?
 
 Being able to represent surds, pi, e, etc, for a start. It'd
 theoretically be possible with an algebraic notation (eg by carrying
 through some representation like 2*pi rather than 6.28), but
 otherwise, irrationals can't be represented with finite storage and
 a digit-based system.

I've seen papers on exact computable reals that would, in effect,
generate more precision when needed for some operation. It wasn't
symbolic like 2pi, more like 6.28... with a promise to delve into the
ellipsis, and some notable operations not supported.

Equality testing was missing, I think, and I think it could not be
known in general whether such a number is positive, zero or negative,
so even approximate printing in the usual digit notation would not be
possible. (Interval arithmetic, I hear, has a similar problem about
not knowing the sign of a number.)

In stark contrast, exact rationals work nicely, up to efficiency
considerations.
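
As a toy illustration of the precision-on-demand idea (my own sketch, not
taken from any of those papers): model a computable real as a function
that, given n, returns a rational within 2**-n of the true value.

    from fractions import Fraction

    def const(q):
        """An exact rational as a computable real."""
        return lambda n: Fraction(q)

    def add(x, y):
        """Sum of two computable reals: ask each operand for one
        extra bit so the combined error still fits within 2**-n."""
        return lambda n: x(n + 1) + y(n + 1)

    def sqrt2(n):
        """Square root of 2 by bisection, to within 2**-n."""
        lo, hi = Fraction(1), Fraction(2)
        while hi - lo > Fraction(1, 2 ** n):
            mid = (lo + hi) / 2
            if mid * mid <= 2:
                lo = mid
            else:
                hi = mid
        return lo

    x = add(sqrt2, const(1))
    print(float(x(40)))    # about 2.414213562..., good to ~12 digits

Note there is no equality test here, only approximation to any requested
accuracy, which is exactly the limitation described above.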
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Ben Finney
Chris Angelico ros...@gmail.com writes:

 On Wed, Feb 12, 2014 at 7:56 PM, Ben Finney ben+pyt...@benfinney.id.au 
 wrote:
  So, if I understand you right, you want to say that you've not found
  a computer that works with the *complete* set of real numbers. Yes?

 Correct. […] My point is that computers *do not* work with real
 numbers, but only ever with some subset thereof […]

You've done it again: by saying that “computers *do not* work with real
numbers”, you imply that if I find a real number – e.g. the number 4 –
then, since it's a real number, computers don't work with that number.

That's why I think you need to be clear that your point isn't “computers
don't work with real numbers”, but rather “computers work only with a
limited subset of real numbers”.

-- 
 \“We have to go forth and crush every world view that doesn't |
  `\believe in tolerance and free speech.” —David Brin |
_o__)  |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Chris Angelico
On Wed, Feb 12, 2014 at 9:07 PM, Ben Finney ben+pyt...@benfinney.id.au wrote:
 Chris Angelico ros...@gmail.com writes:

 On Wed, Feb 12, 2014 at 7:56 PM, Ben Finney ben+pyt...@benfinney.id.au 
 wrote:
  So, if I understand you right, you want to say that you've not found
  a computer that works with the *complete* set of real numbers. Yes?

 Correct. […] My point is that computers *do not* work with real
 numbers, but only ever with some subset thereof […]

 You've done it again: by saying that “computers *do not* work with real
 numbers”, that if I find a real number – e.g. the number 4 – your
 position is that, since it's a real number, computers don't work with
 that number.

 That's why I think you need to be clear that your point isn't “computers
 don't work with real numbers”, but rather “computers work only with a
 limited subset of real numbers”.

Hmm, I'm not sure that my statement is false. If a computer can work
with real numbers, then I would expect it to be able to work with
any real number. In C, I can declare an 'int' variable, which can hold
the real number 4 - does that mean that that variable stores real
numbers? No, and it's not useful to say that it does. It doesn't store
rationals either, even though 4 is a rational. The fact that computers
can work with some subset of real numbers does not disprove my
statement that computers don't work with real numbers as a class.
Program X works with text files, but it fails if the file contains
U+003C; can I feed it this thing, which is a text file? No, I can't,
because it works only with a subset of text files.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread wxjmfauth
The fascinating aspect of this FSR lies
in its mathematical absurdity.

jmf
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Ben Finney
Chris Angelico ros...@gmail.com writes:

 On Wed, Feb 12, 2014 at 9:07 PM, Ben Finney ben+pyt...@benfinney.id.au 
 wrote:
  That's why I think you need to be clear that your point isn't
  “computers don't work with real numbers”, but rather “computers work
  only with a limited subset of real numbers”.

 Hmm, I'm not sure that my statement is false. If a computer can work
 with real numbers, then I would expect it to be able to work with
 any real number.

Likewise, if you claim that a computer *does not* work with real
numbers, then I would expect that for any real number, the computer
would fail to work with that number.

Which is why neither of those is a good statement of your position, IMO,
and you're better off saying the *limitations* you're describing.

-- 
 \  “An expert is a man who has made all the mistakes which can be |
  `\ made in a very narrow field.” —Niels Bohr |
_o__)  |
Ben Finney

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Ned Batchelder

On 2/12/14 5:55 AM, wxjmfa...@gmail.com wrote:

The fascinating aspect of this FSR lies
in its mathematical absurdity.

jmf



Stop.

--
Ned Batchelder, http://nedbatchelder.com

--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Chris Angelico
On Wed, Feb 12, 2014 at 10:44 PM, Ben Finney ben+pyt...@benfinney.id.au wrote:
 Chris Angelico ros...@gmail.com writes:

 On Wed, Feb 12, 2014 at 9:07 PM, Ben Finney ben+pyt...@benfinney.id.au 
 wrote:
  That's why I think you need to be clear that your point isn't
  “computers don't work with real numbers”, but rather “computers work
  only with a limited subset of real numbers”.

 Hmm, I'm not sure that my statement is false. If a computer can work
 with real numbers, then I would expect it to be able to work with
 any real number.

 Likewise, if you claim that a computer *does not* work with real
 numbers, then I would expect that for any real number, the computer
 would fail to work with that number.

 Which is why neither of those is a good statement of your position, IMO,
 and you're better off saying the *limitations* you're describing.

I think we're using different words to say the same thing here :)

What I mean is that one cannot accurately say that a computer works
with real numbers, because it cannot work with them all. Of course a
computer can work with _some_ real numbers; but only some. (An awful
lot of them, of course. A ridiculously huge number of numbers. More
numbers than you could read in a lifetime! While the number is
extremely large, it still falls pitifully short of infinity.[1]) And
so we do have optimizations for some subset of reals: in approximate
order of performance, an arbitrary-precision integer type, a limited
precision floating point type, and two types that handle fractions
(vulgar and decimal). They're all, in a sense, optimizations. In pure
theory, we could have a single real number type and do everything
with that; all the other types are approximations to that.

[1] http://tools.ietf.org/search/rfc2795
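
To make that concrete, here is the same value, one tenth, in the types
mentioned above (a throwaway CPython session, nothing more):

    from fractions import Fraction
    from decimal import Decimal

    print(1 / 10)            # 0.1 shown, but really the nearest double
    print(Fraction(1, 10))   # 1/10, exact
    print(Decimal("0.1"))    # 0.1, exact in decimal
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
    print(0.1 + 0.2 == 0.3)  # False: binary floats accumulate error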
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Marko Rauhamaa
Chris Angelico ros...@gmail.com:

 Hmm, I'm not sure that my statement is false. If a computer can work
 with real numbers, then I would expect it to be able to work with
 any real number. In C, I can declare an 'int' variable, which can hold
 the real number 4 - does that mean that that variable stores real
 numbers? No, and it's not useful to say that it does. It doesn't store
 rationals either, even though 4 is a rational. The fact that computers
 can work with some subset of real numbers does not disprove my
 statement that computers don't work with real numbers as a class.
 Program X works with text files, but it fails if the file contains
 U+003C; can I feed it this thing, which is a text file? No, I can't,
 because it works only with a subset of text files.

According to your definition, there's no computer in the world that can
work with integers or text files.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Chris Angelico
On Wed, Feb 12, 2014 at 11:48 PM, Marko Rauhamaa ma...@pacujo.net wrote:
 Chris Angelico ros...@gmail.com:

 Hmm, I'm not sure that my statement is false. If a computer can work
 with real numbers, then I would expect it to be able to work with
 any real number. In C, I can declare an 'int' variable, which can hold
 the real number 4 - does that mean that that variable stores real
 numbers? No, and it's not useful to say that it does. It doesn't store
 rationals either, even though 4 is a rational. The fact that computers
 can work with some subset of real numbers does not disprove my
 statement that computers don't work with real numbers as a class.
 Program X works with text files, but it fails if the file contains
 U+003C; can I feed it this thing, which is a text file? No, I can't,
 because it works only with a subset of text files.

 According to your definition, there's no computer in the world that can
 work with integers or text files.

Integers as far as RAM will allow, usually (which is the same caveat
as is used when describing a programming language as Turing complete
- strictly, that term is valid only if it has infinite memory
available), but yes, technically that's a subset of integers. However,
that subset is bounded by something other than the code, algorithms,
or even hardware - it's theoretically possible to add two numbers
larger than will fit in memory, by reading them in (even over the
network), adding segments, and writing them out again.

Text files. Since there's already no such thing as a text file
unless you know what its encoding is, I don't see a problem with this.
There's no such thing as an integer in memory, either, unless you know
how it's encoded (those same bits could be a floating point number, or
a pointer, or anything). If you know that the bytes in the file are,
say, a UTF-8 stream, then the file is a text file, just as it could be
a bash script, or an MS-DOS .COM file, if you've been told to decode
it in that way. Once your encoding is declared (out of band), the file
consists of a series of ASCII characters, or Unicode codepoints, or
whatever else it is. A fully functional program should be able to
process that file regardless of what sequence of codepoints it
carries. Say you want to search a file for a particular string, for
instance. You want to know whether or not foobar occurs in a file.
(I'll leave aside the question of word boundaries and say you're
looking for that string of six characters.) The program should be able
to determine the presence or absence of foobar regardless of what
other characters (or codepoints) are around it. Having U+001A
shouldn't stop the search there; nor should U+0000 cause problems, nor
U+003C, nor any other value. Doing otherwise would be a restriction:
this program supports only a subset of text files (those not
containing these problem characters). It might not be a bug, per se
(maybe text inside angle_brackets is considered to be an XML tag and
is deemed to be not what you're looking for), but it's still a
restriction. An inability to represent the integer 9007199254740993
(but able to represent ...992 and ...994) is a restriction.
Restrictions aren't necessarily bad, but they need to be acknowledged.
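
That particular restriction is easy to check interactively (2**53 + 1 is
the first integer a 64-bit IEEE double cannot hold; the session below is
only illustrative):

    n = 9007199254740993              # 2**53 + 1
    print(n == 2**53 + 1)             # True: Python ints are arbitrary precision
    print(float(n))                   # 9007199254740992.0 - rounded to ...992
    print(float(n) == float(n - 1))   # True: both collapse to the same double
    print(int(float(n)) == n)         # False: the round trip loses the value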

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Marko Rauhamaa
Chris Angelico ros...@gmail.com:

 On Wed, Feb 12, 2014 at 11:48 PM, Marko Rauhamaa ma...@pacujo.net wrote:
 According to your definition, there's no computer in the world that can
 work with integers or text files.

 Integers as far as RAM will allow, usually (which is the same caveat
 as is used when describing a programming language as Turing complete
 - strictly, that term is valid only if it has infinite memory
 available), but yes, technically that's a subset of integers. However,
 that subset is bounded by something other than the code, algorithms,
 or even hardware - it's theoretically possible to add two numbers
 larger than will fit in memory, by reading them in (even over the
 network), adding segments, and writing them out again.

 Text files. Since there's already no such thing as a text file
 unless you know what its encoding is, I don't see a problem with this.

Text files suffer from the same caveat as integers: there's a limit to
how much you can store on the physical computer.

A similar caveat prevents computers from dealing with real numbers. In
the case of integers, you have a finite subset of ℵ₀. In the case of
reals, you have a finite subset of ℵ₁.

Yes, integers are algorithmically much more tractable than reals.
However, in practice integer math is often computationally much harder
than real math. Take cryptography vs calculus as an example.


Marko
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Rustom Mody
On Wednesday, February 12, 2014 3:37:04 PM UTC+5:30, Ben Finney wrote:
 Chris Angelico writes:

  On Wed, Feb 12, 2014 at 7:56 PM, Ben Finney  wrote:
   So, if I understand you right, you want to say that you've not found
   a computer that works with the *complete* set of real numbers. Yes?
  Correct. [...] My point is that computers *do not* work with real
  numbers, but only ever with some subset thereof [...]

 You've done it again: by saying that computers *do not* work with real
 numbers, that if I find a real number - e.g. the number 4 - your
 position is that, since it's a real number, computers don't work with
 that number.

There is a convention in logic called the implicit universal quantifier
convention: when a bald unqualified reference is in a statement it means 
it is universally quantified. eg
A triangle is a polygon with 3 sides
really means
ALL triangles are polygons with 3 sides ie the ALL is implied

Now when for-all is inverted by de Morgan it becomes for-some not...

So "computers work with real numbers" really means "computers work with
all real numbers" and that is not true

 That's why I think you need to be clear that your point isn't computers
 don't work with real numbers, but rather computers work only with a
 limited subset of real numbers.

Yes both these statements are true by above.

In fact computers cannot work with real numbers because the real number 
set is undecidable/uncomputable. In particular, trivial operations like
equality on reals -- IN GENERAL -- are undecidable.

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-02-12 Thread Grant Edwards
On 2014-02-12, Ben Finney ben+pyt...@benfinney.id.au wrote:
 Chris Angelico ros...@gmail.com writes:

 I have yet to find any computer that works with the set of real
 numbers in any way. Never mind optimization, they simply cannot work
 with real numbers.

 Not *any* computer? Not in *any* way? The Python built-in float
 type works with the set of real numbers, in a way.

The only people who think that are people who don't actually _use_
floating point types on computers.

 What specific behaviour would, for you, qualify as works with the
 set of real numbers in any way

There's a whole laundry list of things (some of them rather nasty and
difficult) you have to worry about when using FP that simply don't
apply to real numbers.

-- 
Grant Edwards   grant.b.edwardsYow! HUGH BEAUMONT died
  at   in 1982!!
  gmail.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers (was: Finding size of Variable)

2014-02-12 Thread Gisle Vanem

Grant Edwards wrote:


Not *any* computer? Not in *any* way? The Python built-in float
type works with the set of real numbers, in a way.


The only people who think that are people who don't actualy _use_
floating point types on computers.


FPU parsing the IEEE spec, or?. I didn't quite parse what *you* wrote. 
To paraphrase:

 #include <math.h>
 there are FP_NORMAL and FP_SUBNORMAL people in the world; 
  those who understand IEEE 754 and those who don't.  ..


--gv
--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Chris Angelico
On Thu, Feb 13, 2014 at 1:13 AM, Marko Rauhamaa ma...@pacujo.net wrote:
 Text files suffer from the same caveat as integers: there's a limit to
 how much you can store on the physical computer.

Sure, but nobody said the text file had to be _stored_ anywhere :)
Computers are quite capable of working with streams of incoming data
that are potentially infinite in size. But again, this is the same
caveat as the Turing machine. If you wrote a Python interpreter for a
Turing machine, it would be - without any changes to Python - capable
of handling any integer and any text file.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Ian Kelly
On Wed, Feb 12, 2014 at 7:11 AM, Rustom Mody rustompm...@gmail.com wrote:
 On Wednesday, February 12, 2014 3:37:04 PM UTC+5:30, Ben Finney wrote:
 Chris Angelico writes:

  On Wed, Feb 12, 2014 at 7:56 PM, Ben Finney  wrote:
   So, if I understand you right, you want to say that you've not found
   a computer that works with the *complete* set of real numbers. Yes?
  Correct. [...] My point is that computers *do not* work with real
  numbers, but only ever with some subset thereof [...]

 You've done it again: by saying that computers *do not* work with real
 numbers, that if I find a real number - e.g. the number 4 - your
 position is that, since it's a real number, computers don't work with
 that number.

 There is a convention in logic called the implicit universal quantifier
 convention: when a bald unqualified reference is in a statement it means
 it is universally quantified. eg
 A triangle is a polygon with 3 sides
 really means
 ALL polygons with 3 sides are triangles ie the ALL is implied

 Now when for-all is inverted by de Morgan it becomes for-some not...

 So computers work with real numbers really means computers work with
 all real numbers and that is not true

I take exception whenever I see somebody trying to use predicate logic
to determine the meaning of an English sentence.  English does not
follow the rules of predicate logic, and English sentences do not map
consistently to logical sentences.

To me, the meaning of "computers do not work with X" depends upon the
domain of X.  "Computers do not work with real numbers" implies that
computers do not work with the set of real numbers (but implies
nothing about subsets).  "Computers do not work with keyboards" on the
other hand would imply that no computer works with any keyboard (which
of course is demonstrably false).
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Gregory Ewing

Ben Finney wrote:

That's why I think you need to be clear that your point isn't “computers
don't work with real numbers”, but rather “computers work only with a
limited subset of real numbers”.


They actually work with a subset of *rational* numbers.
All floats representable by a computer are rational.

The rationals happen to be a subset of the reals, but
that's kind of beside the point given that a float
can't represent *any* real number that isn't also
a rational.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Gregory Ewing

Chris Angelico wrote:

Sure, but nobody said the text file had to be _stored_ anywhere :)
Computers are quite capable of working with streams of incoming data
that are potentially infinite in size.


However, they *can't* work with arbitrary real numbers in an
exact way, even if they are represented by infinitely long
digit streams, and you're willing to run the program for
an infinitely long time to get the result.

Consider adding two of these numbers, for example. You have
to do it starting at the big end, because the small end is
infinitely far away. And you only have a limited amount of
buffer space, so you need to start writing out result
digits before you've seen all the input digits.

But you can't do that, because it's possible that some
pair of input digits you haven't seen yet will cause a
carry-over that ripples back and affects something you've
already written out.

This is where schemes such as computable reals get into
trouble. Doing arithmetic with them gets extremely messy.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Gregory Ewing

Chris Angelico wrote:

Of course a
computer can work with _some_ real numbers; but only some. (An awful
lot of them, of course. A ridiculously huge number of numbers. More
numbers than you could read in a lifetime! While the number is
extremely large, it still falls pitifully short of infinity.[1])


The number of integers it can work with is also
vanishingly small compared to the total number
of integers.

However, the number of reals is vastly greater
than the number of integers, so the proportion
of reals it can work with is even *more*
vanishingly small. In some sense.

--
Greg
--
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Dave Angel
 Gregory Ewing greg.ew...@canterbury.ac.nz Wrote in message:
 Chris Angelico wrote:
 Sure, but nobody said the text file had to be _stored_ anywhere :)
 Computers are quite capable of working with streams of incoming data
 that are potentially infinite in size.
 
 However, they *can't* work with arbitrary real numbers in an
 exact way, even if they are represented by infinitely long
 digit streams, and you're willing to run the program for
 an infinitely long time to get the result.
 
 Consider adding two of these numbers, for example. You have
 to do it starting at the big end, because the small end is
 infinitely far away. And you only have a limited amount of
 buffer space, so you need to start writing out result
 digits before you've seen all the input digits.
 
 But you can't do that, because it's possible that some
 pair of input digits you haven't seen yet will cause a
 carry-over that ripples back and affects something you've
 already written out.
 
 
Actually, the particular example you use can be done.  When
 printing the infinite sum of two infinite decimal streams,  you
 can simply hold back whenever you get one or more nines.  Instead
 keep a count of how many nines you're holding,  and emit them all
 only when you get something other than 9. I've done something
 analogous doing run length encoding of an rs232 stream.

But you're right in general. 
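
A rough sketch of that hold-back-the-nines scheme in Python (digit streams
modelled as generators over Fractions; the names and structure here are
mine, not anything from the post):

    from fractions import Fraction
    from itertools import islice

    def digits(frac):
        """Infinite stream of decimal digits of a Fraction in [0, 1),
        most significant digit first."""
        while True:
            frac *= 10
            d = int(frac)          # truncates, giving the next digit
            yield d
            frac -= d

    def add_streams(a, b):
        """Digit stream of the sum of two digit streams (each value in
        [0, 1)); the first digit yielded is the integer part (0 or 1).
        Runs of 9s are held back until a later digit settles the carry."""
        pending, nines = 0, 0
        for s in (x + y for x, y in zip(a, b)):
            if s <= 8:               # no carry can reach earlier digits
                yield pending
                for _ in range(nines):
                    yield 9
                pending, nines = s, 0
            elif s >= 10:            # a carry definitely ripples back
                yield pending + 1
                for _ in range(nines):
                    yield 0
                pending, nines = s - 10, 0
            else:                    # s == 9: ambiguous, hold it back
                nines += 1

    # 1/7 + 2/7 = 3/7 = 0.428571...; the leading 0 is the integer part.
    stream = add_streams(digits(Fraction(1, 7)), digits(Fraction(2, 7)))
    print(list(islice(stream, 10)))  # [0, 4, 2, 8, 5, 7, 1, 4, 2, 8]

Greg's objection shows up immediately, though: feed it 1/3 and 1/6 and
every digit sum is 9, so the count of held nines grows without bound and
the 0.5 the sum is converging to never gets printed.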


-- 
DaveA

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Grant Edwards
On 2014-02-12, Gregory Ewing greg.ew...@canterbury.ac.nz wrote:
 Chris Angelico wrote:

 Of course a computer can work with _some_ real numbers; but only
 some. (An awful lot of them, of course. A ridiculously huge number of
 numbers. More numbers than you could read in a lifetime! While the
 number is extremely large, it still falls pitifully short of
 infinity.[1])

 The number of integers it can work with is also vanishingly small
 compared to the total number of integers.

 However, the number of reals is vastly greater than the number of
 integers, so the proportion of reals it can work with is even *more*
 vanishingly small. In some sense.

More importantly, computers can generally work with a subset of
integers consisting of all integers between a min value and a max
value. The min and max may be known and fixed at compile time (e.g. C
int on a 32-bit machine), or it may depend on how much memory and
time you have.  But knowing that you can represent all values in some
range makes life pretty easy.

OTOH, no matter how small the magnitude of the range of real numbers
you pick, computer FP can only represent a very tiny subset of the
rational numbers which are an even tinier subset of the real numbers
within whatever range you care to pick.  If you pick your range and
representation intelligently, you can still do some pretty useful
stuff. But, if you pretend you're actually working with real numbers
you will come a cropper.
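
One way to make "a very tiny subset" concrete: consecutive positive
doubles have consecutive bit patterns, so you can count how many of them
fall in a given range (a struct-based sketch, my wording not Grant's):

    import struct

    def double_bits(x):
        """The IEEE-754 bit pattern of a Python float, as an integer."""
        return struct.unpack('<Q', struct.pack('<d', x))[0]

    # Number of representable doubles in [1.0, 2.0): exactly 2**52.
    print(double_bits(2.0) - double_bits(1.0))   # 4503599627370496
    print(2 ** 52)                               # 4503599627370496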

-- 
Grant Edwards   grant.b.edwardsYow! Why is it that when
  at   you DIE, you can't take
  gmail.comyour HOME ENTERTAINMENT
   CENTER with you??
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Rustom Mody
On Thursday, February 13, 2014 2:15:28 AM UTC+5:30, Ian wrote:
 On Wed, Feb 12, 2014 at 7:11 AM, Rustom Mody  wrote:
  On Wednesday, February 12, 2014 3:37:04 PM UTC+5:30, Ben Finney wrote:
  Chris Angelico writes:
   On Wed, Feb 12, 2014 at 7:56 PM, Ben Finney  wrote:
So, if I understand you right, you want to say that you've not found
a computer that works with the *complete* set of real numbers. Yes?
   Correct. [...] My point is that computers *do not* work with real
   numbers, but only ever with some subset thereof [...]
  You've done it again: by saying that computers *do not* work with real
  numbers, that if I find a real number - e.g. the number 4 - your
  position is that, since it's a real number, computers don't work with
  that number.
  There is a convention in logic called the implicit universal quantifier
  convention: when a bald unqualified reference is in a statement it means
  it is universally quantified. eg
  A triangle is a polygon with 3 sides
  really means
  ALL polygons with 3 sides are triangles ie the ALL is implied
  Now when for-all is inverted by de Morgan it becomes for-some not...
  So computers work with real numbers really means computers work with
  all real numbers and that is not true

 I take exception whenever I see somebody trying to use predicate logic
 to determine the meaning of an English sentence.

Ok See below.

 English does not follow the rules of predicate logic,

Agreed

 and English sentences do not map consistently to logical sentences.

Agreed

 To me, the meaning of computers do not work with X depends upon the
 domain of X.

Agreed

 Computers do not work with real numbers implies that
 computers do not work with the set of real numbers (but implies
 nothing about subsets).

How come?

 Computers do not work with keyboards on the
 other hand would imply that no computer works with any keyboard (which
 of course is demonstrably false).

The example is the other way. If one says:
"Computers have keyboards"
and then we have the demonstration of, say,
- a cloud server
- an Android phone

which are computers that have no keyboards, then that demonstrates that
"(ALL) computers have keyboards" is false

Two things therefore come into play here:
1. "All computers have keyboards" is falsified by predicate logic
2. Modelling the English "Computers have keyboards" to the above sentence
needs: grammar, context, good-sense, good-will and a lot of other
good (and soft) stuff.

tl;dr Predicate logic can help to gain some clarity about where
the implied but unstated quantifiers lie.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Working with the set of real numbers

2014-02-12 Thread Steven D'Aprano
On Wed, 12 Feb 2014 21:07:04 +1100, Ben Finney wrote:

 Chris Angelico ros...@gmail.com writes:
 
 On Wed, Feb 12, 2014 at 7:56 PM, Ben Finney
 ben+pyt...@benfinney.id.au wrote:
  So, if I understand you right, you want to say that you've not found
  a computer that works with the *complete* set of real numbers. Yes?

 Correct. […] My point is that computers *do not* work with real
 numbers, but only ever with some subset thereof […]
 
 You've done it again: by saying that “computers *do not* work with real
 numbers”, that if I find a real number – e.g. the number 4 – your
 position is that, since it's a real number, computers don't work with
 that number.

That answer relies on the assumption that "computers do not work with X" 
implies:

for each element x in X:
it is true that computers do not work with x

that is to say, a single counter-example of computers working with an 
element of X, even if it is a fluke, is enough to disprove the rule.

To give a real-world, non-programming example:

The former South African apartheid government did not respect the 
Universal Human Rights of blacks.

Under your strict interpretation, we would have to say that even a single 
example of the apartheid government respecting even a single human rights 
of a single black person would be sufficient to disprove the claim.

But there's another interpretation available to us, one which is more 
suited to natural language statements as made by Chris: we interpret 
"computers do not work with X" as meaning:

there is at least one element x, such that it is true that
computers do not work with x


In the case of real numbers, there is an *uncountably infinite* number of 
such elements x. In fact, we can pick any two distinct numbers, no matter 
how close together, say:

1
1.0001

and be sure that there are an uncountably infinite number of real numbers 
which computers do not work with between those two values.

For the record, "uncountably infinite" is not just me emphasising that 
infinity is too big to count. It's a technical term from mathematics. In 
a nutshell it means that not only are there too many elements to count, 
but even in an infinite amount of time you couldn't count them all, not 
even if you counted infinitely fast.

In fact, it isn't just that there are *specific* real numbers which 
computers cannot represent (say, irrationals like pi or e, really tiny 
numbers like 1/(googolplex**googolplex**googolplex), or really huge ones 
like Graham's Number), but that the fundamental mathematical laws of the 
reals are violated by computers.

For example, it is not true that for every number x, 1/(1/x) == x.

py> 1/(1/93.0) == 93.0
False


Nor is it always true that a*(b+c) equals a*b + a*c, or that a+b+c is 
necessarily equal to b+c+a.
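
For anyone who wants to see it, a couple of these failures are one-liners
at the prompt (CPython's IEEE-754 doubles; the values are only examples):

    print(1 / (1 / 93.0) == 93.0)      # False: reciprocal doesn't round-trip
    a, b, c = 0.1, 0.2, 0.3
    print((a + b) + c == a + (b + c))  # False: addition isn't associative
    print(a + b + c == b + c + a)      # False: reordering changes the sum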


So it isn't even that floats are merely a subset of reals. They're 
actually not reals at all, since the fundamental properties of real 
numbers do not always apply to floating point calculations.



-- 
Steven
-- 
https://mail.python.org/mailman/listinfo/python-list

