Re: Numeric root-finding in Python

2012-02-20 Thread David Monaghan
On Sun, 12 Feb 2012 10:20:17 -0500, inq1ltd inq1...@inqvista.com wrote:



I don't know the first thing about this math problem however,

if I were to code this I might try ;

   except ZeroDivisionError:
 assert w = -1

rather than;

   except ZeroDivisionError:
 assert w == -1

Why?

DM
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Numeric root-finding in Python

2012-02-20 Thread Mark Lawrence

On 21/02/2012 01:33, David Monaghan wrote:

On Sun, 12 Feb 2012 10:20:17 -0500, inq1ltd inq1...@inqvista.com wrote:




I don't know the first thing about this math problem however,

if I were to code this I might try ;

   except ZeroDivisionError:
 assert w = -1

rather than;

   except ZeroDivisionError:
 assert w == -1


Why?

DM


Why not is probably a better question, given that Dennis Lee Bieber and 
Dave Angel have already pointed out that this is not legal Python; it'll 
give a syntax error.
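For anyone who wants to see it for themselves, the compiler rejects the '=' form outright; a quick sketch (modern Python used for the demonstration):

```python
# 'assert' takes an expression, but '=' is assignment -- a statement --
# so 'assert w = -1' cannot even be compiled.
try:
    compile("assert w = -1", "<example>", "exec")
    print("compiled (unexpected)")
except SyntaxError:
    print("SyntaxError, as predicted")

# The '==' comparison form compiles without complaint.
compile("assert w == -1", "<example>", "exec")
```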


--
Cheers.

Mark Lawrence.



Re: Numeric root-finding in Python

2012-02-12 Thread Eelco
On Feb 12, 7:41 am, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:
 [snip: Steven's original post, quoted in full]

Looks like floating point issues to me, rather than something
intrinsic to the iterative algorithm. Surely there is no complex
chaotic behavior to be found in this fairly smooth function in a
+/- 1e-14 window. OTOH, there are a lot of floating-point
significant-bit-loss issues to be suspected in the kind of operations
you are performing (exp(x) + something is always a tricky one).

I would start by asking: how accurate is good enough? If it's not
good enough, play around with the ordering of your operations, try
solving a transformed problem less sensitive to loss of significance,
and begin by trying different numeric types to see if the problem is
sensitive to the type to begin with.
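Eelco's "exp(x) + something" warning is easy to demonstrate with nothing but the stdlib; this sketch reuses the example value from the math.expm1 documentation:

```python
import math

t = 1e-5

# Naive: exp(t) is a double very close to 1.0, so subtracting 1.0
# cancels away most of the significant digits.
naive = math.exp(t) - 1.0

# math.expm1 computes exp(t) - 1 without forming the cancelling sum.
accurate = math.expm1(t)

print(naive)     # only the leading digits are trustworthy
print(accurate)
```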


Re: Numeric root-finding in Python

2012-02-12 Thread 88888 Dihedral
On Sunday, 12 February 2012 14:41:20 UTC+8, Steven D'Aprano wrote:
 [snip: Steven's original post]

 a = w*ew - x

W*EXP(W) can converge for negative values of W

 b = ew*(w + 1)
b=exp(W)*W+W
 err = -a/b  # Estimate of the error in the current w.

What's X? Not explained.

 [snip: remainder of Steven's original post]

I suggest you use a Taylor series expansion to speed up w*exp(w)
for negative values of w.


Re: Numeric root-finding in Python

2012-02-12 Thread Mark Lawrence

On 12/02/2012 10:10, Eelco wrote:

On Feb 12, 7:41 am, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:

 [snip: Steven's original post and Eelco's reply]


HTH.

c:\Users\Mark\Python>type sda.py
import decimal
def improve(x, w, exp=decimal.Decimal.exp):
    """Use Halley's method to improve an estimate of W(x) given
    an initial estimate w.
    """
    try:
        for i in range(36):  # Max number of iterations.
            ew = exp(w)
            a = w*ew - x
            b = ew*(w + 1)
            err = -a/b  # Estimate of the error in the current w.
            if abs(err) <= 1e-16:
                break
            print '%d: w= %r err= %r' % (i, w, err)
            # Make a better estimate.
            c = (w + 2)*a/(2*w + 2)
            delta = a/(b - c)
            w -= delta
        else:
            raise RuntimeError('calculation failed to converge', err)
        print '%d: w= %r err= %r' % (i, w, err)
    except ZeroDivisionError:
        assert w == -1
    return w
improve(decimal.Decimal('-0.36'), decimal.Decimal('-1.222769842388856'))
improve(decimal.Decimal('-0.36787344117144249'),
        decimal.Decimal('-1.0057222396915309'))

c:\Users\Mark\Python>sda.py
0: w= Decimal('-1.222769842388856') err= Decimal('-2.915897982757542086414504607E-7')
1: w= Decimal('-1.222770133978505953034526059') err= Decimal('-1.084120148360381932277303211E-19')
0: w= Decimal('-1.0057222396915309') err= Decimal('5.744538819905061986438230561E-15')
1: w= Decimal('-1.005722239691525155461180092') err= Decimal('-0E+2')

--
Cheers.

Mark Lawrence.



Re: Numeric root-finding in Python

2012-02-12 Thread Robert Kern

On 2/12/12 6:41 AM, Steven D'Aprano wrote:

This is only peripherally a Python problem, but in case anyone has any
good ideas I'm going to ask it.

I have a routine to calculate an approximation of Lambert's W function,
and then apply a root-finding technique to improve the approximation.
This mostly works well, but sometimes the root-finder gets stuck in a
cycle.


I don't have any advice for fixing your code, per se, but I would just grab 
mpmath and use their lambertw function:


http://mpmath.googlecode.com/svn/trunk/doc/build/functions/powers.html#lambert-w-function

--
Robert Kern

I have come to believe that the whole world is an enigma, a harmless enigma
 that is made terrible by our own mad attempt to interpret it as though it had
 an underlying truth.
  -- Umberto Eco



Re: Numeric root-finding in Python

2012-02-12 Thread inq1ltd


I don't know the first thing about this math problem however,

if I were to code this I might try ;

   except ZeroDivisionError:
 assert w = -1

rather than;

   except ZeroDivisionError:
 assert w == -1


jimonlinux



On Sunday, February 12, 2012 06:41:20 AM Steven D'Aprano wrote:
 [snip: Steven's original post]


Re: Numeric root-finding in Python

2012-02-12 Thread Dave Angel

On 02/12/2012 10:20 AM, inq1ltd wrote:


I don't know the first thing about this math problem however,

if I were to code this I might try ;

except ZeroDivisionError:
  assert w = -1
You top-posted.   Please type your response after whatever you're 
quoting.  In my case, I only need a portion of what you said, and my 
remarks are following it.


assert takes an expression, so the one above is just wrong.  
Fortunately, Python would tell you with

  SyntaxError: invalid syntax





--

DaveA



Re: Numeric root-finding in Python

2012-02-12 Thread Mark Dickinson
On Feb 12, 6:41 am, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:

             err = -a/b  # Estimate of the error in the current w.
             if abs(err) <= 1e-16:
                 break

If the result you're expecting is around -1.005, this exit condition
is rather optimistic:  the difference between the two Python floats
either side of this value is already 2.22e-16, so you're asking for
less than half a ulp of error!

As to the rest;  your error estimate simply doesn't have enough
precision.  The main problem is in the computation of a, where you're
subtracting two almost identical values.  The absolute error incurred
in computing w*exp(w) is of the same order of magnitude as the
difference 'w*exp(w) - x' itself, so err has lost essentially all of
its significant bits, and is at best only a crude indicator of the
size of the error.  The solution would be to compute the quantities
'exp(w), w*exp(w), and w*exp(w) - x' all with extended precision.  For
the other quantities, there shouldn't be any major issues---after all,
you only need a few significant bits of 'delta' for it to be useful,
but with the subtraction that generates a, you don't even get those
few significant bits.
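Mark's 2.22e-16 figure can be reproduced with a few lines (a sketch; math.ulp would do this directly in Python 3.9+, so frexp is used here to stay version-neutral):

```python
import math

def spacing(x):
    """Gap between consecutive IEEE 754 doubles at magnitude |x|
    (for x not an exact power of two)."""
    m, e = math.frexp(x)    # x == m * 2**e, with 0.5 <= |m| < 1
    return 2.0 ** (e - 53)  # one unit in the last place (ulp)

# Floats on either side of the expected root are ~2.22e-16 apart,
# so an exit test of abs(err) <= 1e-16 demands sub-half-ulp accuracy.
print(spacing(-1.0057222396915309))
```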

 (The correct value for w is approximately -1.00572223991.)

Are you sure?  Wolfram Alpha gives me the following value for W(-1,
-0.36787344117144249455719773322925902903079986572265625):

-1.005722239691522978...

so it looks as though the values you're getting are at least
alternating around the exact value.


--
Mark


Re: Numeric root-finding in Python

2012-02-12 Thread Terry Reedy

On 2/12/2012 5:10 AM, Eelco wrote:

On Feb 12, 7:41 am, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:

 [snip: Steven's original post]



Looks like floating point issues to me, rather than something
intrinsic to the iterative algorithm. Surely there is no complex
chaotic behavior to be found in this fairly smooth function in a
+/- 1e-14 window. OTOH, there are a lot of floating-point
significant-bit-loss issues to be suspected in the kind of operations
you are performing (exp(x) + something is always a tricky one).


To investigate this, I would limit the iterations to 2 or 3 and print 
ew, a, b, c, and delta, maybe in binary (hex) form.
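A sketch of that suggestion using float.hex(), with the cycling inputs from the original post:

```python
import math

x = -0.36787344117144249
w = -1.0057222396915309

ew = math.exp(w)
a = w * ew - x          # the cancellation-prone difference
b = ew * (w + 1)

# float.hex() shows the exact bits of each intermediate, which makes
# it visible how few significant bits survive the subtraction in `a`.
for name, value in [('ew', ew), ('a', a), ('b', b)]:
    print(name, value.hex())
```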



I would start by asking: how accurate is good enough? If it's not
good enough, play around with the ordering of your operations, try
solving a transformed problem less sensitive to loss of significance,
and begin by trying different numeric types to see if the problem is
sensitive to the type to begin with.


--
Terry Jan Reedy



Re: Numeric root-finding in Python

2012-02-12 Thread Steven D'Aprano
On Sun, 12 Feb 2012 13:52:48 +, Robert Kern wrote:

 I don't have any advice for fixing your code, per se, but I would just
 grab mpmath and use their lambertw function:

That's no fun!

I'd never seen mpmath before; it looks like it is worth investigating. 
Nevertheless, I still intend working on my Lambert function, as it's a 
good learning exercise.

I did look into SciPy's lambert too, and was put off by this warning in 
the docs:

In some corner cases, lambertw might currently fail to converge

http://docs.scipy.org/doc/scipy/reference/generated/scipy.special.lambertw.html

Naturally I thought "I can do better than that". Looks like I can't :)



-- 
Steven


Re: Numeric root-finding in Python

2012-02-12 Thread Steven D'Aprano
On Sun, 12 Feb 2012 12:18:15 -0800, Mark Dickinson wrote:

 On Feb 12, 6:41 am, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:
 
             err = -a/b  # Estimate of the error in the current w.
             if abs(err) <= 1e-16:
                 break
 
 If the result you're expecting is around -1.005, this exit condition is
 rather optimistic:  the difference between the two Python floats either
 side of this value is already 2.22e-16, so you're asking for less than
 half a ulp of error!

I was gradually coming to the conclusion on my own that I was being 
overly optimistic with my error condition, although I couldn't put it 
into words *why*. Thanks for this Mark, this is exactly the sort of thing 
I need to learn -- as is obvious, I'm no expert on numeric programming.


 As to the rest;  your error estimate simply doesn't have enough
 precision.  The main problem is in the computation of a, where you're
 subtracting two almost identical values.  The absolute error incurred in
 computing w*exp(w) is of the same order of magnitude as the difference
 'w*exp(w) - x' itself, so err has lost essentially all of its
 significant bits, and is at best only a crude indicator of the size of
 the error.  The solution would be to compute the quantities 'exp(w),
 w*exp(w), and w*exp(w) - x' all with extended precision.

Other than using Decimal, there's no way to do that in pure Python, is 
there? We have floats (double) and that's it.
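For what it's worth, Decimal alone goes a long way here, since Decimal carries its own exp(); a sketch at 50 digits (the precision is an arbitrary choice):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # work with 50 significant digits throughout

x = Decimal('-0.36787344117144249')
w = Decimal('-1.0057222396915309')

ew = w.exp()        # Decimal exponential, evaluated at full precision
a = w * ew - x      # the difference that cancels catastrophically in floats
b = ew * (w + 1)
err = -a / b        # error estimate, now with digits to spare

print(err)
```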


 For the other
 quantities, there shouldn't be any major issues---after all, you only
 need a few significant bits of 'delta' for it to be useful, but with the
 subtraction that generates a, you don't even get those few significant
 bits.
 
 (The correct value for w is approximately -1.00572223991.)
 
 Are you sure?  Wolfram Alpha gives me the following value for W(-1,
 -0.36787344117144249455719773322925902903079986572265625):
 
 -1.005722239691522978...


I did say *approximately*. The figure I quote comes from my HP-48GX, and 
seems to be accurate to the precision offered by the HP.



-- 
Steven


Re: Numeric root-finding in Python

2012-02-12 Thread Dave Angel

On 02/12/2012 06:05 PM, Steven D'Aprano wrote:

On Sun, 12 Feb 2012 12:18:15 -0800, Mark Dickinson wrote:


On Feb 12, 6:41 am, Steven D'Aprano steve+comp.lang.pyt...@pearwood.info wrote:


 err = -a/b  # Estimate of the error in the current w.
 if abs(err) <= 1e-16:
     break

If the result you're expecting is around -1.005, this exit condition is
rather optimistic:  the difference between the two Python floats either
side of this value is already 2.22e-16, so you're asking for less than
half a ulp of error!

I was gradually coming to the conclusion on my own that I was being
overly optimistic with my error condition, although I couldn't put it
into words *why*. Thanks for this Mark, this is exactly the sort of thing
I need to learn -- as is obvious, I'm no expert on numeric programming.



Me neither.  But comments below.

As to the rest;  your error estimate simply doesn't have enough
precision.  The main problem is in the computation of a, where you're
subtracting two almost identical values. [SNIP]

[SNIP]

Two pieces of my history that come to mind.  40+ years ago I got a 
letter from a user of our computer stating that our math seemed to be 
imprecise in certain places.  He was very polite about it, and admitted 
to maybe needing a different algorithm.  The letter was so polite that I 
(as author of the math microcode) worked on his problem, and found the 
difficulty, as well as a solution.


The problem was figuring out the difference in a machining table between 
being level every place on its surface (in which case it would be 
slightly curved to match the earth), or being perfectly flat (in which 
case some parts of the table would be further from the earth's center 
than others)  The table was 200 feet long, and we were talking 
millionths of an inch.  He solved it three ways, and got three different 
answers.  The first two differed in the 3rd place, which he thought far 
too big an error, and the third answer was just about exactly half the 
others.


Well the 2:1 discrepancy just happens when you change your assumption of 
what part of the flat table is level.  If the center is level, then the 
edges are only 100 feet out, while if the edge is level, the other edge 
is 200 feet out.


But the other solution was very interesting.  Turns out he sketched a 
right triangle, with the narrow angle at the center of the earth, the 
side opposite being 200 feet.  He then calculated the difference between 
the other two sides: one 8000 miles, and the other 8000 miles plus a few 
microinches.  He got that distance by subtracting the sine from the 
tangent, or something similar to that.  I had microcoded both those 
functions, and was proud of their accuracy.  But if you subtract two 
13-digit numbers that only differ in the last 3, you only get 3 digits' 
worth of accuracy, best case.

Solution was to apply some similar triangles and some trivial 
approximations, and the problem turned out not to need trig at all, and 
was accurate to at least 12 places.  If I recall, it was something like: 
8000 miles is to 200 feet, as 200 feet is to X.  Cross-multiply and it's 
just arithmetic.
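Dave's machining-table anecdote doubles as a neat cancellation exercise; here's a sketch with made-up round numbers (4000-mile Earth radius, 200-foot table), not his actual figures:

```python
import math

R = 4000.0 * 5280.0    # Earth radius in feet (rough round figure)
L = 200.0              # table length in feet
t = L / R              # angle subtended at Earth's center, in radians

# Naive: hypotenuse minus adjacent side.  Both are ~2.1e7 feet and
# agree to about 10 digits, so the subtraction discards most of the
# precision of the trig routines.
naive = R / math.cos(t) - R

# Rewritten to avoid cancellation: R*(1 - cos t)/cos t, with
# 1 - cos t computed stably as 2*sin(t/2)**2.
stable = R * 2.0 * math.sin(t / 2.0) ** 2 / math.cos(t)

# Dave's similar-triangles shortcut: 8000 mi is to 200 ft as 200 ft
# is to X, i.e. X = L*L/diameter -- no trig at all.
shortcut = L * L / (2.0 * R)

print(naive, stable, shortcut)
```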



The other problem was even earlier.  It was high school physics, and the 
challenge was to experimentally determine the index of refraction of air 
to 5 places.  Problem is our measurements can't be that accurate.  So 
this is the same thing in reverse.  Find a way to measure the difference 
of the index of refraction of air and vacuum, to one or two places, and 
add that to 1.


Taken together with lots of other experience, I try to avoid committing 
an algorithm to code before thinking about errors, convergence, and 
exceptional conditions.  I've no experience with Lambert W, but I suspect 
it can be attacked similarly.






--

DaveA



Numeric root-finding in Python

2012-02-11 Thread Steven D'Aprano
This is only peripherally a Python problem, but in case anyone has any 
good ideas I'm going to ask it.

I have a routine to calculate an approximation of Lambert's W function, 
and then apply a root-finding technique to improve the approximation. 
This mostly works well, but sometimes the root-finder gets stuck in a 
cycle.

Here's my function:

import math
def improve(x, w, exp=math.exp):
    """Use Halley's method to improve an estimate of W(x) given
    an initial estimate w.
    """
    try:
        for i in range(36):  # Max number of iterations.
            ew = exp(w)
            a = w*ew - x
            b = ew*(w + 1)
            err = -a/b  # Estimate of the error in the current w.
            if abs(err) <= 1e-16:
                break
            print '%d: w= %r err= %r' % (i, w, err)
            # Make a better estimate.
            c = (w + 2)*a/(2*w + 2)
            delta = a/(b - c)
            w -= delta
        else:
            raise RuntimeError('calculation failed to converge', err)
    except ZeroDivisionError:
        assert w == -1
    return w


Here's an example where improve() converges very quickly:

py> improve(-0.36, -1.222769842388856)
0: w= -1.222769842388856 err= -2.9158979924038895e-07
1: w= -1.2227701339785069 err= 8.4638038491998997e-16
-1.222770133978506

That's what I expect: convergence in only a few iterations.

Here's an example where it gets stuck in a cycle, bouncing back and forth
between two values:

py> improve(-0.36787344117144249, -1.0057222396915309)
0: w= -1.0057222396915309 err= 2.6521238905750239e-14
1: w= -1.0057222396915044 err= -2.6521238905872001e-14
2: w= -1.0057222396915309 err= 2.6521238905750239e-14
3: w= -1.0057222396915044 err= -2.6521238905872001e-14
4: w= -1.0057222396915309 err= 2.6521238905750239e-14
5: w= -1.0057222396915044 err= -2.6521238905872001e-14
[...]
32: w= -1.0057222396915309 err= 2.6521238905750239e-14
33: w= -1.0057222396915044 err= -2.6521238905872001e-14
34: w= -1.0057222396915309 err= 2.6521238905750239e-14
35: w= -1.0057222396915044 err= -2.6521238905872001e-14
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 19, in improve
RuntimeError: ('calculation failed to converge', -2.6521238905872001e-14)

(The correct value for w is approximately -1.00572223991.)

I know that Newton's method is subject to cycles, but I haven't found any 
discussion about Halley's method and cycles, nor do I know what the best 
approach for breaking them would be. None of the papers on calculating
the Lambert W function that I have found mentions this.

Does anyone have any advice for solving this?



-- 
Steven