Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
"Are there any good indefinite (or much higher) precision packages" A simple search on "arbitrary precision arithmetic in R" would have immediately gotten you to the Rmpfr package. See also: https://cran.r-project.org/web/packages/Ryacas/vignettes/arbitrary-precision.html -- Bert On Fri, Aug 18, 2023 at 4:58 PM wrote: > This discussion is sooo familiar. > > If you want indefinite precision arithmetic, feel free to use a language > and data type that supports it. > > Otherwise, only do calculations that fit in a safe zone. > > This is not just about this scenario. Floating point can work well when > adding (or subtracting) two numbers of about the same size. But if one > number is .123456789... and another is the same except raised to the -45th > power of ten, then adding them effectively throws away the second number. > > This is a well-known problem for any finite binary representation of > numbers. In the example given, yes, the smaller the number is, the worse > the behavior in this case tends to be. > > There are many solutions and some are fairly expensive in terms of > computation time and sometimes memory usage. > > Are there any good indefinite (or much higher) precision packages out > there that would not only support the data type needed but also properly be > used and passed along to the functions used to do complex calculations? No, > I am not asking for indefinite precision complex numbers, but generally > that would be a tuple of such numbers. > > > -Original Message- > From: R-help On Behalf Of Bert Gunter > Sent: Friday, August 18, 2023 7:06 PM > To: Leonard Mada > Cc: R-help Mailing List ; Martin Maechler < > maech...@stat.math.ethz.ch> > Subject: Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2 > > "The ugly thing is that the error only gets worse as x decreases. The > value neither drops to 0, nor does it blow up to infinity; but it gets > worse in a continuous manner." > > If I understand you correctly, this is wrong: > > > x <- 2^(-20) ## considerably less then 1e-4 !! > > y <- 1 - x^2/2; > > 1/(1 - y) - 2/x^2 > [1] 0 > > It's all about the accuracy of the binary approximation of floating point > numbers (and their arithmetic) > > Cheers, > Bert > > > On Fri, Aug 18, 2023 at 3:25 PM Leonard Mada via R-help < > r-help@r-project.org> wrote: > > > I have added some clarifications below. > > > > On 8/18/2023 10:20 PM, Leonard Mada wrote: > > > [...] > > > After more careful thinking, I believe that it is a limitation due to > > > floating points: > > > [...] > > > > > > The problem really stems from the representation of 1 - x^2/2 as shown > > > below: > > > x = 1E-4 > > > print(1 - x^2/2, digits=20) > > > print(0.5, digits=20) # fails > > > # 0.50003039 > > > > The floating point representation of 1 - x^2/2 is the real culprit: > > # 0.50003039 > > > > The 3039 at the end is really an error due to the floating point > > representation. However, this error blows up when inverting the value: > > x = 1E-4; > > y = 1 - x^2/2; > > 1/(1 - y) - 2/x^2 > > # 1.215494 > > # should be 1/(x^2/2) - 2/x^2 = 0 > > > > > > The ugly thing is that the error only gets worse as x decreases. The > > value neither drops to 0, nor does it blow up to infinity; but it gets > > worse in a continuous manner. At least the reason has become now clear. 
> > > > > > > > > > Maybe some functions of type cos1p and cos1n would be handy for such > > > computations (to replace the manual series expansion): > > > cos1p(x) = 1 + cos(x) > > > cos1n(x) = 1 - cos(x) > > > Though, I do not have yet the big picture. > > > > > > > Sincerely, > > > > > > Leonard > > > > > > > > > > > On 8/17/2023 1:57 PM, Martin Maechler wrote: > > >>>>>>> Leonard Mada > > >>>>>>> on Wed, 16 Aug 2023 20:50:52 +0300 writes: > > >> > Dear Iris, > > >> > Dear Martin, > > >> > > >> > Thank you very much for your replies. I add a few comments. > > >> > > >> > 1.) Correct formula > > >> > The formula in the Subject Title was correct. A small glitch > > >> swept into > > >> > the last formula: > > >> > - 1/(cos(x) - 1) - 2/x^2 > > >> > or > > >> > 1/(1 - c
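For illustration, a minimal sketch of the Rmpfr route for the expression
from this thread (an added example, assuming the package is installed;
256 bits is an arbitrary choice):

library(Rmpfr)

## start from the double closest to 1e-4, but carry 256 bits
## through cos(), the subtraction and the divisions
x <- mpfr(1e-4, precBits = 256)
1/(1 - cos(x)) - 2/x^2
## ~ 0.16666666675 = 1/6 + x^2/120 + ..., the correct value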
Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
This discussion is sooo familiar.

If you want indefinite precision arithmetic, feel free to use a language
and data type that supports it. Otherwise, only do calculations that fit
in a safe zone.

This is not just about this scenario. Floating point can work well when
adding (or subtracting) two numbers of about the same size. But if one
number is .123456789... and another is the same except raised to the
-45th power of ten, then adding them effectively throws away the second
number.

This is a well-known problem for any finite binary representation of
numbers. In the example given, yes, the smaller the number is, the worse
the behavior tends to be.

There are many solutions, and some are fairly expensive in terms of
computation time and sometimes memory usage.

Are there any good indefinite (or much higher) precision packages out
there that would not only support the data type needed, but could also
properly be passed along to the functions that do the complex
calculations? No, I am not asking for indefinite-precision complex
numbers, but in general that would be a tuple of such numbers.

-----Original Message-----
From: R-help On Behalf Of Bert Gunter
Sent: Friday, August 18, 2023 7:06 PM
To: Leonard Mada
Cc: R-help Mailing List ; Martin Maechler <maech...@stat.math.ethz.ch>
Subject: Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2

"The ugly thing is that the error only gets worse as x decreases. The
value neither drops to 0, nor does it blow up to infinity; but it gets
worse in a continuous manner."

If I understand you correctly, this is wrong:

> x <- 2^(-20) ## considerably less than 1e-4 !!
> y <- 1 - x^2/2
> 1/(1 - y) - 2/x^2
[1] 0

It's all about the accuracy of the binary approximation of floating
point numbers (and their arithmetic).

Cheers,
Bert
[...]
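A one-line illustration of that absorption effect in double precision
(an added sketch, using the magnitudes from the message above):

x <- 0.123456789
y <- x * 1e-45
(x + y) == x  # TRUE: y is far below x * .Machine$double.eps, so it is lost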
Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
Dear Bert,

On 8/19/2023 2:47 AM, Bert Gunter wrote:
> "Values of type 2^(-n) (and its binary complement) are exactly
> represented as floating point numbers and do not generate the error.
> However, values away from such special x-values will generate errors:"
>
> That was exactly my point: The size of errors depends on the accuracy
> of binary representation of floating point numbers and their arithmetic.
>
> But you previously said:
> "The ugly thing is that the error only gets worse as x decreases. The
> value neither drops to 0, nor does it blow up to infinity; but it gets
> worse in a continuous manner."
>
> That is wrong and disagrees with what you say above.
>
> -- Bert

On "average", the error increases. But it does NOT increase monotonically:

x = 2^(-20) * 1.1 # is still relatively close to the exact value!
y <- 1 - x^2/2
1/(1 - y) - 2/x^2
# 58672303, not 0, nor close to 0;

Sincerely,

Leonard

> On Fri, Aug 18, 2023 at 4:34 PM Leonard Mada wrote:
> [...]
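A small sweep (an added sketch) makes the non-monotonic behaviour
visible; the exact numbers are platform-dependent, but the zeros at
powers of two are reproducible:

f <- function(x) { y <- 1 - x^2/2; 1/(1 - y) - 2/x^2 }
sapply(2^(-20) * c(1, 1.05, 1.1, 1.5, 2), f)
## 0 whenever 1 - x^2/2 happens to round exactly;
## large and erratic in between - so not monotone in x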
Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
"Values of type 2^(-n) (and its binary complement) are exactly represented as floating point numbers and do not generate the error. However, values away from such special x-values will generate errors:" That was exactly my point: The size of errors depends on the accuracy of binary representation of floating point numbers and their arithmetic. But you previously said: "The ugly thing is that the error only gets worse as x decreases. The value neither drops to 0, nor does it blow up to infinity; but it gets worse in a continuous manner." That is wrong and disagrees with what you say above. -- Bert On Fri, Aug 18, 2023 at 4:34 PM Leonard Mada wrote: > Dear Bert, > > > Values of type 2^(-n) (and its binary complement) are exactly represented > as floating point numbers and do not generate the error. However, values > away from such special x-values will generate errors: > > > # exactly represented: > x = 9.53674316406250e-07 > y <- 1 - x^2/2; > 1/(1 - y) - 2/x^2 > > # almost exact: > x = 9.536743164062502e-07 > y <- 1 - x^2/2; > 1/(1 - y) - 2/x^2 > > x = 9.536743164062498e-07 > y <- 1 - x^2/2; > 1/(1 - y) - 2/x^2 > > # the result behaves far better around values > # which can be represented exactly, > # but fails drastically for other values! > x = 2^(-20) * 1.1 > y <- 1 - x^2/2; > 1/(1 - y) - 2/x^2 > # 58672303 instead of 0! > > > Sincerely, > > > Leonard > > > On 8/19/2023 2:06 AM, Bert Gunter wrote: > > "The ugly thing is that the error only gets worse as x decreases. The > value neither drops to 0, nor does it blow up to infinity; but it gets > worse in a continuous manner." > > If I understand you correctly, this is wrong: > > > x <- 2^(-20) ## considerably less then 1e-4 !! > > y <- 1 - x^2/2; > > 1/(1 - y) - 2/x^2 > [1] 0 > > It's all about the accuracy of the binary approximation of floating point > numbers (and their arithmetic) > > Cheers, > Bert > > > On Fri, Aug 18, 2023 at 3:25 PM Leonard Mada via R-help < > r-help@r-project.org> wrote: > >> I have added some clarifications below. >> >> On 8/18/2023 10:20 PM, Leonard Mada wrote: >> > [...] >> > After more careful thinking, I believe that it is a limitation due to >> > floating points: >> > [...] >> > >> > The problem really stems from the representation of 1 - x^2/2 as shown >> > below: >> > x = 1E-4 >> > print(1 - x^2/2, digits=20) >> > print(0.5, digits=20) # fails >> > # 0.50003039 >> >> The floating point representation of 1 - x^2/2 is the real culprit: >> # 0.50003039 >> >> The 3039 at the end is really an error due to the floating point >> representation. However, this error blows up when inverting the value: >> x = 1E-4; >> y = 1 - x^2/2; >> 1/(1 - y) - 2/x^2 >> # 1.215494 >> # should be 1/(x^2/2) - 2/x^2 = 0 >> >> >> The ugly thing is that the error only gets worse as x decreases. The >> value neither drops to 0, nor does it blow up to infinity; but it gets >> worse in a continuous manner. At least the reason has become now clear. >> >> >> > >> > Maybe some functions of type cos1p and cos1n would be handy for such >> > computations (to replace the manual series expansion): >> > cos1p(x) = 1 + cos(x) >> > cos1n(x) = 1 - cos(x) >> > Though, I do not have yet the big picture. >> > >> >> Sincerely, >> >> >> Leonard >> >> > >> > >> > On 8/17/2023 1:57 PM, Martin Maechler wrote: >> >>> Leonard Mada >> >>> on Wed, 16 Aug 2023 20:50:52 +0300 writes: >> >> > Dear Iris, >> >> > Dear Martin, >> >> >> >> > Thank you very much for your replies. I add a few comments. >> >> >> >> > 1.) 
Correct formula >> >> > The formula in the Subject Title was correct. A small glitch >> >> swept into >> >> > the last formula: >> >> > - 1/(cos(x) - 1) - 2/x^2 >> >> > or >> >> > 1/(1 - cos(x)) - 2/x^2 # as in the subject title; >> >> >> >> > 2.) log1p >> >> > Actually, the log-part behaves much better. And when it fails, >> >> it fails >> >> > completely (which is easy to spot!). >> >> >> >> > x = 1E-6 >> >> > log(x) -log(1 - cos(x))/2 >> >> > # 0.3465291 >> >> >> >> > x = 1E-8 >> >> > log(x) -log(1 - cos(x))/2 >> >> > # Inf >> >> > log(x) - log1p(- cos(x))/2 >> >> > # Inf => fails as well! >> >> > # although using only log1p(cos(x)) seems to do the trick; >> >> > log1p(cos(x)); log(2)/2; >> >> >> >> > 3.) 1/(1 - cos(x)) - 2/x^2 >> >> > It is possible to convert the formula to one which is >> >> numerically more >> >> > stable. It is also possible to compute it manually, but it >> >> involves much >> >> > more work and is also error prone: >> >> >> >> > (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x))) >> >> > And applying L'Hospital: >> >> > (2*x - 2*sin(x)) / (2*x * (1 - cos(x)) + x^2*sin(x)) >> >> > # and a 2nd & 3rd & 4th time >> >> > 1/6 >> >> >> >> > The big problem was that I did not expect it to fail for x = >> >> 1E-4. I >> >> > thought it is more
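A back-of-the-envelope bound (an added sketch, not part of Bert's
message) gives the observed order of magnitude: rounding 1 - x^2/2
perturbs it by up to half the double spacing just below 1, and the
final subtraction scales that up by roughly 2/x^2:

x <- 1e-4
u <- .Machine$double.eps / 4   # max rounding error for results in [0.5, 1)
(u / (x^2/2)) * (2/x^2)        # bound for the error of 1/(1 - y) - 2/x^2
## ~ 2.2: the right order of magnitude for the observed 1.215494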
Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
Dear Bert,

Values of type 2^(-n) (and its binary complement) are exactly
represented as floating point numbers and do not generate the error.
However, values away from such special x-values will generate errors:

# exactly represented:
x = 9.53674316406250e-07
y <- 1 - x^2/2
1/(1 - y) - 2/x^2

# almost exact:
x = 9.536743164062502e-07
y <- 1 - x^2/2
1/(1 - y) - 2/x^2

x = 9.536743164062498e-07
y <- 1 - x^2/2
1/(1 - y) - 2/x^2

# the result behaves far better around values
# which can be represented exactly,
# but fails drastically for other values!
x = 2^(-20) * 1.1
y <- 1 - x^2/2
1/(1 - y) - 2/x^2
# 58672303 instead of 0!

Sincerely,

Leonard

On 8/19/2023 2:06 AM, Bert Gunter wrote:
> "The ugly thing is that the error only gets worse as x decreases. The
> value neither drops to 0, nor does it blow up to infinity; but it gets
> worse in a continuous manner."
>
> If I understand you correctly, this is wrong:
>
> > x <- 2^(-20) ## considerably less than 1e-4 !!
> > y <- 1 - x^2/2
> > 1/(1 - y) - 2/x^2
> [1] 0
> [...]
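Whether the subtraction rounded can be tested directly (an added
sketch; the outer 1 - (1 - x^2/2) is exact by Sterbenz' lemma, so it
recovers exactly what was stored):

x <- 2^(-20)
1 - (1 - x^2/2) == x^2/2   # TRUE : x^2/2 = 2^-41 fits into 1 - x^2/2
x <- 2^(-20) * 1.1
1 - (1 - x^2/2) == x^2/2   # FALSE: low-order bits of x^2/2 were rounded away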
Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
"The ugly thing is that the error only gets worse as x decreases. The value neither drops to 0, nor does it blow up to infinity; but it gets worse in a continuous manner." If I understand you correctly, this is wrong: > x <- 2^(-20) ## considerably less then 1e-4 !! > y <- 1 - x^2/2; > 1/(1 - y) - 2/x^2 [1] 0 It's all about the accuracy of the binary approximation of floating point numbers (and their arithmetic) Cheers, Bert On Fri, Aug 18, 2023 at 3:25 PM Leonard Mada via R-help < r-help@r-project.org> wrote: > I have added some clarifications below. > > On 8/18/2023 10:20 PM, Leonard Mada wrote: > > [...] > > After more careful thinking, I believe that it is a limitation due to > > floating points: > > [...] > > > > The problem really stems from the representation of 1 - x^2/2 as shown > > below: > > x = 1E-4 > > print(1 - x^2/2, digits=20) > > print(0.5, digits=20) # fails > > # 0.50003039 > > The floating point representation of 1 - x^2/2 is the real culprit: > # 0.50003039 > > The 3039 at the end is really an error due to the floating point > representation. However, this error blows up when inverting the value: > x = 1E-4; > y = 1 - x^2/2; > 1/(1 - y) - 2/x^2 > # 1.215494 > # should be 1/(x^2/2) - 2/x^2 = 0 > > > The ugly thing is that the error only gets worse as x decreases. The > value neither drops to 0, nor does it blow up to infinity; but it gets > worse in a continuous manner. At least the reason has become now clear. > > > > > > Maybe some functions of type cos1p and cos1n would be handy for such > > computations (to replace the manual series expansion): > > cos1p(x) = 1 + cos(x) > > cos1n(x) = 1 - cos(x) > > Though, I do not have yet the big picture. > > > > Sincerely, > > > Leonard > > > > > > > On 8/17/2023 1:57 PM, Martin Maechler wrote: > >>> Leonard Mada > >>> on Wed, 16 Aug 2023 20:50:52 +0300 writes: > >> > Dear Iris, > >> > Dear Martin, > >> > >> > Thank you very much for your replies. I add a few comments. > >> > >> > 1.) Correct formula > >> > The formula in the Subject Title was correct. A small glitch > >> swept into > >> > the last formula: > >> > - 1/(cos(x) - 1) - 2/x^2 > >> > or > >> > 1/(1 - cos(x)) - 2/x^2 # as in the subject title; > >> > >> > 2.) log1p > >> > Actually, the log-part behaves much better. And when it fails, > >> it fails > >> > completely (which is easy to spot!). > >> > >> > x = 1E-6 > >> > log(x) -log(1 - cos(x))/2 > >> > # 0.3465291 > >> > >> > x = 1E-8 > >> > log(x) -log(1 - cos(x))/2 > >> > # Inf > >> > log(x) - log1p(- cos(x))/2 > >> > # Inf => fails as well! > >> > # although using only log1p(cos(x)) seems to do the trick; > >> > log1p(cos(x)); log(2)/2; > >> > >> > 3.) 1/(1 - cos(x)) - 2/x^2 > >> > It is possible to convert the formula to one which is > >> numerically more > >> > stable. It is also possible to compute it manually, but it > >> involves much > >> > more work and is also error prone: > >> > >> > (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x))) > >> > And applying L'Hospital: > >> > (2*x - 2*sin(x)) / (2*x * (1 - cos(x)) + x^2*sin(x)) > >> > # and a 2nd & 3rd & 4th time > >> > 1/6 > >> > >> > The big problem was that I did not expect it to fail for x = > >> 1E-4. I > >> > thought it is more robust and works maybe until 1E-5. > >> > x = 1E-5 > >> > 2/x^2 - 2E+10 > >> > # -3.814697e-06 > >> > >> > This is the reason why I believe that there is room for > >> improvement. > >> > >> > Sincerely, > >> > Leonard > >> > >> Thank you, Leonard. 
> >> Yes, I agree that it is amazing how much your formula suffers from > >> (a generalization of) "cancellation" --- leading you to think > >> there was a problem with cos() or log() or .. in R. > >> But really R uses the system builtin libmath library, and the > >> problem is really the inherent instability of your formula. > >> > >> Indeed your first approximation was not really much more stable: > >> > >> ## 3.) 1/(1 - cos(x)) - 2/x^2 > >> ## It is possible to convert the formula to one which is numerically > >> more > >> ## stable. It is also possible to compute it manually, but it > >> involves much > >> ## more work and is also error prone: > >> ## (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x))) > >> ## MM: but actually, that approximation does not seem better (close > >> to the breakdown region): > >> f1 <- \(x) 1/(1 - cos(x)) - 2/x^2 > >> f2 <- \(x) (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x))) > >> curve(f1, 1e-8, 1e-1, log="xy" n=2^10) > >> curve(f2, add = TRUE, col=2, n=2^10) > >> ## Zoom in: > >> curve(f1, 1e-4, 1e-1, log="xy",n=2^9) > >> curve(f2, add = TRUE, col=2, n=2^9) > >> ## Zoom in much more in y-direction: > >> yl <- 1/6 + c(-5, 20)/10 > >> curve(f1, 1e-4, 1e-1, log="x", ylim=yl, n=2^9) > >> abline(h = 1/6, lty=3, col="gray") > >> curve(f2, add
Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
I have added some clarifications below.

On 8/18/2023 10:20 PM, Leonard Mada wrote:
> [...]
> After more careful thinking, I believe that it is a limitation due to
> floating points:
> [...]
>
> The problem really stems from the representation of 1 - x^2/2 as
> shown below:
> x = 1E-4
> print(1 - x^2/2, digits=20)
> print(0.5, digits=20) # fails
> # 0.50003039

The floating point representation of 1 - x^2/2 is the real culprit:
# 0.50003039

The 3039 at the end is really an error due to the floating-point
representation. However, this error blows up when inverting the value:
x = 1E-4
y = 1 - x^2/2
1/(1 - y) - 2/x^2
# 1.215494
# should be 1/(x^2/2) - 2/x^2 = 0

The ugly thing is that the error only gets worse as x decreases. The
value neither drops to 0, nor does it blow up to infinity; but it gets
worse in a continuous manner. At least the reason has now become clear.

> Maybe some functions of type cos1p and cos1n would be handy for such
> computations (to replace the manual series expansion):
>   cos1p(x) = 1 + cos(x)
>   cos1n(x) = 1 - cos(x)
> Though I do not yet have the big picture.
>
> Sincerely,
>
> Leonard
>
> On 8/17/2023 1:57 PM, Martin Maechler wrote:
> [...]
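A minimal sketch of such helpers (cos1p/cos1n are the names proposed
above, not existing R functions), using the half-angle identities so
that no cancelling subtraction is performed inside:

cos1n <- function(x) 2 * sin(x/2)^2   # == 1 - cos(x)
cos1p <- function(x) 2 * cos(x/2)^2   # == 1 + cos(x)

x <- 1e-4
1/cos1n(x) - 2/x^2
## ~ 1/6 instead of 1.215494 from the naive form; note the outer
## subtraction still cancels eventually (e.g. around x = 1e-8)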
Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
Dear Martin,

Thank you very much for your analysis. I add only a small comment:
- the purpose of the modified formula was to apply l'Hospital;
- there are other ways to transform the formula, although applying
l'Hospital once is probably more robust than simple transformations
(but the computations are also more tedious and error prone).

After more careful thinking, I believe that it is a limitation due to
floating points:

x = 1E-4
1/(-x^2/2 - x^4/24) + 2/x^2
# 1/6

y = 1 - x^2/2 - x^4/24
1/(cos(x) - 1) + 2/x^2
1/(y - 1) + 2/x^2
# -1.215494
# correct: 1/6

We need the 3rd series term for the correct computation of cos(x) in
this problem: but this is x^4/24, which for x = 1E-4 requires precision
down to 1E-16/24, or ~ 1E-18. I had not initially thought about that.
The trigonometric series skip every other term, and are therefore much
uglier than the log.

The problem really stems from the representation of 1 - x^2/2 as shown
below:
x = 1E-4
print(1 - x^2/2, digits=20)
print(0.5, digits=20) # fails
# 0.50003039

Maybe some functions of type cos1p and cos1n would be handy for such
computations (to replace the manual series expansion):
  cos1p(x) = 1 + cos(x)
  cos1n(x) = 1 - cos(x)
Though I do not yet have the big picture.

Sincerely,

Leonard


On 8/17/2023 1:57 PM, Martin Maechler wrote:
> [...]
> Thank you, Leonard.
> Yes, I agree that it is amazing how much your formula suffers from
> (a generalization of) "cancellation" --- leading you to think there
> was a problem with cos() or log() or ... in R. But really R uses the
> system builtin libmath library, and the problem is really the
> inherent instability of your formula.
> [...]
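That resolution argument can be checked directly (an added sketch):
just below 1 the spacing of doubles is 2^-53 ~ 1.1e-16, so the third
series term is dropped entirely:

x = 1E-4
x^4/24                               # ~ 4.2e-18, far below the spacing
(1 - x^2/2 - x^4/24) == (1 - x^2/2)  # TRUE: the x^4/24 term is lost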
Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
>>>>> Leonard Mada
>>>>>     on Wed, 16 Aug 2023 20:50:52 +0300 writes:

> Dear Iris,
> Dear Martin,

> Thank you very much for your replies. I add a few comments.
> [...]
> This is the reason why I believe that there is room for improvement.

> Sincerely,
> Leonard

Thank you, Leonard.
Yes, I agree that it is amazing how much your formula suffers from
(a generalization of) "cancellation" --- leading you to think there was
a problem with cos() or log() or ... in R.
But really R uses the system's builtin libmath library, and the problem
is really the inherent instability of your formula.

Indeed, your first approximation was not really much more stable:

## 3.) 1/(1 - cos(x)) - 2/x^2
## It is possible to convert the formula to one which is numerically
## more stable. It is also possible to compute it manually, but it
## involves much more work and is also error prone:
##     (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
## MM: but actually, that approximation does not seem better (close to
## the breakdown region):
f1 <- \(x) 1/(1 - cos(x)) - 2/x^2
f2 <- \(x) (x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
curve(f1, 1e-8, 1e-1, log="xy", n=2^10)
curve(f2, add = TRUE, col=2, n=2^10)
## Zoom in:
curve(f1, 1e-4, 1e-1, log="xy", n=2^9)
curve(f2, add = TRUE, col=2, n=2^9)
## Zoom in much more in y-direction:
yl <- 1/6 + c(-5, 20)/10
curve(f1, 1e-4, 1e-1, log="x", ylim=yl, n=2^9)
abline(h = 1/6, lty=3, col="gray")
curve(f2, add = TRUE, n=2^9, col=adjustcolor(2, 1/2))

Now, you can use the Rmpfr package (interface to the GNU MPFR
multiple-precision C library) to find out more:

if(!requireNamespace("Rmpfr")) install.packages("Rmpfr")
M <- function(x, precBits=128) Rmpfr::mpfr(x, precBits)
(xM <- M(1e-8)) # yes, only ~ 16 dig accurate
## 1.000000000000000020922560830128472675327e-8
M(10, 128)^-8 # would of course be more accurate,
## but we want the calculation for the double precision number 1e-8

## Now you can draw "the truth" into the above plots:
curve(f1, 1e-4, 1e-1, log="xy", n=2^9)
curve(f2, add = TRUE, col=2, n=2^9)
## correct:
curve(f1(M(x, 256)), add = TRUE, col=4, lwd=2, n=2^9)
abline(h = 1/6, lty=3, col="gray")

But indeed, we take note how much of this is the formula's instability:
MPFR, too, needs a lot of extra bits of precision before it gets to the
correct numbers:

xM <- c(M(1e-8, 80), M(1e-8, 96), M(1e-8, 112),
        M(1e-8, 128), M(1e-8, 180), M(1e-8, 256))
## and round back to 70 bits for display:
R <- \(x) Rmpfr::roundMpfr(x, 70)
R(f1(xM))
R(f2(xM))
## [1] 0 0 0.1666666666666665407439555097885670915
## [4] 0.1666666666666666746653133802175779 0.1666666666666666749979
##     0.1666666666666666750001

## 1. f1() is even worse than f2() {here at x=1e-8}
## 2. Indeed, even 96 bits precision is *not* sufficient at all, ...
##    which is amazing to me as well !!

Best regards,
Martin
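A rough cancellation count (an added heuristic, not part of Martin's
message) agrees with that empirical threshold: about log2(2/x^2) bits
are lost forming 1 - cos(x), and almost as many again in the final
subtraction:

x <- 1e-8
log2(2/x^2)            # ~ 54 bits lost in 1 - cos(x)
log2((2/x^2) / (1/6))  # ~ 57 bits lost in 1/(1 - cos(x)) - 2/x^2
## total ~ 111 bits, matching the failure at 96 bits and the
## (partial) recovery at 112 and 128 bits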
Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
Dear Iris,
Dear Martin,

Thank you very much for your replies. I add a few comments.

1.) Correct formula
The formula in the Subject Title was correct. A small glitch crept into
the last formula:
- 1/(cos(x) - 1) - 2/x^2
or
1/(1 - cos(x)) - 2/x^2 # as in the subject title;

2.) log1p
Actually, the log part behaves much better. And when it fails, it fails
completely (which is easy to spot!).

x = 1E-6
log(x) - log(1 - cos(x))/2
# 0.3465291

x = 1E-8
log(x) - log(1 - cos(x))/2
# Inf
log(x) - log1p(- cos(x))/2
# Inf => fails as well!
# although using only log1p(cos(x)) seems to do the trick;
log1p(cos(x)); log(2)/2;

3.) 1/(1 - cos(x)) - 2/x^2
It is possible to convert the formula to one which is numerically more
stable. It is also possible to compute it manually, but that involves
much more work and is also error prone:

(x^2 - 2 + 2*cos(x)) / (x^2 * (1 - cos(x)))
And applying L'Hospital:
(2*x - 2*sin(x)) / (2*x * (1 - cos(x)) + x^2*sin(x))
# and a 2nd & 3rd & 4th time
1/6

The big problem was that I did not expect it to fail for x = 1E-4. I
thought it was more robust and would work down to maybe 1E-5.
x = 1E-5
2/x^2 - 2E+10
# -3.814697e-06

This is the reason why I believe that there is room for improvement.

Sincerely,

Leonard


On 8/16/2023 9:51 AM, Iris Simmons wrote:
> You could rewrite
>
>     1 - cos(x)
>
> as
>
>     2 * sin(x/2)^2
>
> and that might give you more precision?
>
> On Wed, Aug 16, 2023, 01:50 Leonard Mada via R-help wrote:
> [...]
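The Inf at x = 1E-8 has a simple mechanism (an added sketch): x^2/2 =
5e-17 is below half the double spacing just below 1, so cos(x) rounds
to exactly 1 and 1 - cos(x) is exactly 0:

x = 1E-8
cos(x) == 1       # TRUE: 1 - 5e-17 rounds back to 1
log(1 - cos(x))   # log(0) = -Inf, hence the Inf results above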
Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
>>>>> Iris Simmons
>>>>>     on Wed, 16 Aug 2023 02:57:48 -0400 writes:

> You might also be able to rewrite
>     log(1 - cos(x))
> as
>     log1p(-cos(x))

> On Wed, Aug 16, 2023, 02:51 Iris Simmons wrote:
>> You could rewrite
>>     1 - cos(x)
>> as
>>     2 * sin(x/2)^2
>> and that might give you more precision?
>> [...]

Thank you, Iris; yes, both suggestions are excellent *and* necessary.

Note that this is part of "numerical analysis 101" and is called
"cancellation": direct evaluation of 1 - cos(x) for small 'x' *cannot*
ever be numerically accurate; it suffers from cancellation.

log(1+x) for small x is slightly more subtle than pure cancellation,
but it is for exactly the same reason that we introduced log1p() {and
expm1(); then even cospi(), sinpi(), etc.} into R, even before it was
widely adopted in C math libraries.

Martin
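The standard demonstration of that reason (an added one-liner with an
arbitrary small x):

x <- 1e-10
log(1 + x)   # inaccurate: 1 + x is rounded before log() ever sees it
log1p(x)     # ~ 1e-10, accurate to full double precision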
Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
You might also be able to rewrite

    log(1 - cos(x))

as

    log1p(-cos(x))

On Wed, Aug 16, 2023, 02:51 Iris Simmons wrote:
> You could rewrite
>
>     1 - cos(x)
>
> as
>
>     2 * sin(x/2)^2
>
> and that might give you more precision?
>
> On Wed, Aug 16, 2023, 01:50 Leonard Mada via R-help wrote:
> [...]
Re: [R] Numerical stability of: 1/(1 - cos(x)) - 2/x^2
You could rewrite

    1 - cos(x)

as

    2 * sin(x/2)^2

and that might give you more precision?

On Wed, Aug 16, 2023, 01:50 Leonard Mada via R-help wrote:
> Dear R-Users,
>
> I tried to compute the following limit:
> x = 1E-3;
> (-log(1 - cos(x)) - 1/(cos(x)-1)) / 2 - 1/(x^2) + log(x)
> # 0.4299226
> log(2)/2 + 1/12
> # 0.4299069
>
> However, the result diverges as x decreases:
> x = 1E-4
> (-log(1 - cos(x)) - 1/(cos(x)-1)) / 2 - 1/(x^2) + log(x)
> # 0.9543207
> # correct: 0.4299069
>
> I expected the precision to remain good with x = 1E-4 or x = 1E-5.
>
> This part blows up - probably some significant loss of precision in
> cos(x) as x -> 0:
> 1/(cos(x) - 1) - 2/x^2
>
> Maybe there is some room for improvement.
>
> Sincerely,
>
> Leonard
>
> ==
> The limit was part of the integral:
> up = pi/5;
> integrate(function(x) 1 / sin(x)^3 - 1/x^3 - 1/(2*x), 0, up)
> (log( (1 - cos(up)) / (1 + cos(up)) ) +
>     + 1/(cos(up) - 1) + 1/(cos(up) + 1) + 2*log(2) - 1/3) / 4 +
>     + (1/(2*up^2) - log(up)/2);
>
> # see:
> https://github.com/discoleo/R/blob/master/Math/Integrals.Trig.Fractions.Poly.R
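Putting the suggestions from this thread together, a sketch of a stable
evaluation across the whole range: the half-angle rewrite where it is
safe, and the Taylor series of the limit, 1/6 + x^2/120 + x^4/3024 + ...,
for small x (the cutoff 1e-2 and the name f_stable are ad-hoc choices):

f_stable <- function(x) {
  ifelse(abs(x) < 1e-2,
         1/6 + x^2/120 + x^4/3024,   # series of 1/(1 - cos(x)) - 2/x^2
         1/(2*sin(x/2)^2) - 2/x^2)   # uses 1 - cos(x) = 2*sin(x/2)^2
}
f_stable(c(1e-8, 1e-4, 0.5))
## all close to 1/6, growing smoothly with x; no blow-up at small x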