On Sat, 2010-07-03 at 17:27 -0700, Roger Deangelis wrote:
Hi Bernado,
In many financial applications, if you convert the dollars and cents to
pennies (i.e. $1.10 to 110) and divide by 100 at the very end, you can
maintain higher precision. This applies primarily to sums. This is similar
to keeping track of the decimal fractions which have exact representations …
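A minimal R sketch of the penny trick (the amounts here are made up for illustration): repeated addition of 0.10 drifts because 0.1 has no exact binary representation, while integer pennies stay exact until the single division at the end.

```r
# Adding dollar amounts as doubles accumulates representation error:
sum(rep(0.10, 3)) == 0.30        # FALSE on IEEE-754 doubles

# Keep the amounts in integer pennies and divide by 100 once at the end:
sum(rep(10L, 3)) / 100 == 0.30   # TRUE: 30/100 is correctly rounded in one step
```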
On Fri, 2010-07-02 at 21:23 -0700, Roger Deangelis wrote:
Although it does not apply to your series and is impractical, it seems to
me that the most accurate algorithm might be to add all the rational numbers
whose sum and components can be represented without error in binary first,
i.e. 2.5 + .5 or 1/16 + 1/16 + 1/8.
You could also get very clever …
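The examples above are exact because every term and every partial sum is a dyadic rational (a multiple of a power of two); ordinary decimal fractions are not, as a quick check shows:

```r
2.5 + 0.5 == 3              # TRUE: both terms and the sum are exact in binary
1/16 + 1/16 + 1/8 == 0.25   # TRUE: all powers of two, no rounding anywhere
0.1 + 0.2 == 0.3            # FALSE: none of these has an exact binary form
```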
On 24/06/2010 4:57 PM, Peter Langfelder wrote:
> On Thu, Jun 24, 2010 at 1:50 PM, Duncan Murdoch wrote:
>> On 24/06/2010 4:39 PM, Peter Langfelder wrote:
>>> AFAIK the optimal way of summing a large number of positive numbers is
>>> to always add the two smallest numbers …
>> Isn't that what I …
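A naive R sketch of the "always add the two smallest numbers" strategy under discussion (quadratic time for clarity; a real implementation would use a priority queue so each step is logarithmic):

```r
# Repeatedly replace the two smallest values with their sum, so that small
# terms combine with each other before they meet a large running total.
sum_smallest_first <- function(x) {
  while (length(x) > 1) {
    x <- sort(x)                     # smallest two values at the front
    x <- c(x[1] + x[2], x[-(1:2)])   # fold them into a single value
  }
  x
}

sum_smallest_first(1/(1:100))  # harmonic partial sum, smallest terms first
```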
On Thu, Jun 24, 2010 at 4:08 PM, Lasse Kliemann wrote:
> What is the best way in R to compute a sum while avoiding
> cancellation effects?
See ?sum.exact in the caTools package.
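sum.exact sums without round-off error by maintaining a list of non-overlapping partial sums. A slow plain-R sketch of that idea (Shewchuk-style partials, the same scheme behind Python's math.fsum; the function name and details here are illustrative, not caTools' actual code — see the package help for its real interface):

```r
fsum <- function(x) {
  partials <- numeric(0)             # non-overlapping partials, growing magnitude
  for (v in x) {
    n <- 0
    for (p in partials) {
      if (abs(v) < abs(p)) { tmp <- v; v <- p; p <- tmp }  # ensure |v| >= |p|
      hi <- v + p                    # rounded sum
      lo <- p - (hi - v)             # exact rounding error (Fast2Sum)
      if (lo != 0) { n <- n + 1; partials[n] <- lo }
      v <- hi
    }
    partials <- c(partials[seq_len(n)], v)
  }
  sum(partials)  # partials don't overlap, so this final pass is accurate
}

fsum(c(1, 1e100, 1, -1e100))  # 2, where naive accumulation gives 0
```

The invariant is that the partials always represent the running sum exactly; only the final collapse can round.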
__
R-help@r-project.org mailing list
https://stat.ethz.ch/mailman/listin
On Thu, Jun 24, 2010 at 1:50 PM, Duncan Murdoch wrote:
> On 24/06/2010 4:39 PM, Peter Langfelder wrote:
>> AFAIK the optimal way of summing a large number of positive numbers is
>> to always add the two smallest numbers
> Isn't that what I said?
I understood that you suggested to linearly sum …
On 24/06/2010 4:39 PM, Peter Langfelder wrote:
> On Thu, Jun 24, 2010 at 1:26 PM, Duncan Murdoch wrote:
>> On 24/06/2010 4:08 PM, Lasse Kliemann wrote:
>>> What is the best way in R to compute a sum while avoiding cancellation
>>> effects?
>> Use sum(). If it's not good enough, then do …
On Jun 24, 2010, at 4:08 PM, Lasse Kliemann wrote:
> a <- 0 ; for(i in (1:2)) a <- a + 1/i
> b <- 0 ; for(i in (2:1)) b <- b + 1/i
> c <- sum(1/(1:2))
> d <- sum(1/(2:1))
> order(c(a,b,c,d))
[1] 1 2 4 3
> b …
[1] TRUE
> c==d
[1] FALSE
I'd expected b being the largest, since we sum up the smallest
numbers first.
On Thu, Jun 24, 2010 at 1:26 PM, Duncan Murdoch wrote:
> On 24/06/2010 4:08 PM, Lasse Kliemann wrote:
>> What is the best way in R to compute a sum while avoiding cancellation
>> effects?
> Use sum(). If it's not good enough, then do it in C, accumulating in
> extended precision (which is w…
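Short of dropping to C, much of the benefit of a wider accumulator can be had in plain R with Kahan (compensated) summation, which carries the rounding error of each addition into the next step. This is a generic textbook sketch, not the code Duncan is describing; note also that R's built-in sum() already accumulates in extended precision on most platforms, as he says:

```r
kahan_sum <- function(x) {
  s <- 0      # running sum
  comp <- 0   # compensation: error not yet absorbed into s
  for (v in x) {
    y <- v - comp        # correct the incoming term by the carried error
    t <- s + y           # big + small: low-order bits of y are lost here...
    comp <- (t - s) - y  # ...and recovered exactly here
    s <- t
  }
  s
}

kahan_sum(c(1e16, 1, 1, -1e16))  # 2; a plain double-precision running sum loses both 1s
```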
On 24/06/2010 4:08 PM, Lasse Kliemann wrote:
> a <- 0 ; for(i in (1:2)) a <- a + 1/i
> b <- 0 ; for(i in (2:1)) b <- b + 1/i
> c <- sum(1/(1:2))
> d <- sum(1/(2:1))
> order(c(a,b,c,d))
[1] 1 2 4 3
> c==d
[1] FALSE
I'd expected b being the largest, since we sum up the smallest
numbers first.