Re: [Rd] Modification-proposal for %% (modulo) when supplied with double
Duncan, I think Emil realizes that the floating point format isn't able to represent certain numbers; that's why he is suggesting this change rather than complaining about our arithmetic being broken. However, I agree with you that we should not adopt his proposal. It would not make things more "user friendly" for people. Everyone has a different application and a different use of %%, and they just need to keep in mind that they are talking to a computer and not a blackboard.

Here is an example of a feature that was meant to help users get more intuitive results with floating point numbers, but which actually caused headaches instead:

https://github.com/Rdatatable/data.table/issues/1642

It is a slightly different scenario to this one, but I think it is still a good example of how we can end up creating unforeseen problems for people if we change core functionality to do unsolicited rounding behind the scenes.

Best wishes,
Frederick

On Tue, Sep 11, 2018 at 12:11:29PM -0400, Duncan Murdoch wrote:
> On 11/09/2018 11:23 AM, Emil Bode wrote:
> > Hi all,
> >
> > Could we modify the "%%" (modulo) operator to include some tolerance for
> > rounding errors when supplied with doubles?
> >
> > It's not much work (patch supplied at the bottom), and I don't think it
> > would break anything, unless you were really interested in analysing
> > rounding differences.
> >
> > Any ideas about implementing this and overriding base::`%%`, or would we
> > want another method (as I've done for the moment)?
>
> I think this is a bad idea. Your comments say "The
> \code{\link[base:Arithmetic]{`\%\%`}} operator calculates the modulo, but
> sometimes has rounding errors, e.g. "\code{(9.1/.1) \%\% 1}" gives ~ 1,
> instead of 0."
>
> This is false. The %% calculation is exactly correct. The rounding error
> happened in your input: 9.1/0.1 is not equal to 91, it is a little bit
> less:
>
> > options(digits=20)
> > 9.1/.1
> [1] 90.999999999999985789
>
> And %% did not return 1, it returned the correct value:
>
> > (9.1/.1) %% 1
> [1] 0.99999999999998578915
>
> So it makes no sense to change %%.
>
> You might argue that the division 9.1/.1 is giving the wrong answer, but in
> fact that answer is correct too. The real problem is that in double
> precision floating point the numbers 9.1 and .1 can't be represented
> exactly. This is well known; it's in the FAQ (question 7.31).
>
> Duncan Murdoch
>
> > Background
> >
> > I was writing some code where something has to happen at a certain
> > interval, with progress indicated, something like this:
> >
> > interval <- .001
> > progress <- .1
> > for(i in 1:1000*interval) {myFun(i); Sys.sleep(interval);
> >   if(isTRUE(all.equal(i %% progress, 0))) cat(i, '\n')}
> >
> > without interval and progress being known in advance. I could work around
> > it and make i integer, or do something like
> >
> > isTRUE(all.equal(i %% progress, 0)) || isTRUE(all.equal(i %% progress, progress))
> >
> > but I think my code is clearer as it is. And I like the idea behind
> > all.equal: we want doubles to be approximately identical.
> >
> > So my patch (with roxygen2 markup):
> >
> > #' Modulo operator with near-equality
> > #'
> > #' The \code{\link[base:Arithmetic]{`\%\%`}} operator calculates the
> > #' modulo, but sometimes has rounding errors, e.g. "\code{(9.1/.1) \%\% 1}"
> > #' gives ~ 1, instead of 0.\cr
> > #' Comparable to what all.equal does, this operator has some tolerance for
> > #' small rounding errors.\cr
> > #' If the answer would be equal to the divisor within a small tolerance, 0
> > #' is returned instead.
> > #'
> > #' For integer x and y, the normal \%\% operator is used.
> > #'
> > #' @usage `\%mod\%`(x, y, tolerance = sqrt(.Machine$double.eps))
> > #' x \%mod\% y
> > #' @param x,y numeric vectors, similar to those passed on to \%\%
> > #' @param tolerance numeric, maximum difference, see
> > #'   \code{\link[base]{all.equal}}. The default is ~ \code{1.5e-8}
> > #' @return identical to the result for \%\%, unless the answer would be
> > #'   really close to y, in which case 0 is returned
> > #' @note To specify tolerance, use the call \code{`\%mod\%`(x, y, tolerance)}
> > #' @note The precedence for \code{\%mod\%} is the same as that for \code{\%\%}
> > #'
> > #' @name mod
> > #' @rdname mod
> > #'
> > #' @export
> > `%mod%` <- function(x, y, tolerance = sqrt(.Machine$double.eps)) {
> >   stopifnot(is.numeric(x), is.numeric(y), is.numeric(tolerance),
> >             !is.na(tolerance), length(tolerance)==1, tolerance>=0)
> >   if(is.integer(x) && is.integer(y)) {
> >     return(x %% y)
> >   } else {
> >     ans <- x %% y
> >     return(ifelse(abs(ans - y) < tolerance, 0, ans))
> >   }
> > }
> >
> > Best regards,
> > Emil Bode
> >
> > [[alternative HTML version deleted]]
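[Editor's note: for readers following along, the quoted patch can be exercised as below. This is a sketch reconstructed from the quoted message (the `<` in the `ifelse` condition was eaten by the HTML archive); `%mod%` is Emil's proposed operator, not part of base R.]

```r
## Minimal version of the proposed tolerant modulo (assumption:
## reconstructed from the quoted patch, input checks omitted).
`%mod%` <- function(x, y, tolerance = sqrt(.Machine$double.eps)) {
  ans <- x %% y
  # if the remainder is within tolerance of the divisor, snap it to 0
  ifelse(abs(ans - y) < tolerance, 0, ans)
}

(9.1/.1) %% 1     # ~0.9999999999999858, the exact result Duncan describes
(9.1/.1) %mod% 1  # 0, the "intuitive" result Emil wants
```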
[Rd] var() with 0-length vector -- docs inconsistent with result
R 3.5.1 on Windows 7

The documentation for 'var' says:

"These functions return 'NA' when there is only one observation (whereas S-PLUS has been returning 'NaN'), and fail if 'x' has length zero."

The function 'sd' (based on 'var') has similar documentation. However, I get:

> var(numeric(0))
[1] NA

rather than an error. Personally I prefer that basic summary functions like 'var' not throw errors even in corner cases. But either way, the result and the docs are inconsistent.

Richard Raubertas

> sessionInfo()
R version 3.5.1 (2018-07-02)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1

Matrix products: default

locale:
[1] LC_COLLATE=English_United States.1252
[2] LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C
[5] LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics grDevices utils datasets methods base

loaded via a namespace (and not attached):
[1] compiler_3.5.1 tools_3.5.1

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel
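[Editor's note: the inconsistency Richard reports can be reproduced as below. This is a sketch of the behavior as of R 3.5.x per the message; verify against your own R version.]

```r
## var() edge cases: the docs promise an error for length-0 input,
## but in practice NA is returned.
var(c(5, 7))     # 2, ordinary sample variance of two observations
var(5)           # NA, as documented for a single observation
var(numeric(0))  # NA in practice, although the docs say this should fail
```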
Re: [Rd] Modification-proposal for %% (modulo) when supplied with double
On 11/09/2018 11:23 AM, Emil Bode wrote:
> Hi all,
>
> Could we modify the "%%" (modulo) operator to include some tolerance for
> rounding errors when supplied with doubles?
>
> It's not much work (patch supplied at the bottom), and I don't think it
> would break anything, unless you were really interested in analysing
> rounding differences.
>
> Any ideas about implementing this and overriding base::`%%`, or would we
> want another method (as I've done for the moment)?

I think this is a bad idea. Your comments say "The
\code{\link[base:Arithmetic]{`\%\%`}} operator calculates the modulo, but
sometimes has rounding errors, e.g. "\code{(9.1/.1) \%\% 1}" gives ~ 1,
instead of 0."

This is false. The %% calculation is exactly correct. The rounding error
happened in your input: 9.1/0.1 is not equal to 91, it is a little bit less:

> options(digits=20)
> 9.1/.1
[1] 90.999999999999985789

And %% did not return 1, it returned the correct value:

> (9.1/.1) %% 1
[1] 0.99999999999998578915

So it makes no sense to change %%.

You might argue that the division 9.1/.1 is giving the wrong answer, but in
fact that answer is correct too. The real problem is that in double
precision floating point the numbers 9.1 and .1 can't be represented
exactly. This is well known; it's in the FAQ (question 7.31).

Duncan Murdoch

> Background
>
> I was writing some code where something has to happen at a certain
> interval, with progress indicated, something like this:
>
> interval <- .001
> progress <- .1
> for(i in 1:1000*interval) {myFun(i); Sys.sleep(interval);
>   if(isTRUE(all.equal(i %% progress, 0))) cat(i, '\n')}
>
> without interval and progress being known in advance. I could work around
> it and make i integer, or do something like
>
> isTRUE(all.equal(i %% progress, 0)) || isTRUE(all.equal(i %% progress, progress))
>
> but I think my code is clearer as it is. And I like the idea behind
> all.equal: we want doubles to be approximately identical.
>
> So my patch (with roxygen2 markup):
>
> #' Modulo operator with near-equality
> #'
> #' The \code{\link[base:Arithmetic]{`\%\%`}} operator calculates the
> #' modulo, but sometimes has rounding errors, e.g. "\code{(9.1/.1) \%\% 1}"
> #' gives ~ 1, instead of 0.\cr
> #' Comparable to what all.equal does, this operator has some tolerance for
> #' small rounding errors.\cr
> #' If the answer would be equal to the divisor within a small tolerance, 0
> #' is returned instead.
> #'
> #' For integer x and y, the normal \%\% operator is used.
> #'
> #' @usage `\%mod\%`(x, y, tolerance = sqrt(.Machine$double.eps))
> #' x \%mod\% y
> #' @param x,y numeric vectors, similar to those passed on to \%\%
> #' @param tolerance numeric, maximum difference, see
> #'   \code{\link[base]{all.equal}}. The default is ~ \code{1.5e-8}
> #' @return identical to the result for \%\%, unless the answer would be
> #'   really close to y, in which case 0 is returned
> #' @note To specify tolerance, use the call \code{`\%mod\%`(x, y, tolerance)}
> #' @note The precedence for \code{\%mod\%} is the same as that for \code{\%\%}
> #'
> #' @name mod
> #' @rdname mod
> #'
> #' @export
> `%mod%` <- function(x, y, tolerance = sqrt(.Machine$double.eps)) {
>   stopifnot(is.numeric(x), is.numeric(y), is.numeric(tolerance),
>             !is.na(tolerance), length(tolerance)==1, tolerance>=0)
>   if(is.integer(x) && is.integer(y)) {
>     return(x %% y)
>   } else {
>     ans <- x %% y
>     return(ifelse(abs(ans - y) < tolerance, 0, ans))
>   }
> }
>
> Best regards,
> Emil Bode
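[Editor's note: Duncan's central point, that 9.1 and 0.1 have no exact double-precision representation and so the division itself produces the "surprising" input to %%, can be seen directly by printing the stored values with more digits.]

```r
## The decimal literals 0.1 and 9.1 are stored as the nearest binary64
## values, which are not exactly 0.1 and 9.1.
sprintf("%.20f", 0.1)  # "0.10000000000000000555"
sprintf("%.20f", 9.1)  # "9.09999999999999964473"

## Consequently the quotient is a little less than 91, and %% correctly
## reports a remainder just below 1 rather than 0.
9.1/.1 == 91           # FALSE
```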
[Rd] Modification-proposal for %% (modulo) when supplied with double
Hi all,

Could we modify the "%%" (modulo) operator to include some tolerance for rounding errors when supplied with doubles?

It's not much work (patch supplied at the bottom), and I don't think it would break anything, unless you were really interested in analysing rounding differences.

Any ideas about implementing this and overriding base::`%%`, or would we want another method (as I've done for the moment)?

Background

I was writing some code where something has to happen at a certain interval, with progress indicated, something like this:

interval <- .001
progress <- .1
for(i in 1:1000*interval) {myFun(i); Sys.sleep(interval);
  if(isTRUE(all.equal(i %% progress, 0))) cat(i, '\n')}

without interval and progress being known in advance. I could work around it and make i integer, or do something like

isTRUE(all.equal(i %% progress, 0)) || isTRUE(all.equal(i %% progress, progress))

but I think my code is clearer as it is. And I like the idea behind all.equal: we want doubles to be approximately identical.

So my patch (with roxygen2 markup):

#' Modulo operator with near-equality
#'
#' The \code{\link[base:Arithmetic]{`\%\%`}} operator calculates the
#' modulo, but sometimes has rounding errors, e.g. "\code{(9.1/.1) \%\% 1}"
#' gives ~ 1, instead of 0.\cr
#' Comparable to what all.equal does, this operator has some tolerance for
#' small rounding errors.\cr
#' If the answer would be equal to the divisor within a small tolerance, 0
#' is returned instead.
#'
#' For integer x and y, the normal \%\% operator is used.
#'
#' @usage `\%mod\%`(x, y, tolerance = sqrt(.Machine$double.eps))
#' x \%mod\% y
#' @param x,y numeric vectors, similar to those passed on to \%\%
#' @param tolerance numeric, maximum difference, see
#'   \code{\link[base]{all.equal}}. The default is ~ \code{1.5e-8}
#' @return identical to the result for \%\%, unless the answer would be
#'   really close to y, in which case 0 is returned
#' @note To specify tolerance, use the call \code{`\%mod\%`(x, y, tolerance)}
#' @note The precedence for \code{\%mod\%} is the same as that for \code{\%\%}
#'
#' @name mod
#' @rdname mod
#'
#' @export
`%mod%` <- function(x, y, tolerance = sqrt(.Machine$double.eps)) {
  stopifnot(is.numeric(x), is.numeric(y), is.numeric(tolerance),
            !is.na(tolerance), length(tolerance)==1, tolerance>=0)
  if(is.integer(x) && is.integer(y)) {
    return(x %% y)
  } else {
    ans <- x %% y
    return(ifelse(abs(ans - y) < tolerance, 0, ans))
  }
}

Best regards,
Emil Bode
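[Editor's note: the workaround Emil mentions, using base all.equal for both the "remainder near 0" and "remainder near the divisor" cases, can be sketched as below. `hits` is a hypothetical name standing in for Emil's `myFun(i)` side effect; the `Sys.sleep` is dropped so the sketch runs instantly.]

```r
## Tolerant multiple-of-progress check using base all.equal.
## A floating-point remainder lands either near 0 or near the divisor,
## so both cases must be tested.
interval <- .001
progress <- .1
hits <- c()
for (i in 1:1000 * interval) {
  if (isTRUE(all.equal(i %% progress, 0)) ||
      isTRUE(all.equal(i %% progress, progress))) {
    hits <- c(hits, i)
  }
}
print(hits)  # fires only at (approximate) multiples of 0.1
```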