First, let me say what a great (and I truly mean great) pair of tools inline and Rcpp have been, along with R 2.15-devel and the related Rtools (now no longer -devel, so congratulations on that!).
I'm doing a massive amount of computation with the hypergeometric distribution. My original R code took about 86 seconds to process the first 100 tests. After using Rcpp and inline to prototype a better solution, I was able to create a package fairly easily using the skeleton function (on Windows, even), then used parallel with StarCluster on Amazon AWS to process just shy of 400,000 tests in about 59 hours across 120 cores. Everything was great! I then got about 40 pages into the write-up of the results, theory, and proofs and found an error. The error was caused by floating-point 'skewing' in the calculations, and now I need to solve that. Manual tests using Rmpfr at 160 bits show the anticipated results (at quad-precision, 113-bit accuracy).

Needless to say, this is going to really slow things down, but rather than jump ship from R to a C++/Boost/MPFR solution, I'd like to be able to stick with R, as that is what all the research has been done on for the last 18 months or so. Simple and straightforward, my two use cases are:

1) Pass input as 'numeric', convert to 160-bit multiprecision, do the calculations using multiprecision lgamma values, and return an mpfr array object.

2) Pass input as mpfr objects (a list or array), do the calculations, and return the modified values.

Any examples or direction on how (or whether) I can get a fast Rcpp/Rmpfr bridge would go a long way...

-- Sincerely, Thell
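To make the first use case concrete, here is a minimal R-level sketch of the kind of thing I mean, using only Rmpfr (no C++ bridge yet); the function name dhyper_mpfr and the lgamma-based form are just illustrative, and this is the slow path I'd like to speed up:

```r
## Sketch only: hypergeometric density via 160-bit lgamma terms with Rmpfr.
## Assumes the Rmpfr package is installed; dhyper_mpfr is an illustrative name.
library(Rmpfr)

dhyper_mpfr <- function(x, m, n, k, precBits = 160) {
  ## Use case 1: 'numeric' in, 160-bit mpfr out.
  x <- mpfr(x, precBits); m <- mpfr(m, precBits)
  n <- mpfr(n, precBits); k <- mpfr(k, precBits)
  ## log-binomial coefficient from multiprecision lgamma values
  lch <- function(a, b) lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)
  exp(lch(m, x) + lch(n, k - x) - lch(m + n, k))  # returns an mpfr vector
}

## Use case 2 falls out of the same code path: if x, m, n, k are already
## mpfr objects, the arithmetic and lgamma dispatch stay in multiprecision.
p <- dhyper_mpfr(0:5, m = 10, n = 7, k = 8)
```

This stays entirely at the R level, which is exactly why it's slow in bulk; the question is whether the inner lgamma/arithmetic loop can be pushed down through Rcpp while keeping the mpfr representation.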
_______________________________________________
Rcpp-devel mailing list
Rcpp-devel@lists.r-forge.r-project.org
https://lists.r-forge.r-project.org/cgi-bin/mailman/listinfo/rcpp-devel