We have seen problems with a number of packages which use R/Rscript to
run R code in configure scripts or makefiles.
(a) You must give a full path: there need be no version of R on the
path, and if there is one it might not be the version/build of R under
which the package installation is being done. So the safe form is a
full path, e.g. "$(R_HOME)/bin/R".
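A small sketch of the same point from the R side (the variable names
are illustrative): a helper script run at install time can ask the
running R for its own location via R.home(), instead of trusting
whatever 'R' happens to be first on the PATH.

# locate the binaries of the R that is actually running this script
rbin <- file.path(R.home("bin"), "R")
rscript <- file.path(R.home("bin"), "Rscript")
cat("invoke", shQuote(rbin), "rather than a bare 'R'\n")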
On 20 May 2010 11:56, Daniel Murphy wrote:
>> Much better to implement directly what this is trying to do: i.e. to
>> have a "halfmonth" time step. This is just the union of two "monthly"
>> sequences, one on the 1st of each month and another on the 15th of
>> each month.
>
> For some applications that might be true. But not for others. For a
> month wi
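For concreteness, a minimal sketch of the quoted suggestion (the dates
are illustrative): the "halfmonth" grid is just the sorted union of two
monthly sequences.

m1 <- seq(as.Date("2010-01-01"), as.Date("2010-12-01"), by = "month")
m15 <- seq(as.Date("2010-01-15"), as.Date("2010-12-15"), by = "month")
halfmonth <- sort(unique(c(m1, m15)))
head(halfmonth)
# "2010-01-01" "2010-01-15" "2010-02-01" "2010-02-15" ...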
How about some "computing on the language", something like this:

exprs <- parse(file = "SCRIPT.R")
invalids <- c(".Internal", ".Primitive")
if (any(invalids %in% all.names(exprs)))
    stop("sandbox check failed")

I believe this would prevent evaluating any direct calls to '.Primitive'
and '.Internal'.
I think you'll find it's a bit more complicated than that.
Firstly, R --sandbox is pretty crippled: as far as I can tell it can't
load packages, because package loading uses gzfile(). That would include
the 'stats' package. And if you can load packages, you would need to
sanitize all those
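One concrete illustration of the gap (a sketch): all.names() only sees
literal symbols, so a script that reaches the same function indirectly
passes the scan above untouched.

ex <- parse(text = 'fn <- get(".Internal")')
any(c(".Internal", ".Primitive") %in% all.names(ex))  # FALSE
# all.names(ex) is c("<-", "fn", "get"): ".Internal" appears only as a
# character constant, not as a name, so the static check never sees it.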
Dear Abhijit,
If you think that table.CAPM is the culprit, you could run the call to
that function in R on both platforms under Rprof to check which part of
the function is producing the bottleneck.
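For example, a minimal sketch of that profiling run, assuming table.CAPM
from PerformanceAnalytics and two return series Ra and Rb already in
scope:

Rprof("capm.out")
table.CAPM(Ra, Rb)        # the call suspected of being slow
Rprof(NULL)
summaryRprof("capm.out")  # compare the hot spots across the platforms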
Best regards,
Carlos J. Gil Bellosta
http://www.datanalytics.com
2010/5/19 Abhijit Bera :
Here is an updated benchmark:

Linux
Time taken by DB: 0:00:00.226888
Time taken by R: 0:00:05.536973
Time taken for vector conversions: 0:00:00.001799
Total time taken for return calculation: 0:00:00.090062
Total time taken for making Tagged list and Data Frame: 0:00:00.015424
Total time taken for mak
Update: it appears that the time isn't really going into the data
conversion. The bulk of it is in the CAPM calculation. :( Does anyone
know why the CAPM calculation would be faster on Windows?
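One way to narrow it down (a sketch with synthetic data, since the real
series come from the DB): time the R-level call alone on each platform,
so that rpy and pyodbc overhead are excluded from the comparison.

library(xts)
library(PerformanceAnalytics)
set.seed(1)
dates <- Sys.Date() - 250:1
Ra <- xts(rnorm(250, 0, 0.01), dates)  # stand-in asset returns
Rb <- xts(rnorm(250, 0, 0.01), dates)  # stand-in benchmark returns
system.time(table.CAPM(Ra, Rb))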
On Wed, May 19, 2010 at 5:51 PM, Abhijit Bera wrote:
Hi

This is my function. It serves an HTML page after the calculations. I'm
connecting to an MSSQL DB using pyodbc.
def CAPM(self, client):
    r = self.r    # handle for the R calls (the "Time taken by R" above)
    cds = "1590"  # series id for the DB query, presumably the asset
    bm = "20559"  # presumably the benchmark
    d1 = []       # accumulators for the rows fetched from the DB
    v1 = []
    v2 = []
    print "Parsing GET Params"
    param