[Rd] R devel: install.packages(..., type="both") not supported on Windows

2016-05-14 Thread Henrik Bengtsson
Is the following intentional or something that has been overlooked?

[HB-X201]{hb}: R --vanilla

R Under development (unstable) (2016-05-13 r70616) -- "Unsuffered Consequences"
Copyright (C) 2016 The R Foundation for Statistical Computing
Platform: x86_64-w64-mingw32/x64 (64-bit)

[...]

## Note that "source" is the built-in default
> getOption("pkgType")
[1] "source"

## Trying with 'both'
> install.packages("MASS", type="both")
Installing package into 'C:/Users/hb/R/win-library/3.4'
(as 'lib' is unspecified)
Error in install.packages("MASS") :
  type == "both" can only be used on Windows or a CRAN build for Mac OS X

## But 'win.binary' works
> install.packages("MASS", type="win.binary")
Installing package into 'C:/Users/hb/R/win-library/3.4'
(as 'lib' is unspecified)
trying URL 'https://cran.r-project.org/bin/windows/contrib/3.4/MASS_7.3-45.zip'
Content type 'application/zip' length 1088567 bytes (1.0 MB)
downloaded 1.0 MB

package 'MASS' successfully unpacked and MD5 sums checked


> sessionInfo()
R Under development (unstable) (2016-05-13 r70616)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1

locale:
[1] LC_COLLATE=English_United States.1252
[2] LC_CTYPE=English_United States.1252
[3] LC_MONETARY=English_United States.1252
[4] LC_NUMERIC=C
[5] LC_TIME=English_United States.1252

attached base packages:
[1] stats graphics  grDevices utils datasets  methods   base
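
For the record, explicitly asking for the Windows binary type works around this, either per call as above or by making it the session default (the 'type' argument falls back to getOption("pkgType")):

## sketch of the workaround: make install.packages() default to Windows
## binaries for the session, e.g. from a startup file such as ~/.Rprofile
options(pkgType = "win.binary")
install.packages("MASS")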

/Henrik


[Rd] R external pointer and GPU memory leak problem

2016-05-14 Thread Yuan Li
My question is based on a project I have partially completed, but there is still 
something I'm not clear about.

My goal is to create an R package containing GPU functions (some from the NVIDIA 
CUDA libraries, some of my own self-defined CUDA functions).

My design is quite different from the existing GPU packages for R: I want to create 
an R object (an external pointer) that points to a GPU address, and run my GPU 
functions directly on the GPU side without transferring data back and forth between 
CPU and GPU.

I used an R external pointer to implement this design, but I found I have a memory 
leak on the GPU side. I can work around it by running gc() explicitly on the R side, 
but I'm wondering whether I missed something in my C code. Would you please point 
out my mistake? This is my first time writing an R package, and I could easily have 
made some terrible mistakes.
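
For reference, the gc() workaround looks roughly like this on the R side (assuming my package's shared library is loaded; the actual wrapper around .Call() may differ):

x <- .Call("createGPU", rnorm(1e6), 1e6L)  ## allocate and fill a vector on the GPU
## ... run my GPU functions on x ...
rm(x)  ## drop the last R reference to the external pointer
gc()   ## force a collection so the finalizer runs and the GPU memory is freed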

Actually, I have written a bunch of GPU functions which can run on the GPU side with 
the object created by the create function below, but the memory leak kills me when 
I need to deal with a huge dataset.

Here is my create function: I create a GPU pointer x and allocate GPU memory for it, 
then make an R external pointer ext based on x, and copy the CPU vector 'input' into 
the GPU memory that ext points to:


/* headers needed for the code below; cudacall()/cublascall() are
   error-checking wrappers defined elsewhere in my package */
#include <R.h>
#include <Rinternals.h>
#include <Rdefines.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

static void _finalizer(SEXP ext);   /* finalizer, defined further below */

/*
 * Create a vector on the GPU by transferring an R vector to the device.
 * Input is an R numeric vector and its length; output is an R external
 * pointer that points to the GPU (device) vector.
 */
SEXP createGPU(SEXP input, SEXP n)
{
    int *lenth = INTEGER(n);
    PROTECT(input = AS_NUMERIC(input));
    double *temp = REAL(input);

    /* allocate device memory; this is the allocation that ends up leaking */
    double *x;
    cudacall(cudaMalloc((void **)&x, *lenth * sizeof(double)));

    /* wrap the device pointer in an external pointer and register the
       finalizer so the device memory is freed when the object is collected */
    SEXP ext = PROTECT(R_MakeExternalPtr(x, R_NilValue, R_NilValue));
    R_RegisterCFinalizerEx(ext, _finalizer, TRUE);

    /* copy the CPU vector to the GPU */
    cublascall(cublasSetVector(*lenth, sizeof(double), temp, 1,
                               R_ExternalPtrAddr(ext), 1));

    UNPROTECT(2);
    return ext;
}



Here is the finalizer for my create function:

/*
 * Finalizer for the R external pointer: frees the GPU memory when the
 * pointer is no longer in use.
 */
static void _finalizer(SEXP ext)
{
    if (!R_ExternalPtrAddr(ext))
        return;
    double *ptr = (double *) R_ExternalPtrAddr(ext);
    Rprintf("finalizer invoked once\n");
    cudacall(cudaFree(ptr));   /* release the device memory */
    R_ClearExternalPtr(ext);
}


My create function runs smoothly, but if I call it too many times, my GPU device 
runs out of memory, which clearly implies a memory leak. Can anybody help? Thanks 
a lot in advance!
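
To make the symptom concrete, a loop along these lines (the sizes are only illustrative) eventually fails with a device out-of-memory error unless gc() is called inside the loop:

for (i in 1:10000) {
    ## each iteration allocates about 8 MB on the device; the old external
    ## pointers become garbage but their finalizers rarely get a chance to run
    y <- .Call("createGPU", rnorm(1e6), 1e6L)
}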
 