Re: [R-pkg-devel] handling documentation build tools

2024-05-21 Thread Simon Urbanek
Ross,

It's entirely up to you how you do this -- what you describe is something that 
has to happen before you run R CMD build, so it's not reflected in the package 
that you submit (and this has nothing to do with CRAN or R). Nothing here is 
automatic, as it requires you to do something in any case, and which tools you 
use before you call R CMD build comes down to personal preference. You have 
already described all that is needed: the simplest solution is to have either a 
Makefile or a script in your repository that does whatever you need (a Makefile 
gives you the dependency-driven update for free), and you put those in 
.Rbuildignore so they will not be part of the package source you submit as the 
output of R CMD build. It is customary to keep the sources (here your .lyx 
file) in the distributed package; "tools" is one place you could do that 
safely, but it is not well-defined.
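For illustration, a sketch of such a setup (the file names, the tools/ location, and the lyx invocation are assumptions, not taken from the actual package):

```make
# Makefile -- kept in the repository only; add the line  ^Makefile$
# to .Rbuildignore so it never ends up in the tarball produced by R CMD build.
# The .lyx source itself can stay in the package, e.g. under tools/.

all: inst/doc/msep.pdf

# LyX writes the PDF next to the source, so move it into place afterwards
inst/doc/msep.pdf: tools/msep.lyx
	lyx --export pdf2 tools/msep.lyx
	mv tools/msep.pdf inst/doc/msep.pdf
```

Running make before R CMD build then regenerates the PDF only when the .lyx source has changed.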

There is a slight variation on the above: you can (ab)use the "cleanup" script 
so that the process is actually run by R CMD build itself. E.g., the cleanup 
script could simply check whether your Makefile is present (it would exist only 
in the repository, not in the tarball) and run make in that case.
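A sketch of such a cleanup script (the `docs` make target name is an assumption; R CMD build runs the executable `cleanup` file in the package directory):

```shell
#!/bin/sh
# cleanup -- executed by R CMD build; since the Makefile is in .Rbuildignore
# it only exists in the repository checkout, never in the built tarball,
# so when building from the tarball this script is a no-op.
if [ -f ./Makefile ]; then
    make docs || exit 1
fi
```

Remember to make the script executable (chmod +x cleanup) so R CMD build will run it.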

That said, as Dirk mentioned, since your output is documentation, the 
semantically correct way would be to treat it as a vignette, which is 
well-documented, specifically provides for what you described, and makes it 
clear that it is documentation and not just some random script.
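For reference, the vignette route needs little more than this (a sketch assuming an R Markdown vignette built via knitr; the names are illustrative):

```
# DESCRIPTION (excerpt)
Suggests: knitr, rmarkdown
VignetteBuilder: knitr

# vignettes/msep.Rmd -- YAML header
---
title: "The msep document"
output: rmarkdown::html_vignette
vignette: >
  %\VignetteIndexEntry{The msep document}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```

R CMD build then rebuilds the vignette automatically, and the source ships in vignettes/ where CRAN expects it.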

Cheers,
Simon


> On 22/05/2024, at 9:01 AM, Boylan, Ross via R-package-devel 
>  wrote:
> 
> I have some documentation that requires external tools to build.  I would 
> like to build it automatically, without requiring either users or repositories 
> to have the tools.  What's the best way to accomplish that?
> 
> Specifically one document is written using the LyX word processor, so the 
> "source" is msep.lyx.  To convert that into something useful, a pdf, one must 
> run lyx, which in turn requires latex.
> 
> I'm looking for recommendations about how to approach this.
> 
> A purely manual approach would be to place msep.lyx in .Rbuildignore and 
> manually regenerate the pdf if I edit the file.
> 
> This has two drawbacks: first, it does not guarantee that the pdf is consistent 
> with the current source; second, if I want to generate plain text or html 
> versions of the document as well, the manual approach gets more tedious and 
> error-prone.
> 
> Currently I use pkgbuild::build's option to run bootstrap.R to perform the 
> lyx->pdf conversion automatically with each build.  The script is pretty 
> specific to my build environment, and I think that if I uploaded the package, 
> CRAN would end up trying to run bootstrap.R, which would fail.
> 
> Maybe the script should go in tools/? But then it won't run automatically.  
> Maybe the script in tools goes in .Rbuildignore and the bootstrap.R script 
> simply checks if the tools/ script exists and runs it if present.
> 
> Suggestions?
> 
> I'm also unsure what directory msep.lyx should go in, though that's a 
> secondary issue.  Currently it's in inst/doc, which led to problems with the 
> build system sometimes wiping it out.  I've solved that problem.
> 
> Thanks.
> Ross
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Trouble with dependencies on phyloseq and microViz

2024-05-12 Thread Simon Urbanek
Sharon,

I have upgraded Bioc on the R-oldrel build machines and that seems to have 
taken care of the problem as far as I can tell:
https://cran.r-project.org/web/checks/check_results_HybridMicrobiomes.html

If you have reports like that, it's a better idea to ask us first, and always 
supply the name of the package; otherwise the thread can quickly degrade into 
wild guesswork and misleading or incorrect answers from unrelated parties.

Cheers,
Simon


> On 7/05/2024, at 6:56 PM, Simon Urbanek  wrote:
> 
> Ivan,
> 
> sorry if it wasn't clear, but this thread was about strong dependencies -- 
> Sharon noted that phyloseq must remain a strong dependency and asked how to 
> deal with that (see all the parts you cut from this thread). Now that I 
> finally have the package name I can check the details - apparently this only 
> affects R-oldrel, so presumably a Bioc upgrade may fix it; I'll have a look 
> tomorrow.
> 
> Cheers,
> Simon
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Fast Matrix Serialization in R?

2024-05-09 Thread Simon Urbanek



> On 10/05/2024, at 12:31 PM, Henrik Bengtsson  
> wrote:
> 
> On Thu, May 9, 2024 at 3:46 PM Simon Urbanek
>  wrote:
>> 
>> FWIW serialize() is binary so there is no conversion to text:
>> 
>>> serialize(1:10+0L, NULL)
>> [1] 58 0a 00 00 00 03 00 04 02 00 00 03 05 00 00 00 00 05 55 54 46 2d 38 00 
>> 00
>> [26] 00 0d 00 00 00 0a 00 00 00 01 00 00 00 02 00 00 00 03 00 00 00 04 00 00 
>> 00
>> [51] 05 00 00 00 06 00 00 00 07 00 00 00 08 00 00 00 09 00 00 00 0a
>> 
>> It uses the native representation so it is actually not as bad as it sounds.
>> 
>> One aspect I forgot to mention in the earlier thread is that if you don't 
>> need to exchange the serialized objects between machines with different 
>> endianness then avoiding the swap makes it faster. E.g, on Intel (which is 
>> little-endian and thus needs swapping):
>> 
>>> a=1:1e8/2
>>> system.time(serialize(a, NULL))
>>   user  system elapsed
>>  2.123   0.468   2.661
>>> system.time(serialize(a, NULL, xdr=FALSE))
>>   user  system elapsed
>>  0.393   0.348   0.742
> 
> Would it be worth looking into making xdr=FALSE the default? From
> help("serialize"):
> 
> xdr: a logical: if a binary representation is used, should a
> big-endian one (XDR) be used?
> ...
> As almost all systems in current use are little-endian, xdr = FALSE
> can be used to avoid byte-shuffling at both ends when transferring
> data from one little-endian machine to another (or between processes
> on the same machine). Depending on the system, this can speed up
> serialization and unserialization by a factor of up to 3x.
> 
> This seems like a low-hanging fruit that could spare the world from
> wasting unnecessary CPU cycles.
> 


I thought about it before, but the main problem here is (as often) 
compatibility. The current default guarantees that the output can be safely 
read on any machine, while xdr=FALSE only works between machines with the same 
endianness and will fail horribly otherwise. R cannot really know whether the 
user intends to transport the serialized data to another machine, so it cannot 
assume it is safe unless the user indicates so. Therefore all we can safely do 
is tell users that they should use it where appropriate -- and the 
documentation explicitly says so:

 As almost all systems in current use are little-endian, ‘xdr =
 FALSE’ can be used to avoid byte-shuffling at both ends when
 transferring data from one little-endian machine to another (or
 between processes on the same machine).  Depending on the system,
 this can speed up serialization and unserialization by a factor of
 up to 3x.

Unfortunately, no one bothers to read the documentation, so it is not as 
effective as changing the default, but for the reasons above the default is 
just not as easy to change. I do acknowledge that the risk is relatively low, 
since big-endian machines are becoming rare, but it's not zero.

That said, what worries me a bit more is that some derived functions such as 
saveRDS() don't expose the xdr option, so you actually have no way to use the 
native binary format there. I understand the logic - see above - but, as you 
said, that makes them unnecessarily slow. I wonder if it may be worth doing 
something a bit smarter and officially tagging a "reverse XDR" format instead - 
that way it would be well-defined and could be made the default. Interestingly, 
the de-serialization part actually doesn't care, so you can use readRDS() on 
the native binary serialization even in current R versions; just adding the 
option would still be backwards-compatible. Definitely something to think 
about...
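A small sketch of that last point: writing the native (xdr = FALSE) serialization by hand and reading it back with readRDS(), which detects the byte order from the stream header (the speed benefit assumes both ends share endianness; readability is unaffected):

```r
x <- list(a = 1:5, b = c(0.5, 2.25))

f   <- tempfile(fileext = ".rds")
con <- file(f, "wb")
serialize(x, con, xdr = FALSE)   # native byte order instead of big-endian XDR
close(con)

y <- readRDS(f)                  # readRDS() handles the uncompressed stream
identical(x, y)
```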

Cheers,
Simon


> 
> 
>> 
>> Cheers,
>> Simon
>> 
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Fast Matrix Serialization in R?

2024-05-09 Thread Simon Urbanek



> On 9/05/2024, at 11:58 PM, Vladimir Dergachev  wrote:
> 
> 
> 
> On Thu, 9 May 2024, Sameh Abdulah wrote:
> 
>> Hi,
>> 
>> I need to serialize and save a 20K x 20K matrix as a binary file. This 
>> process is significantly slower in R compared to Python (4X slower).
>> 
>> I'm not sure about the best approach to optimize the below code. Is it 
>> possible to parallelize the serialization function to enhance performance?
> 
> Parallelization should not help - a single CPU thread should be able to 
> saturate your disk or your network, assuming you have a typical computer.
> 
> The problem is possibly the conversion to text, writing it as binary should 
> be much faster.
> 


FWIW serialize() is binary so there is no conversion to text:

> serialize(1:10+0L, NULL)
 [1] 58 0a 00 00 00 03 00 04 02 00 00 03 05 00 00 00 00 05 55 54 46 2d 38 00 00
[26] 00 0d 00 00 00 0a 00 00 00 01 00 00 00 02 00 00 00 03 00 00 00 04 00 00 00
[51] 05 00 00 00 06 00 00 00 07 00 00 00 08 00 00 00 09 00 00 00 0a

It uses the native representation so it is actually not as bad as it sounds.

One aspect I forgot to mention in the earlier thread is that if you don't need 
to exchange the serialized objects between machines with different endianness, 
then avoiding the swap makes it faster. E.g., on Intel (which is little-endian 
and thus needs swapping):

> a=1:1e8/2
> system.time(serialize(a, NULL))
   user  system elapsed 
  2.123   0.468   2.661 
> system.time(serialize(a, NULL, xdr=FALSE))
   user  system elapsed 
  0.393   0.348   0.742 

Cheers,
Simon

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Fast Matrix Serialization in R?

2024-05-08 Thread Simon Urbanek
Sameh,

if it's a matrix, that's easy, as you can write it directly, which is the 
fastest possible way without compression - e.g., a quick proof of concept:

n <- 2^2
A <- matrix(runif(n), ncol = sqrt(n))

## write (dim + payload)
con <- file(description = "matrix_file", open = "wb")
system.time({
writeBin(d <- dim(A), con)
dim(A)=NULL
writeBin(A, con)
dim(A)=d
})
close(con)

## read
con <- file(description = "matrix_file", open = "rb")
system.time({
d <- readBin(con, 1L, 2)
A1 <- readBin(con, 1, d[1] * d[2])
dim(A1) <- d
})
close(con)
identical(A, A1)

   user  system elapsed 
  0.931   2.713   3.644 
   user  system elapsed 
  0.089   1.360   1.451 
[1] TRUE

So it's really just limited by the speed of your disk; parallelization won't 
help here.

Note that in general you get faster read times by using compression, as most 
data is reasonably compressible, so that is where parallelization can be useful. 
There are plenty of packages with more tricks, like mmapping the files etc., but 
the above is just base R.
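For illustration, the same dim-plus-payload scheme pushed through a gzfile() connection - a sketch that trades CPU for (often faster) compressed I/O; the small matrix here is just to keep the example quick:

```r
A <- matrix(as.double(1:1e4), ncol = 100)

f   <- tempfile(fileext = ".gz")
con <- gzfile(f, "wb")
writeBin(dim(A), con)                       # two integers: nrow, ncol
writeBin(as.vector(A), con)                 # double payload
close(con)

con <- gzfile(f, "rb")
d   <- readBin(con, integer(), 2)
A1  <- matrix(readBin(con, double(), d[1] * d[2]), d[1], d[2])
close(con)
identical(A, A1)
```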

Cheers,
Simon



> On 9/05/2024, at 3:20 PM, Sameh Abdulah  wrote:
> 
> Hi,
> 
> I need to serialize and save a 20K x 20K matrix as a binary file. This 
> process is significantly slower in R compared to Python (4X slower).
> 
> I'm not sure about the best approach to optimize the below code. Is it 
> possible to parallelize the serialization function to enhance performance?
> 
> 
>  n <- 2^2
>  cat("Generating matrices ... ")
>  INI.TIME <- proc.time()
>  A <- matrix(runif(n), ncol = m)
>  END_GEN.TIME <- proc.time()
>  arg_ser <- serialize(object = A, connection = NULL)
> 
>  END_SER.TIME <- proc.time()
>  con <- file(description = "matrix_file", open = "wb")
>  writeBin(object = arg_ser, con = con)
>  close(con)
>  END_WRITE.TIME <- proc.time()
>  con <- file(description = "matrix_file", open = "rb")
>  par_raw <- readBin(con, what = raw(), n = file.info("matrix_file")$size)
>  END_READ.TIME <- proc.time()
>  B <- unserialize(connection = par_raw)
>  close(con)
>  END_DES.TIME <- proc.time()
>  TIME <- END_GEN.TIME - INI.TIME
>  cat("Generation time", TIME[3], " seconds.")
> 
>  TIME <- END_SER.TIME - END_GEN.TIME
>  cat("Serialization time", TIME[3], " seconds.")
> 
>  TIME <- END_WRITE.TIME - END_SER.TIME
>  cat("Writing time", TIME[3], " seconds.")
> 
>  TIME <- END_READ.TIME - END_WRITE.TIME
>  cat("Read time", TIME[3], " seconds.")
> 
>  TIME <- END_DES.TIME - END_READ.TIME
>  cat("Deserialize time", TIME[3], " seconds.")
> 
> 
> 
> 
> Best,
> --Sameh
> 
> 
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Trouble with dependencies on phyloseq and microViz

2024-05-07 Thread Simon Urbanek
Ivan,

sorry if it wasn't clear, but this thread was about strong dependencies -- 
Sharon noted that phyloseq must remain a strong dependency and asked how to deal 
with that (see all the parts you cut from this thread). Now that I finally have 
the package name I can check the details - apparently this only affects 
R-oldrel, so presumably a Bioc upgrade may fix it; I'll have a look tomorrow.

Cheers,
Simon



> On May 7, 2024, at 6:15 PM, Ivan Krylov  wrote:
> 
> On Tue, 7 May 2024 10:07:59 +1200
> Simon Urbanek  wrote:
> 
>> That doesn't work - additional repositories are not allowed on CRAN
>> other than in very exceptional cases, because it means the package
>> cannot be installed by users making it somewhat pointless.
> 
> I suppose that with(tools::CRAN_package_db(),
> sum(!is.na(Additional_repositories)) / length(Additional_repositories))
> = 0.7% does make it very rare. But not even for a weak dependency? Is
> it for data packages only, as seems to be the focus of
> [10.32614/RJ-2017-026]? The current wording of the CRAN policy makes it
> sound like Additional_repositories is preferred to explaining the
> non-mainstream weak dependencies in Description.
> 
> So what should be done about the non-Bioconductor weak dependency
> microViz?
> 
>> As for the OP, can you post the name of the package and/or the link
>> to the errors so I can have a look?
> 
> Sharon has since got rid of the WARNING and now only has NOTEs due to
> microViz and a URL to its repo in the Description:
> https://win-builder.r-project.org/incoming_pretest/HybridMicrobiomes_0.1.2_20240504_185748/Debian/00check.log
> 
> If Additional_repositories: is the correct way to specify a
> non-mainstream weak dependency for a CRAN package, the URL must be
> specified as https://david-barnett.r-universe.dev/src/contrib, not just
> https://david-barnett.r-universe.dev/. I am sorry for not getting it
> right the first time.
> 
> -- 
> Best regards,
> Ivan
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Trouble with dependencies on phyloseq and microViz

2024-05-06 Thread Simon Urbanek



> On 5/05/2024, at 4:32 AM, Ivan Krylov via R-package-devel 
>  wrote:
> 
> On Sat, 4 May 2024 15:53:25 +
> Sharon Bewick  writes:
> 
>> I have a dependency on phyloseq, which is available through GitHub
>> but not on the CRAN site. I have a similar problem with microViz,
>> however I’ve managed to make it suggested, rather than required.
>> There is no way to get around the phyloseq requirement. How do I fix
>> this problem so that I can upload my package to the CRAN website?
> 
> Did a human reviewer tell you to get rid of the dependencies? There are 
> at least 444 packages on CRAN with strong dependencies on Bioconductor 
> packages, so your phyloseq dependency should work. In fact, 14 of them 
> depend on phyloseq.
> 
> What you need is an Additional_repositories field in your DESCRIPTION
> specifying the source package repository where microViz could be
> installed from. I think that
> 
> Additional_repositories: https://david-barnett.r-universe.dev
> 
> ...should work.
> 

That doesn't work - additional repositories are not allowed on CRAN other than 
in very exceptional cases, because they mean the package cannot be installed by 
users, making it somewhat pointless. Bioconductor doesn't need to be flagged in 
Additional_repositories, but not all of its packages are available - only those 
that do not depend on the data repository.

As for the OP, can you post the name of the package and/or the link to the 
errors so I can have a look?

Cheers,
Simon



> Besides that, you'll need to increment the version and list the *.Rproj
> file in .Rbuildignore:
> https://win-builder.r-project.org/incoming_pretest/HybridMicrobiomes_0.1.1_20240504_173331/Debian/00check.log
> 
> -- 
> Best regards,
> Ivan
> 
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Error handling in C code

2024-05-05 Thread Simon Urbanek
Jarrod,

could you point us to the code? There is not much to go by based on your email. 
One thing in general: it's always safer not to re-map function names - 
especially since "error" can be defined in many other headers - so it's better 
to use Rf_error() instead, to avoid confusion with 3rd-party headers that may 
(re-)define the "error" macro (depending on the order in which you include 
them).
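A sketch of the suggestion (compiled against R's headers, e.g. via R CMD SHLIB; the function name is illustrative):

```c
/* Defining R_NO_REMAP before including the R headers removes the short
 * aliases such as error()/warning(); only the Rf_-prefixed names remain,
 * so they cannot collide with an "error" symbol from a third-party header. */
#define R_NO_REMAP
#include <R.h>
#include <Rinternals.h>

SEXP check_positive(SEXP x)
{
    if (Rf_asReal(x) <= 0)
        Rf_error("'x' must be a positive number");
    return x;
}
```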

Cheers,
Simon


> On 4/05/2024, at 3:17 AM, Jarrod Hadfield  wrote:
> 
> Hi,
> 
> I have an R library with C code in it. It has failed the CRAN checks for 
> Debian.  The problem is that the error function is undefined. Section 6.2 
> of Writing R Extensions (see below) suggests error handling can be done 
> with error(), and that the appropriate header file is included by R.h, but 
> this seems not to be the case?
> 
> Any help would be appreciated!
> 
> Thanks,
> 
> Jarrod
> 
> 6.2 Error signaling
> 
> The basic error signaling routines are the equivalents of stop and warning in 
> R code, and use the same interface.
> 
> void error(const char * format, ...);
> void warning(const char * format, ...);
> void errorcall(SEXP call, const char * format, ...);
> void warningcall(SEXP call, const char * format, ...);
> void warningcall_immediate(SEXP call, const char * format, ...);
> 
> These have the same call sequences as calls to printf, but in the simplest 
> case can be called with a single character string argument giving the error 
> message. (Don't do this if the string contains '%' or might otherwise be 
> interpreted as a format.)
> 
> These are defined in header R_ext/Error.h included by R.h.
> 
> The University of Edinburgh is a charitable body, registered in Scotland, 
> with registration number SC005336. Is e buidheann carthannais a th' ann an 
> Oilthigh Dhùn Èideann, clàraichte an Alba, àireamh clàraidh SC005336.
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] re-exporting plot method?

2024-04-30 Thread Simon Urbanek
Kevin,

welcome to the S4 world! ;) There is really no good solution, since S4 only 
works at all if you attach the package, as it relies on replacing the base S3 
generic with its own - so the question remains what your options are for doing 
that.

The most obvious is to simply add Depends: Rgraphviz, which makes sure that any 
required generics are attached, so your package doesn't need to worry. This is 
the most reliable in a way, as you are not limiting the functionality to methods 
you know about. The side-effect, though (besides exposing functions the user 
may not care about), is that such a package cannot be on CRAN, since Rgraphviz 
is not on CRAN (that said, since you mentioned you are already importing it, 
you seem not to be worried about that).

The next option is to simply ignore Rgraphviz and instead add 
setGeneric("plot") to your package, add methods to Depends, and add 
importFrom(methods, setGeneric) + exportMethods(plot) to the NAMESPACE. This 
allows you to forget about any dependencies - you are just creating the S4 
generic from base::plot to make the dispatch work. This is the most lightweight 
solution, as you only cherry-pick the methods you need and there are no 
dependencies other than "methods". However, it is limited to just the functions 
you care about.
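A minimal sketch of that second option (the class name, slot, and the as.graphNEL() converter are illustrative assumptions; the converter is not defined here):

```r
library(methods)

# Illustrative S4 class wrapping an igraph object
setClass("myGraph", slots = c(ig = "ANY"))

# Promote base::plot to an S4 generic owned by this package
setGeneric("plot")

# Method dispatched for myGraph objects; as.graphNEL() stands in for the
# package's own converter and is only called when the method runs
setMethod("plot", "myGraph", function(x, y, ...) {
    plot(as.graphNEL(x@ig), ...)
})
```

In a real package the setGeneric()/setMethod() calls live in the package code, and the NAMESPACE carries importFrom(methods, setGeneric, setMethod) plus exportMethods(plot).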

Finally, you could re-export the S4 plot generic from Rgraphviz, but I'd say 
that is the least sensible option, since it doesn't have any benefit over doing 
it yourself and only adds a hard dependency for no good reason. Also copying 
functions from another package opens up a can of worms with versions etc. - 
even if the risk is likely minimal.

Just for completeness - a really sneaky way would be to export an S3 plot 
method from your package - it would be only called in the case where the plot 
generic has not been replaced yet, so you could "fix" things on the fly by 
calling the generic from Rgraphviz, but that sounds a little hacky even for my 
taste ;).

Cheers,
Simon



> On 1/05/2024, at 6:03 AM, Kevin R. Coombes  wrote:
> 
> Hi,
> 
> I am working on a new package that primarily makes use of "igraph" 
> representations of certain mathematical graphs, in order to apply lots of the 
> comp sci algorithms already implemented in that package. For display 
> purposes, my "igraph" objects already include information that defines node 
> shapes and colors and edge styles and colors. But, I believe that the "graph" 
> - "Rgraphviz" - "graphNEL" set of tools will produce better plots of the 
> graphs.
> 
> So, I wrote my own "as.graphNEL" function that converts the "igraph" objects 
> I want to use into graphNEL (or maybe into "Ragraph") format in order to be 
> able to use their graph layout and rendering routines. This function is smart 
> enough to translate the node and edge attributes from igraph into something 
> that works correctly when plotted using the tools in Rgraphviz. (My 
> DESCRIPTION and NAMESPACE files already import the set of functions from 
> Rgraphviz necessary to make this happen.)
> 
> In principle, I'd like the eventual user to simply do something like
> 
> library("KevinsNewPackage")
> IG <- makeIgraphFromFile(sourcefile)
> GN <- as.graphNEL(IG)
> plot(GN)
> 
> The first three lines work fine, but the "plot" function only works if the 
> user also explicitly includes the line
> 
> library("Rgraphviz")
> 
> I suspect that there is a way with imports and exports in the NAMESPACE to 
> make this work without having to remind the user to load the other package. 
> But (in part because the plot function in Rgraphviz is actually an S4 method, 
> which I don't need to alter in any way), I'm not sure exactly what needs to 
> be imported or exported.
> 
> Helpful suggestions would be greatly appreciated.
> 
> Best,
>   Kevin
> 
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Problem with loading package "devtools" from CRAN.

2024-04-29 Thread Simon Urbanek
Rolf,

what do you mean by "broken"? Since you failed to include any proof or details, 
it's unlikely that anyone can help you, but chances are pretty high that it was 
a problem on your end. I just checked with R 4.4.0 on Ubuntu 22.04 and devtools 
installs and loads just fine, so it is certainly not broken on CRAN.

Make sure you don't have packages built for an old R version in your local 
libraries - that is the most common mistake - always remove them when upgrading 
R and re-install the ones you still need. You can check the locations of your 
libraries with .libPaths() in R. Sometimes update.packages(checkBuilt=TRUE) can 
do the trick as well, but I prefer clean re-installs for safety, as it also 
helps you clean up old cruft that is no longer needed.
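As a sketch, one way to spot such leftovers (the "Built" field records the R version a package was installed under):

```r
# List installed packages whose "Built" R version (major.minor) is older than
# the running R -- common culprits for load failures after an upgrade.
mm    <- function(v) sub("^([0-9]+[.][0-9]+).*", "\\1", as.character(v))
ip    <- installed.packages()               # includes a "Built" column
stale <- ip[package_version(mm(ip[, "Built"])) < package_version(mm(getRversion())),
            c("Package", "LibPath", "Built"), drop = FALSE]
stale
```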

Cheers,
Simon



> On Apr 29, 2024, at 1:19 PM, Rolf Turner  wrote:
> 
> 
> Executive summary:
> 
>> The devtools package on CRAN appears to be broken.
>> Installing devtools from github (using remotes::install_github())
>> seems to give satisfactory results.
> 
> This morning my software up-dater (Ubuntu 22.04.4) prompted me to
> install updated versions of some software, including r-base.  I thereby
> obtained what I believe is the latest version of R (4.4.0 (2024-04-24)).
> 
> Then I could not load the "devtools" package, which is fairly crucial to
> my work.
> 
> A bit of web-searching got me to a post on github by Henrik Bengtsson,
> which referred to the devtools problem.  I therefore decided to try
> installing devtools from github:
> 
>remotes::install_github("r-lib/devtools",lib="/home/rolf/Rlib")
> 
> Some 50-odd packages seemed to require up-dating.  I went for it, and
> after a fairly long wait, while messages about the updating flowed by,
> devtools seemed to get installed.  Now "library(devtools)" runs without
> error, so I am happy with my own situation.  However there seems to be
> a problem with the devtools package on CRAN, which ought to be fixed.
> 
> cheers,
> 
> Rolf Turner
> 
> -- 
> Honorary Research Fellow
> Department of Statistics
> University of Auckland
> Stats. Dep't. (secretaries) phone:
> +64-9-373-7599 ext. 89622
> Home phone: +64-9-480-4619
> 
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [Rd] R 4.4.0 has version of Matrix 1.7-0, but it's not available on CRAN

2024-04-26 Thread Simon Urbanek
Everyone, take a deep breath - there have been many disruptions in the last few 
days - some obvious, others associated with the R release and the BioC 
disruptions on CRAN - so there is no need to panic and start devising 
"solutions" for issues that are temporary. Things are being sorted out, and some 
may take longer than others to settle. Feel free to report problems, but please 
don't speculate or try to "help". That said, you may find that this thread is 
obsolete by now (thanks to Kurt!), so there is no need to respond.

Cheers,
Simon


> On Apr 26, 2024, at 11:15 PM, Gábor Csárdi  wrote:
> 
> On Fri, Apr 26, 2024 at 1:06 PM Ivan Krylov  wrote:
>> 
>> On Fri, 26 Apr 2024 12:32:59 +0200
>> Martin Maechler  wrote:
>> 
>>> Finally, I'd think it definitely would be nice for
>>> install.packages("Matrix") to automatically get the correct
>>> Matrix version from CRAN ... so we (R-core) would be grateful
>>> for a patch to install.packages() to achieve this
>> 
>> Since the binaries offered on CRAN are already of the correct version
>> (1.7-0 for -release and -devel), only source package installation needs
>> to concern itself with the Recommended subdirectory.
>> 
>> Would it be possible to generate the PACKAGES* index files in the
>> 4.4.0/Recommended subdirectory? Then on the R side it would be needed
>> to add a new repo (adjusting chooseCRANmirror() to set it together with
>> repos["CRAN"]) and keep the rest of the machinery intact.
> 
> That's not how this worked in the past AFAIR. Simply, the packages in
> the x.y.z/Recommended directories were included in
> src/contrib/PACKAGES*, metadata, with the correct R version
> dependencies, in the correct order, so that `install.packages()`
> automatically installed the correct version without having to add
> extra repositories or manually search for package files.
> E.g. right now we have
> 
> Package: Matrix
> Version: 1.7-0
> Priority: recommended
> Depends: R (>= 4.5), methods
> Path: 4.5.0/Recommended
> 
> for R 4.5.0. IMHO what we would need for R 4.4.0 is adding
> something like
> 
> Package: Matrix
> Version: 1.7-0
> Priority: recommended
> Depends: R (>= 4.4), methods
> Path: 4.4.0/Recommended
> 
> *after* that.
> 
> G.
> 
> [...]
> 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Is ALTREP "non-API"?

2024-04-24 Thread Simon Urbanek



> On Apr 25, 2024, at 12:55 AM, Hadley Wickham  wrote:
> 
> 
> 
> >>> That is not true at all - the presence of header does not constitute
> >> declaration of something as the R API. There are cases where internal
> >> functions are in the headers for historical or other reasons since the
> >> headers are used both for the internal implementation and packages. That's
> >> why this is in R-exts under "The R API: entry points for C code":
> >>> 
> >>> If I understand your point correctly, does this mean that
> >> Rf_allocVector() is not part of the "official" R API? It does not appear to
> >> be documented in the "The R API: entry points for C code" section.
> >>> 
> >> 
> >> It does, obviously:
> >> https://cran.r-project.org/doc/manuals/R-exts.html#Allocating-storage-1
> > 
> > 
> > I'm just trying to understand the precise definition of the official API
> > here. So it's any function mentioned in R-exts, regardless of which section
> > it appears in?
> > 
> > Does this sentence imply that all functions starting with alloc* are part
> > of the official API?
> > 
> 
> Again, I can only quote the R-exts (few lines below the previous "The R API" 
> quote):
> 
> 
> We can classify the entry points as
> API
> Entry points which are documented in this manual and declared in an installed 
> header file. These can be used in distributed packages and will only be 
> changed after deprecation.
> 
> 
> It says "in this manual" - I don't see anywhere restriction on a particular 
> section of the manual, so I really don't see why you would think that 
> allocation is not part on the API.
> 
> Because you mentioned that section explicitly earlier in the thread. This 
> obviously seems clear to you, but it's not at all clear to me and I suspect 
> many of the wider community. It's frustrating because we are trying our best 
> to do what y'all want us to do, but it feels like we keep getting the rug 
> pulled out from under us with very little notice, and then have to spend a 
> large amount of time figuring out workarounds. That is at least feasible for 
> my team since we have multiple talented folks who are paid full-time to work 
> on R, but it's a huge struggle for most people who are generally maintaining 
> packages in their spare time.
> 


I must be missing something here, since I have no idea what you are talking 
about. The whole point of a stable API is that no rugs are pulled, so in fact 
it's exactly the opposite of what you claim - the notice is at least a year due 
to the release cycle, typically more. Unlike many other languages and 
ecosystems, the R public API does not change very often - and R-core thinks 
hard before making breaking changes, if at all. In fact, I hear more complaints 
that the API does NOT change and that we are too conservative, precisely 
because we want to avoid unnecessary breakage.

I will not comment further here - all I did was point out the relevant text 
from R-exts, which is the canonical source of information. If you have issues, 
find some parts unclear, and want to improve the documentation, I would like to 
invite you to contribute constructively: propose changes, submit patches. The 
R-exts document has been around for decades, so it seems implausible that all 
of a sudden it is being misunderstood the way you portray it, but it is 
certainly a good idea to improve the documentation, so contributions are 
welcome.

Cheers,
Simon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Is ALTREP "non-API"?

2024-04-23 Thread Simon Urbanek



> On Apr 24, 2024, at 12:52 AM, Hadley Wickham  wrote:
> 
>> 
>> 
>> 
 ALTREP is part of the official R api, as illustrated by the presence of
 src/include/R_ext/Altrep.h. Everything declared in the header files in
>> that
 directory is official API AFAIK (and I believe that is more definitive
>> than
 the manuals).
 
>>> 
>>> That is not true at all - the presence of header does not constitute
>> declaration of something as the R API. There are cases where internal
>> functions are in the headers for historical or other reasons since the
>> headers are used both for the internal implementation and packages. That's
>> why this is in R-exts under "The R API: entry points for C code":
>>> 
>>> If I understand your point correctly, does this mean that
>> Rf_allocVector() is not part of the "official" R API? It does not appear to
>> be documented in the "The R API: entry points for C code" section.
>>> 
>> 
>> It does, obviously:
>> https://cran.r-project.org/doc/manuals/R-exts.html#Allocating-storage-1
> 
> 
> I'm just trying to understand the precise definition of the official API
> here. So it's any function mentioned in R-exts, regardless of which section
> it appears in?
> 
> Does this sentence imply that all functions starting with alloc* are part
> of the official API?
> 

Again, I can only quote R-exts (a few lines below the previous "The R API" 
quote):


We can classify the entry points as
API
Entry points which are documented in this manual and declared in an installed 
header file. These can be used in distributed packages and will only be changed 
after deprecation.


It says "in this manual" - I don't see any restriction to a particular 
section of the manual, so I really don't see why you would think that 
allocation is not part of the API.

Cheers,
Simon




>> For many purposes it is sufficient to allocate R objects and manipulate
> those. There are quite a
>> few allocXxx functions defined in Rinternals.h—you may want to explore
> them.
> 
> Generally, things in a file with "internal" in its name are internal.
> 
> Hadley
> 
> -- 
> http://hadley.nz
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Is ALTREP "non-API"?

2024-04-22 Thread Simon Urbanek



> On Apr 23, 2024, at 10:29 AM, Hadley Wickham  wrote:
> 
> 
> 
> On Mon, Apr 22, 2024 at 5:14 PM Simon Urbanek  
> wrote:
> 
> 
> > On Apr 22, 2024, at 7:37 PM, Gabriel Becker  wrote:
> > 
> > Hi Yutani,
> > 
> > ALTREP is part of the official R api, as illustrated by the presence of
> > src/include/R_ext/Altrep.h. Everything declared in the header files in that
> > directory is official API AFAIK (and I believe that is more definitive than
> > the manuals).
> > 
> 
> That is not true at all - the presence of a header does not constitute a 
> declaration of something as part of the R API. There are cases where internal 
> functions are in the headers for historical or other reasons, since the 
> headers are used both for the internal implementation and packages. That's 
> why this is in R-exts under "The R API: entry points for C code":
> 
> If I understand your point correctly, does this mean that Rf_allocVector() is 
> not part of the "official" R API? It does not appear to be documented in the 
> "The R API: entry points for C code" section.
> 


It does, obviously:
https://cran.r-project.org/doc/manuals/R-exts.html#Allocating-storage-1

Cheers,
Simon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Is ALTREP "non-API"?

2024-04-22 Thread Simon Urbanek



> On Apr 22, 2024, at 7:37 PM, Gabriel Becker  wrote:
> 
> Hi Yutani,
> 
> ALTREP is part of the official R api, as illustrated by the presence of
> src/include/R_ext/Altrep.h. Everything declared in the header files in that
> directory is official API AFAIK (and I believe that is more definitive than
> the manuals).
> 

That is not true at all - the presence of a header does not constitute a 
declaration of something as part of the R API. There are cases where internal 
functions are in the headers for historical or other reasons, since the headers 
are used both for the internal implementation and packages. That's why this is 
in R-exts under "The R API: entry points for C code":

> There are a large number of entry points in the R executable/DLL that can be 
> called from C code (and some that can be called from Fortran code). Only 
> those documented here are stable enough that they will only be changed with 
> considerable notice.

And that's why CRAN does not allow unstable ones = those not documented in 
R-exts as part of the API.

Therefore Hiroaki's question is a very good one. ALTREP is declared as 
experimental and is not part of the API, but its development and the stability 
of its API should in some sense improve as more packages use it. Therefore it 
is currently allowed on CRAN in the hope that it will transition to stable at 
some point, but package authors using it must be willing to adapt to changes 
to the API as necessary.

Cheers,
Simon

 

> The documentation of ALTREP has lagged behind its implementation
> unfortunately, which may be partially my fault for not submitting doc
> patches for it against the manuals. Sorry for my contribution to that, I'll
> see if I can loop back around to contributing documentation for ALTREP.
> 
> Best,
> ~G
> 
> On Sun, Apr 21, 2024 at 6:36 PM Hiroaki Yutani  wrote:
> 
>> Thanks, Hernando,
>> 
>> Sorry, "API" is a bit confusing term in this context, but what I want to
>> discuss is the "API" that Writing R Extension defines as quoted in my
>> previous email. It's probably different from an ordinary sense when we
>> casually say "R C API".
>> 
>> You might wonder why I care about such a difference. This is because
>> calling a "non-API" is considered a violation of CRAN repository policy,
>> which means CRAN will kick out the R package. I know many CRAN packages use
>> ALTREP, but just being accepted by CRAN at the moment doesn't mean CRAN
>> will keep accepting it. So, I want to clarify the current status of ALTREP.
>> 
>> Best,
>> Yutani
>> 
>> On Mon, Apr 22, 2024 at 10:17 :
>> 
>>> Hello, I don't believe it is illegal, as ALTREP "implements an
>> abstraction
>>> underneath the C API". And is "compatible with all code which uses the
>>> API".
>>> 
>>> Please see slide deck by Gabriel Becker,  with L Tierney, M Lawrence and
>> T
>>> Kalibera.
>>> 
>>> 
>>> 
>> https://bioconductor.org/help/course-materials/2020/BiocDevelForum/16-ALTREP
>>> .pdf
>>> <
>> https://bioconductor.org/help/course-materials/2020/BiocDevelForum/16-ALTREP.pdf
>>> 
>>> 
>>> ALTREP framework implements an abstraction underneath traditional R C API
>>> - Generalizes whats underneath the API
>>> - Without changing how data are accessed
>>> - Compatible with all C code which uses the API
>>> - Compatible with R internals
>>> 
>>> 
>>> I hope this helps,
>>> Hernando
>>> 
>>> 
>>> -Original Message-
>>> From: R-devel  On Behalf Of Hiroaki
>> Yutani
>>> Sent: Sunday, April 21, 2024 8:48 PM
>>> To: r-devel 
>>> Subject: [Rd] Is ALTREP "non-API"?
>>> 
>>> Writing R Extension[1] defines "API" as:
>>> 
>>>Entry points which are documented in this manual and declared in an
>>> installed header file. These can be used in distributed packages and will
>>> only be changed after deprecation.
>>> 
>>> But, the document (WRE) doesn't have even a single mention of ALTREP, the
>>> term "ALTREP" itself or any entry points related to ALTREP. Does this
>> mean,
>>> despite the widespread use of it on R packages including CRAN ones,
>> ALTREP
>>> is not the API and accordingly using it in distributed packages is
>>> considered illegal?
>>> 
>>> Best,
>>> Yutani
>>> 
>>> [1]:
>>> https://cran.r-project.org/doc/manuals/r-release/R-exts.html#The-R-API
>>> 
>>>[[alternative HTML version deleted]]
>>> 
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>> 
>>> 
>> 
>>[[alternative HTML version deleted]]
>> 
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
>> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] RSS Feed of NEWS needs a hand

2024-04-02 Thread Simon Urbanek
Duncan,

I have fixed up the repo with git restore-mtime -- I think that should solve it 
- please check if it did what we needed.

Cheers,
Simon


> On Apr 3, 2024, at 2:41 AM, Duncan Murdoch  wrote:
> 
> On 02/04/2024 8:50 a.m., Dirk Eddelbuettel wrote:
>> On 2 April 2024 at 07:37, Dirk Eddelbuettel wrote:
>> |
>> | On 2 April 2024 at 08:21, Duncan Murdoch wrote:
>> | | I have just added R-4-4-branch to the feeds.  I think I've also fixed
>> | | the \I issue, so today's news includes a long list of old changes.
>> |
>> | These feeds can fussy: looks like you triggered many updates. Feedly
>> | currently greets me with 569 new posts (!!) in that channel.
>> Now 745 -- and the bigger issue seems to be that the 'posted at' timestamp is
>> wrong and 'current' so all the old posts are now seen as 'fresh'. Hence the
>> flood ... of unsorted post.
>> blosxom, simple as it is, takes (IIRC) filesystem ctime as the posting
>> timestamp so would be best if you had a backup with the old timestamps.
> 
> Looks like those dates are gone -- the switch from svn to git involved some 
> copying, and I didn't preserve timestamps.
> 
> I'll see about regenerating the more recent ones.  I don't think there's much 
> historical interest in the pre-4.0 versions, so maybe I'll just nuke those.
> 
> Duncan Murdoch
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [R-pkg-devel] Package required but not available: ‘arrow’

2024-02-25 Thread Simon Urbanek
To quote Rob: "Version numbers are cheap"

The way the policy is worded, it is clear that you cannot complain if you 
didn't increase it, as you are taking a risk. Also, the incoming FTP server 
won't let you upload the same version twice, so it wasn't really a problem 
until more recently, when there are multiple different ways to submit. Either 
way, changing the policy to MUST is probably the best way to avoid race 
conditions and certainly the only good practice.

Cheers,
Simon


> On 25/02/2024, at 5:44 PM, Rolf Turner  wrote:
> 
> 
> On Fri, 23 Feb 2024 10:19:41 -0600
> Dirk Eddelbuettel  wrote:
> 
>> 
>> On 23 February 2024 at 15:53, Leo Mada wrote:
>> | Dear Dirk & R-Members,
>> | 
>> | It seems that the version number is not incremented:
>> | # Archived
>> | arrow_14.0.2.1.tar.gz   2024-02-08 11:57  3.9M
>> | # Pending
>> | arrow_14.0.2.1.tar.gz   2024-02-08 18:24  3.9M
>> | 
>> | Maybe this is the reason why it got stuck in "pending".
>> 
>> No it is not.
>> 
>> The hint to increase version numbers on re-submission is a weaker
>> 'should' or 'might', not a strong 'must'.
>> 
>> I have uploaded a few packages to CRAN over the last two decades, and
>> like others have made mistakes requiring iterations. I have not once
>> increased a version number.
> 
> That's as may be but IMHO (and AFAICS) it never hurts to increment the
> version number, even if you've only corrected a trivial glitch.
> 
> cheers,
> 
> Rolf
> 
> -- 
> Honorary Research Fellow
> Department of Statistics
> University of Auckland
> Stats. Dep't. (secretaries) phone:
> +64-9-373-7599 ext. 89622
> Home phone: +64-9-480-4619
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] CRAN uses an old version of clang

2024-02-11 Thread Simon Urbanek
Just to include the necessary details: the macOS CRAN build uses Apple 
clang-14, so you cannot assume anything newer. Also, the target is the macOS 
11 SDK.

That said, LLVM does not support the special math functions at all according to 
the status report (see Mathematical Special Functions for C++17 at 
https://libcxx.llvm.org/Status/Cxx17.html) so Boost is your best bet.

BTW: this is not a Mac thing - you can replicate it on any other system, e.g. 
on Linux:

$ clang++-17 -std=c++17 -stdlib=libc++ bes.cc
bes.cc:11:49: error: no member named 'cyl_bessel_k' in namespace 'std'
   11 | std::cout << "K_.5(" << x << ") = " << std::cyl_bessel_k(.5, x) << '\n'
bes.cc:13:35: error: no member named 'cyl_bessel_i' in namespace 'std'
   13 |   << (pi / 2) * (std::cyl_bessel_i(-.5, x)
bes.cc:14:25: error: no member named 'cyl_bessel_i' in namespace 'std'
   14 |  - std::cyl_bessel_i(.5, x)) / std::sin(.5 * pi) << '\n';
3 errors generated.

Cheers,
Simon


> On 10/02/2024, at 8:04 AM, Marcin Jurek  wrote:
> 
> All this makes sense, thanks for your tips, everyone!
> 
> Marcin
> 
> On Fri, Feb 9, 2024 at 9:44 AM Dirk Eddelbuettel  wrote:
> 
>> 
>> On 9 February 2024 at 08:59, Marcin Jurek wrote:
>> | I recently submitted an update to my package. It previous version relied
>> on
>> | Boost for Bessel and gamma functions but a colleague pointed out to me
>> that
>> | they are included in the standard library beginning with the C++17
>> | standard.
>> 
>> There is an often overlooked bit of 'fine print': _compiler support_ for a
>> C++ standard is not the same as the _compiler shipping a complete library_
>> for that same standard. This can be frustrating. See the release notes for
>> gcc/g++ and clang/clang++, IIRC they usually have a separate entry for C++
>> library support.
>> 
>> In this case, can probably rely on LinkingTo: BH which has been helping
>> with
>> Boost headers for over a decade.
>> 
>> Writing R Extensions is also generally careful in reminding us that such
>> language standard support is always dependent on the compiler at hand. So
>> package authors ought to check, just like R does via its extensive
>> configure
>> script when it builds.
>> 
>> Dirk
>> 
>> --
>> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
>> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [Rd] Advice debugging M1Mac check errors

2024-02-06 Thread Simon Urbanek



> On 7/02/2024, at 5:06 AM, Prof Brian Ripley via R-devel 
>  wrote:
> 
> On 04/02/2024 19:41, Holger Hoefling wrote:
>> Hi,
>> I wanted to ask if people have good advice on how to debug M1Mac package
>> check errors when you don´t have a Mac? Is a cloud machine the best option
>> or is there something else?
> 
> I presumed this was about a CRAN package, possibly hdf5r which has a 
> R-devel-only warning from the Apple clang compiler.  And that is not a 'check 
> error' and not something to 'debug'.
> 
> The original poster had errors for his package flsa until yesterday on 
> fedora-clang and M1mac, which were compilation errors with recent LLVM and 
> Apple compilers.  Again, not really something to 'debug' -- the compiler 
> messages were clear and the CRAN notification contained advice on where in 
> our manual to look this up.
> 
> The mac-builder service offers checks for R 4.3.0, the 'development' option 
> being (last time I tried) the same as the 'release' option. (When I asked, 
> Simon said that 'development' checks were only available in the run up to a 
> x.y.0 when he starts package building and checks for R-devel.)
> 


Just to clarify, the above is outdated information - ever since the R-devel 
package binaries appeared on CRAN, the "development" option in the mac-builder 
has been available.

Cheers,
Simon



> 
> We were left to guess, but I doubt this has to do with the lack of 'extended 
> precision' nor long doubles longer than doubles on arm64 macOS.  And issues 
> with that are rather rare (much rarer than numerical issues for non-reference 
> x86_64 BLAS/LAPACKs).  Of the 20,300 CRAN packages just 18 have 
> M1mac-specific errors, none obviously from numerical inaccuracy.  A quick 
> look back suggests we get about 20 a year with M1mac numerical issues, about 
> half of which were mirrored on the x86_64 'noLD' checks.
> 
> -- 
> Brian D. Ripley,  rip...@stats.ox.ac.uk
> Emeritus Professor of Applied Statistics, University of Oxford
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Advice debugging M1Mac check errors

2024-02-04 Thread Simon Urbanek



> On Feb 5, 2024, at 12:26 PM, Duncan Murdoch  wrote:
> 
> Hi John.
> 
> I don't think the 80 bit format was part of IEEE 754; I think it was an Intel 
> invention for the 8087 chip (which I believe preceded that standard), and 
> didn't make it into the standard.
> 
> The standard does talk about 64 bit and 128 bit floating point formats, but 
> not 80 bit.
> 

Yes, the 80-bit format was Intel-specific (motivated by internal operations, 
not intended as an external format), but as Intel used to be the most popular 
architecture, people didn't quite realize that tests relying on Intel results 
would be Intel-specific (PowerPC Macs had 128-bit floating point, but they were 
not popular enough to cause trouble in the same way). The IEEE standard allows 
"extended precision" formats, but doesn't prescribe their format or precision - 
and they are optional. Arm64 CPUs only support 64-bit double precision in 
hardware (true both on macOS and Windows), so only what is in the basic 
standard. There are 128-bit floating point solutions in software, but, 
obviously, they are a lot slower (several orders of magnitude). Apple has been 
asking the scientific community for priorities, and 128-bit floating-point 
support was not high on people's priority list. It is far from trivial, because 
there is a long list of operations (all variations of the math functions), so I 
wouldn't expect this to change anytime soon - in fact, once Microsoft's glacial 
move is done we'll likely be seeing only 64-bit everywhere.

That said, even if you don't have an arm64 CPU, you can build R with 
--disable-long-double to get closer to the arm64 results if that is your worry.

Cheers,
Simon


> 
> On 04/02/2024 4:47 p.m., J C Nash wrote:
>> Slightly tangential: I had some woes with some vignettes in my
>> optimx and nlsr packages (actually in examples comparing to OTHER
>> packages) because the M? processors don't have 80 bit registers of
>> the old IEEE 754 arithmetic, so some existing "tolerances" are too
>> small when looking to see if is small enough to "converge", and one
>> gets "did not converge" type errors. There are workarounds,
>> but the discussion is beyond this post. However, worth awareness that
>> the code may be mostly correct except for appropriate tests of
>> smallness for these processors.
>> JN
>> On 2024-02-04 11:51, Dirk Eddelbuettel wrote:
>>> 
>>> On 4 February 2024 at 20:41, Holger Hoefling wrote:
>>> | I wanted to ask if people have good advice on how to debug M1Mac package
>>> | check errors when you don´t have a Mac? Is a cloud machine the best option
>>> | or is there something else?
>>> 
>>> a) Use the 'mac builder' CRAN offers:
>>> https://mac.r-project.org/macbuilder/submit.html
>>> 
>>> b) Use the newly added M1 runners at GitHub Actions,
>>> 
>>> https://github.blog/changelog/2024-01-30-github-actions-introducing-the-new-m1-macos-runner-available-to-open-source/
>>> 
>>> Option a) is pretty good as the machine is set up for CRAN and builds
>>> fast. Option b) gives you more control should you need it.
>>> 
>>> Dirk
>>> 
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [R-pkg-devel] Warnings from upstream C library in CRAN submission

2024-02-03 Thread Simon Urbanek
Satyaprakash,

those are clear bugs in the SUNDIALS library - they assume that the "unsigned 
long" type is 64-bit wide (that assumption is also mentioned in the comments), 
but there is no such guarantee and on Windows it is only 32-bit wide, so the 
code has to be changed to replace "unsigned long" with the proper unsigned 
64-bit type, which is uint64_t. The code is simply wrong and won't work unless 
those issues are solved, so those are not just warnings but actual errors. It 
would also be prudent to check the rest of the code in the library for similar 
incorrect use of the "long" type where 64-bit use was intended.

Cheers,
Simon


> On Feb 4, 2024, at 4:38 AM, Satyaprakash Nayak  wrote:
> 
> Hi
> 
> I had a package 'sundialr' which was archived from CRAN. It is an interface
> to some of the solvers in SUNDIALS ODE Solving library. I have fixed the
> issue which was related to emails being forwarded from the maintainer's
> email address.
> 
> The repository code can be found at - https://github.com/sn248/sundialr
> 
> I have updated the upstream library and now I am getting the following
> warnings from CRAN which are all related to the upstream library. The
> package compiles without any other issues and can be used.
> 
> Flavor: r-devel-windows-x86_64
> Check: whether package can be installed, Result: WARNING
>  Found the following significant warnings:
>./sundials/sundials/sundials_hashmap.h:26:48: warning: conversion from
> 'long long unsigned int' to 'long unsigned int' changes value from
> '14695981039346656037' to '2216829733' [-Woverflow]
>./sundials/sundials/sundials_hashmap.h:27:48: warning: conversion from
> 'long long unsigned int' to 'long unsigned int' changes value from
> '1099511628211' to '435' [-Woverflow]
>sundials/sundials/sundials_hashmap.h:26:48: warning: conversion from
> 'long long unsigned int' to 'long unsigned int' changes value from
> '14695981039346656037' to '2216829733' [-Woverflow]
>sundials/sundials/sundials_hashmap.h:27:48: warning: conversion from
> 'long long unsigned int' to 'long unsigned int' changes value from
> '1099511628211' to '435' [-Woverflow]
>sundials/sundials/sundials_profiler.c:71:24: warning: function
> declaration isn't a prototype [-Wstrict-prototypes]
>  See 'd:/RCompile/CRANincoming/R-devel/sundialr.Rcheck/00install.out' for
> details.
>  Used C++ compiler: 'g++.exe (GCC) 12.3.0'
> 
> Flavor: r-devel-linux-x86_64-debian-gcc
> Check: whether package can be installed, Result: WARNING
>  Found the following significant warnings:
>sundials/sundials/sundials_profiler.c:71:41: warning: a function
> declaration without a prototype is deprecated in all versions of C
> [-Wstrict-prototypes]
>  See '/srv/hornik/tmp/CRAN/sundialr.Rcheck/00install.out' for details.
>  Used C++ compiler: 'Debian clang version 17.0.6 (5)'
> 
> I am hesitant to change anything in the SUNDIALS library C code because I
> don't understand the consequences of changing anything there.
> 
> Any help will be kindly appreciated.
> 
> Thank you.
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [Rd] [Feature Request] Hide API Key in download.file() / R's libcurl

2024-02-03 Thread Simon Urbanek
Any reason why you didn't use quiet=TRUE to suppress that output?

There is no official API structure for credentials in R repositories, so R has 
no way of knowing which parts of the URL are credentials, as that is not under 
R's purview - they could be part of the path or anything else, so there is no 
way R can reliably mask them. Hence it makes more sense for the user to 
suppress the output if they think it may contain sensitive information - and R 
supports that.

If that's still not enough, then please make a concrete proposal that defines 
exactly what kind of processing you'd like to see under what conditions - and 
how you think that will solve the problem.

Cheers,
Simon



> On Feb 2, 2024, at 5:28 AM, Xinyi  wrote:
> 
> Hi all,
> 
> When trying to install a package from R using install.packages(), it will
> print out the full url address (of the remote repository) it was trying to
> access. A bit further digging shows it is from the in_do_curlDownload
> method from R's libcurl
> :
> install.packages() calls download.packages(), and download.packages() calls
> download.file(), which uses "libcurl" as its default method.
> 
> This line from R mirror
> 
> ("if (!quiet) REprintf(_("trying URL '%s'\n"), url);")  prints the full url
> it is trying to access.
> 
> This is totally fine for public urls without credentials, but in the case
> that a given url contains an API key, it poses security issues. For
> example, if the getOption("repos") has been overridden to a
> customized repository (protected by API keys), then
>> install.packages("zoo")
> Installing packages into '--removed local directory path--'
> trying URL 'https://--removed userid--:--removed
> api-ke...@repository-addresss.com:4443/.../src/contrib/zoo_1.8-12.tar.gz  '
> Content type 'application/x-gzip' length 782344 bytes (764 KB)
> ===
> downloaded 764 KB
> 
> * installing *source* package 'zoo' ...
> -- further logs removed --
>> 
> 
> I also tried several other options:
> 
> 1. quite=1
>> install.packages("zoo", quite=1)
> It did hide the url, but it also hid all other useful information.
> 2. method="curl"
>> install.packages("zoo", method="curl")
> This does not print the url when the download is successful, but if there
> were any errors, it still prints the url with API key in it.
> 3. method="wget"
>> install.packages("zoo", method="wget")
> This hides API key by *password*, but I wasn't able to install packages
> with this method even with public repos, with the error "Warning: unable to
> access index for repository https://cloud.r-project.org/src/contrib/4.3:
> 'wget' call had nonzero exit status"
> 
> 
> In other dynamic languages' package managers like Python's pip, API keys
> are hidden by default since pip 18.x in 2018, and masked by "" from pip
> 19.x in 2019, see below examples. Can we get a similar default behaviour in
> R?
> 
> 1. with pip 10.x
> $ pip install numpy -v # API key was not hided
> Looking in indexes:  https://--removed userid--:--removed
> api-ke...@repository-addresss.com:4443/.../pypi/simple
> 2. with pip 18.x # All credentials are removed by pip
> $ pip install numpy -v
> Looking in indexes:  https://repository-addresss.com:4443/
> .../pypi/simple
> 3. with pip 19.x onwards # userid is kept, API key is replaced by 
> $ pip install numpy -v
> Looking in indexes:  https://userid:@
> repository-addresss.com:4443/.../pypi/simple
> 
> 
> I was instructed by https://www.r-project.org/bugs.html that I should get
> some discussion on r-devel before filing a feature request. So looking
> forward to comments/suggestions.
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [R-pkg-devel] Clarifying CRAN's external libraries policy

2024-01-29 Thread Simon Urbanek
Nic,

as far as I can see, that thread clearly concluded that this is not a special 
case that would require external binary downloads.

Cheers,
Simon


> On Jan 30, 2024, at 11:11 AM, Nic Crane  wrote:
> 
> Hi Simon,
> 
> The email that Neal is referring to was sent by me (this email
> address) to c...@r-project.org on Mon, 23 Oct 2023.
> 
> Thanks,
> 
> Nic
> 
> 
> On Mon, 29 Jan 2024 at 18:51, Simon Urbanek  
> wrote:
>> 
>> Neal,
>> 
>> generally, binaries are not allowed since CRAN cannot check their 
>> provenance, so it's not worth the risk, and it's close to impossible to 
>> maintain them over time across different systems, toolchains and 
>> architectures as they evolve. Historically, some packages were allowed to 
>> provide binaries (e.g., back when the Windows toolchain was not as complete 
>> and there was only the Win32 target, it was more common to supply a Windows 
>> binary) and CRAN was more lenient, but this should be avoided nowadays as 
>> it proved simply too fragile.
>> 
>> As Andrew pointed out in special circumstances you can use external 
>> hash-checked *source* tar balls, but generally you should provide sources in 
>> the package.
>> 
>> I do not see any e-mail from you to c...@r-project.org about this, so please 
>> make sure you are using the correct e-mail if you intend to plead your case.
>> 
>> Cheers,
>> Simon
>> 
>> 
>> 
>>> On Jan 30, 2024, at 3:11 AM, Neal Richardson  
>>> wrote:
>>> 
>>> Hi,
>>> CRAN's policy on using external C/C++/Fortran/other libraries says:
>>> 
>>> "Where a package wishes to make use of a library not written solely for the
>>> package, the package installation should first look to see if it is already
>>> installed and if so is of a suitable version. In case not, it is desirable
>>> to include the library sources in the package and compile them as part of
>>> package installation. If the sources are too large, it is acceptable to
>>> download them as part of installation, but do ensure that the download is
>>> of a fixed version rather than the latest. Only as a last resort and with
>>> the agreement of the CRAN team should a package download pre-compiled
>>> software."
>>> 
>>> Apologies if this is documented somewhere I've missed, but how does one get
>>> CRAN's agreement to download pre-compiled software? A project I work with
>>> has been seeking permission since October, but emails to both
>>> c...@r-project.org and cran-submissi...@r-project.org about this have not
>>> been acknowledged.
>>> 
>>> I recognize that this mailing list is not CRAN, but I was hoping someone
>>> here might know the right way to reach the CRAN team to provide a judgment
>>> on such a request.
>>> 
>>> Thank you,
>>> Neal
>>> 
>>>  [[alternative HTML version deleted]]
>>> 
>>> __
>>> R-package-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-package-devel
>>> 
>> 
>> __
>> R-package-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Clarifying CRAN's external libraries policy

2024-01-29 Thread Simon Urbanek
Neal,

generally, binaries are not allowed since CRAN cannot check their provenance, 
so it's not worth the risk, and it's close to impossible to maintain them over 
time across different systems, toolchains and architectures as they evolve. 
Historically, some packages were allowed to provide binaries (e.g., back when 
the Windows toolchain was not as complete and there was only the Win32 target, 
it was more common to supply a Windows binary) and CRAN was more lenient, but 
this should be avoided nowadays as it proved simply too fragile.

As Andrew pointed out in special circumstances you can use external 
hash-checked *source* tar balls, but generally you should provide sources in 
the package.

I do not see any e-mail from you to c...@r-project.org about this, so please 
make sure you are using the correct e-mail if you intend to plead your case.

Cheers,
Simon



> On Jan 30, 2024, at 3:11 AM, Neal Richardson  
> wrote:
> 
> Hi,
> CRAN's policy on using external C/C++/Fortran/other libraries says:
> 
> "Where a package wishes to make use of a library not written solely for the
> package, the package installation should first look to see if it is already
> installed and if so is of a suitable version. In case not, it is desirable
> to include the library sources in the package and compile them as part of
> package installation. If the sources are too large, it is acceptable to
> download them as part of installation, but do ensure that the download is
> of a fixed version rather than the latest. Only as a last resort and with
> the agreement of the CRAN team should a package download pre-compiled
> software."
> 
> Apologies if this is documented somewhere I've missed, but how does one get
> CRAN's agreement to download pre-compiled software? A project I work with
> has been seeking permission since October, but emails to both
> c...@r-project.org and cran-submissi...@r-project.org about this have not
> been acknowledged.
> 
> I recognize that this mailing list is not CRAN, but I was hoping someone
> here might know the right way to reach the CRAN team to provide a judgment
> on such a request.
> 
> Thank you,
> Neal
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Possible malware(?) in a vignette

2024-01-27 Thread Simon Urbanek
First, let's take a step back, because I think there is way too much confusion 
here.

The original report was about the vignette from the poweRlaw package version 
0.70.6. That package contains a vignette file d_jss_paper.pdf with the SHA256 
hash 9486d99c1c1f2d1b06f0b6c5d27c54d4f6e39d69a91d7fad845f323b0ab88de9 (md5 
e0439db551e1d34e9bf8713fca27887b). This is the same file that would be 
available for download from the web view until the new version was published. 
However, I assume we are talking about the same file based on the fact that 
Iñaki's VirusTotal URL has exactly the same hash, i.e., web view and the 
package are identical (I also checked the other hashes just to be really sure). 
That's why I think we're barking up the wrong tree here since this is not about 
cache poisoning, file swaps or anything like that - the file has never been 
modified - it is the same file that was submitted to CRAN in 2020.

That's why I was saying that this most likely has nothing to do with CRAN at 
all; rather, the question is whether that old file has included some malware 
for the last 4 years or whether the AV software is simply misclassifying it as 
a false positive. I'm not a security expert, but based on the little 
information available and inspection of the streams I came to the conclusion 
that it's likely a false-positive. The main reason I think so is that 
submitting the exact same *identical* PDF payload with just a one-byte change 
to the /ID (which is functionally not used by Acrobat) results in the file NOT 
being flagged as malicious by any of the security vendors on VirusTotal. That 
said, I'm not a security expert, so I may be wrong or missing something; that's 
why I was asking for someone with more expertise to actually look at the file, 
as opposed to just trusting auto-generated reports that may be wrong - that is 
beyond my power.

(Also if it turns out that the file did contain malware, it would be good to 
know what we can do - for example, nowadays we are re-compressing streams 
and/or filtering through GS, so one could imagine that this could also be 
effective at removing PDF malware - if it is real.)

More responses inline.


> On Jan 28, 2024, at 1:10 AM, Bob Rudis  wrote:
> 
> Simon: Is there a historical record of the hashes of just the PDFs
> that show up in the CRAN web view?
> 

Not the website, but hashes are recorded in the packages - so you can verify 
that the file has not changed for years (I can directly confirm it has not 
changed as far back as May 2021).


> Ivan: do you know what mirror NOAA used at that time to get that version of
> the package? Or, did they pull it "directly" from cran.r-project.org
> (scare-quotes only b/c DNS spoofing is and has been a pretty solid attack
> vector)?
> 
> I've asked the infosec community if anyone has VT Enterprise to do a
> historical search on any PDFs that come directly from cran.r-project.org (I
> don't have VT Enterprise). It is possible there are other PDFs from that
> timeframe with similar issues (again, not saying CRAN had any issues; this
> could still be crawler cache poisoning).
> 
> I don't know if any university folks have grad student labor to harness,
> but having a few of them do some archive.org searches for other PDFs in
> that timeframe, and note the source of the archive (likely Common Crawl) if
> there are other real issues, that'd be a solid path forward for triage.
> 
> The fact that the current PDF on CRAN — which uses some of the same
> 7-year-old PDF & JPEG images from
> https://github.com/csgillespie/poweRlaw/tree/main/vignettes — is not being
> flagged, means it's likely not an issue with Colin's sources.
> 
> Simon: it might be a good idea for all *.r-project.org sites to set up CAA
> records (
> https://en.wikipedia.org/wiki/DNS_Certification_Authority_Authorization)
> since that could help prevent adjacent TLS spoofing.
> 
> Also having something running — https://github.com/SSLMate/certspotter —
> can let y'all know if certs are created for *.r-project.org domains. That
> won't help for well-resourced attacks, but it does add some layers that may
> give a heads-up for any mid-grade spoofing attacks.
> 


All well meant, but remember that CRAN is mirrored worldwide, we have control 
pretty much only over the WU master. That said, we can have a look, but DNS 
changes are not as easy as you would think.

Cheers,
Simon

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Possible malware(?) in a vignette

2024-01-27 Thread Simon Urbanek
Iñaki,

> On Jan 27, 2024, at 11:44 PM, Iñaki Ucar  wrote:
> 
> Simon,
> 
> Please re-read my email. I did *not* say that CRAN *generated* that file. I 
> said that CRAN *may* be compromised (some virus may have modified files).
> 


I guess I should have been more clear in my response: the file could not have 
been modified by CRAN, because the package files are checksummed (the hashes 
match) so that's how we know this could not have been a virus on the CRAN 
machine.


> I did *not* claim that the report was necessarily 100% accurate. But "that 
> page I linked" was created by a security firm, and it would be wise to 
> further investigate any potential threat reported there, which is what I was 
> suggesting.
> 


I appreciate the report, there was no objection to that. Unfortunately, the 
report has turned out to have virtually no useful information that would make 
it possible for us to investigate. The little information it provided has 
proven to be false (at least as much as could be gleaned from the tags), so 
unless we can get some real security expert to give us more details, there is 
not much more we can do, given that the file is no longer distributed. And 
without more detailed information about the threat, it's hard to see what 
steps we could take.

Back to my main original point - as far as CRAN machines are concerned, we did 
check the integrity of the files, machines and tools and found no link there. 
Hence the only path left is to get more details on the particular file to see 
if it is indeed malware and, if so, whether it was just some random infection 
at the source or something bigger, as Bob hinted - some compromised material 
that may have been circulating in the community.

Cheers,
Simon



> I don't think these are "false claims".
> 
> Iñaki
> 
> El sáb., 27 ene. 2024 11:19, Simon Urbanek  <mailto:simon.urba...@r-project.org>> escribió:
> Bob,
> 
> I was not making assertions, I was only dismissing clearly false claims: CRAN 
> did NOT generate the file in question, it is not a ZIP file trojan as 
> indicated by the AV flags and content inspection did not reveal any other 
> streams than what is usual in pdflatex output. The information about the 
> alleged malware was terribly vague and incomplete to put it mildly so if you 
> have any additional forensic information that sheds more light on whether 
> this was a malware or not, it would be welcome. If it was indeed one, knowing 
> what kind would help to see how any other instances could be detected. Please 
> contact the CRAN team if you have any such information and we can take it 
> from there.
> 
> As you hinted yourself - there is no such thing as absolute safety - as the 
> webp exploits have illustrated very clearly, a simple image can be malware and 
> the only real defense is to keep your software up to date.
> 
> Cheers,
> Simon
> 
> 
> 
> > On Jan 27, 2024, at 9:52 PM, Bob Rudis mailto:b...@rud.is>> 
> > wrote:
> > 
> > The current one on CRAN does get flagged for some low-level Sigma rules 
> > because of the way a few URLs interact. I don't know if F-Secure is pedantic 
> > enough to call that malicious (it probably is, though). The *current* PDF 
> > is "fine".
> > 
> > There is a major problem with the 2020 version. The file Iñaki's URL 
> > matches the PDF that I grabbed from the Wayback Machine for the 2020 PDF 
> > from that URL.
> > 
> > Simon's assertion about this *2020* file is flat out wrong. It's very bad.
> > 
> > Two VT sandboxes used Adobe Acrobat Reader to open the PDF, and the PDF 
> > seems to have either contained malicious JavaScript or been crafted to 
> > cause a buffer overflow in Reader that then let it perform other 
> > functions on those sandboxes.
> > 
> > They are most certainly *not* false positives, and dismissing that outright 
> > is not great.
> > 
> > I'm not going to check every 2020 PDF from CRAN, but this is a big signal 
> > to me there was an issue *somewhere* in that time period.
> > 
> > I do not know what cran.r-project.org <http://cran.r-project.org/> resolved 
> > to for the Common Crawl at that date (which is where archive.org 
> > <http://archive.org/> picked it up to archive for the 2020 PDF version). I 
> > highly doubt the Common Crawl DNS resolution process was spoofed _just for 
> > that PDF URL_, but it may have been for CRAN in general or just "in 
> > general" during that crawl period.
> > 
> > It is also possible some malware hit CRAN during portions of that time 
> > period and infected more than one PDF.
> > 
> > But, outright suggesting there is no issue was not the way to go, here.

Re: [R-pkg-devel] Possible malware(?) in a vignette

2024-01-27 Thread Simon Urbanek
Bob,

I was not making assertions, I was only dismissing clearly false claims: CRAN 
did NOT generate the file in question, it is not a ZIP file trojan as indicated 
by the AV flags and content inspection did not reveal any other streams than 
what is usual in pdflatex output. The information about the alleged malware was 
terribly vague and incomplete to put it mildly so if you have any additional 
forensic information that sheds more light on whether this was a malware or 
not, it would be welcome. If it was indeed one, knowing what kind would help to 
see how any other instances could be detected. Please contact the CRAN team if 
you have any such information and we can take it from there.

As you hinted yourself - there is no such thing as absolute safety - as the 
webp exploits have illustrated very clearly, a simple image can be malware and 
the only real defense is to keep your software up to date.

Cheers,
Simon



> On Jan 27, 2024, at 9:52 PM, Bob Rudis  wrote:
> 
> The current one on CRAN does get flagged for some low-level Sigma rules 
> because of the way a few URLs interact. I don't know if F-Secure is pedantic 
> enough to call that malicious (it probably is, though). The *current* PDF is 
> "fine".
> 
> There is a major problem with the 2020 version. The file Iñaki's URL matches 
> the PDF that I grabbed from the Wayback Machine for the 2020 PDF from that 
> URL.
> 
> Simon's assertion about this *2020* file is flat out wrong. It's very bad.
> 
> Two VT sandboxes used Adobe Acrobat Reader to open the PDF, and the PDF seems 
> to have either contained malicious JavaScript or been crafted to cause a 
> buffer overflow in Reader that then let it perform other functions on those 
> sandboxes.
> 
> They are most certainly *not* false positives, and dismissing that outright 
> is not great.
> 
> I'm not going to check every 2020 PDF from CRAN, but this is a big signal to 
> me there was an issue *somewhere* in that time period.
> 
> I do not know what cran.r-project.org resolved to for the Common Crawl at 
> that date (which is where archive.org picked it up to archive for the 2020 
> PDF version). I highly doubt the Common Crawl DNS resolution process was 
> spoofed _just for that PDF URL_, but it may have been for CRAN in general or 
> just "in general" during that crawl period.
> 
> It is also possible some malware hit CRAN during portions of that time period 
> and infected more than one PDF.
> 
> But, outright suggesting there is no issue was not the way to go, here. And, 
> someone should likely at least poke at more 2020 PDFs from CRAN vignette 
> builds (perhaps just the ones built that were JSS articles…it's possible the 
> header image sourced at that time was tampered with during some time window, 
> since image decoding issues have plagued Adobe Reader in buffer overflow land 
> for a long while).
> 
> - boB
> 
> 
> On Thu, Jan 25, 2024 at 9:44 PM Simon Urbanek  
> wrote:
> Iñaki,
> 
> I think you got it backwards in your conclusions: CRAN has not generated that 
> PDF file (and Windows machines are not even involved here), it is the 
> contents of a contributed package, so CRAN itself is not compromised. Also it 
> is far from clear that it is really a malware - in fact it's certainly NOT 
> what the website you linked claims as those tags imply trojans disguising 
> ZIPped executables as PDF, but the file is an actual valid PDF and not even 
> remotely a ZIP file (in fact it is consistent with pdflatex output). I looked 
> at the decompressed payload of the PDF and the only binary payload are 
> embedded fonts so my guess would be that some byte sequence in the fonts gets 
> detected as false-positive trojan, but since there is no detail on the report 
> we can just guess. False-positives are a common problem and this would not be 
> the first one. A further indication that it's a false-positive is that simply 
> re-packaging the streams (i.e. NOT changing the actual PDF contents) makes the 
> same file pass the tests as clean.
> 
> Also note that there is a bit of a confusion as the currently released 
> version (poweRlaw 0.80.0) does not get flagged, so it is only the archived 
> version (from 2020).
> 
> Cheers,
> Simon
> 
> 
> 
> > On 26/01/2024, at 12:02 AM, Iñaki Ucar  wrote:
> > 
> > On Thu, 25 Jan 2024 at 10:13, Colin Gillespie  wrote:
> >> 
> >> Hi All,
> >> 
> >> I've had two emails from users in the last 24 hours about malware
> >> around one of my vignettes. A snippet from the last user is:
> >> 
> >> ---
> >> I was trying to install a R package that depends on PowerRLaw two
> >> weeks ago.  However my virus protection software 

Re: [R-pkg-devel] Possible malware(?) in a vignette

2024-01-25 Thread Simon Urbanek
Iñaki,

I think you got it backwards in your conclusions: CRAN has not generated that 
PDF file (and Windows machines are not even involved here), it is the contents 
of a contributed package, so CRAN itself is not compromised. Also it is far 
from clear that it is really a malware - in fact it's certainly NOT what the 
website you linked claims as those tags imply trojans disguising ZIPped 
executables as PDF, but the file is an actual valid PDF and not even remotely a 
ZIP file (in fact it is consistent with pdflatex output). I looked at the 
decompressed payload of the PDF and the only binary payload are embedded fonts 
so my guess would be that some byte sequence in the fonts gets detected as 
false-positive trojan, but since there is no detail on the report we can just 
guess. False-positives are a common problem and this would not be the first 
one. A further indication that it's a false-positive is that simply 
re-packaging the streams (i.e. NOT changing the actual PDF contents) makes the 
same file pass the tests as clean.

Also note that there is a bit of a confusion as the currently released version 
(poweRlaw 0.80.0) does not get flagged, so it is only the archived version 
(from 2020).

Cheers,
Simon



> On 26/01/2024, at 12:02 AM, Iñaki Ucar  wrote:
> 
> On Thu, 25 Jan 2024 at 10:13, Colin Gillespie  wrote:
>> 
>> Hi All,
>> 
>> I've had two emails from users in the last 24 hours about malware
>> around one of my vignettes. A snippet from the last user is:
>> 
>> ---
>> I was trying to install a R package that depends on PowerRLaw two
>> weeks ago.  However my virus protection software F secure did not
>> allow me to install it from CRAN, while installation from GitHub
>> worked normally. Virus protection software claimed that
>> d_jss_paper.pdf is compromised. I asked about this from our IT support
>> and they asked it from the company F secure. Now F secure has analysed
>> the file and according them it is malware.
>> 
>> “Upon analyzing, our analysis indicates that the file you submitted is
>> malicious. Hence the verdict will remain
> 
> See 
> https://www.virustotal.com/gui/file/9486d99c1c1f2d1b06f0b6c5d27c54d4f6e39d69a91d7fad845f323b0ab88de9/behavior
> 
> According to the sandboxed analysis, there's something there trying to
> tamper with the Acrobat installation. It tries several Windows paths.
> That's not good.
> 
> The good news is that, if I recreate the vignette from your repo, the
> file is different, different hash, and it's clean.
> 
> The bad news is that... this means that CRAN may be compromised. I
> urge CRAN maintainers to check all the PDF vignettes and scan the
> Windows machines for viruses.
> 
> Best,
> Iñaki
> 
> 
>> 
>> ---
>> 
>> Other information is:
>> 
>> * Package in question:
>> https://cran.r-project.org/web/packages/poweRlaw/index.html
>> * Package hasn't been updated for three years
>> * Vignette in question:
>> https://cran.r-project.org/web/packages/poweRlaw/vignettes/d_jss_paper.pdf
>> 
>> CRAN asked me to fix
>> https://cran.r-project.org/web/checks/check_results_poweRlaw.html a
>> couple of days ago - which I'm in the process of doing.
>> 
>> Any ideas?
>> 
>> Thanks
>> Colin
>> 
>> __
>> R-package-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 
> 
> 
> -- 
> Iñaki Úcar
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] R-package-devel Digest, Vol 105, Issue 19

2024-01-24 Thread Simon Urbanek
This is a reminder why one should never build packages directly in their source 
directory since it can only be done once (for packages with native source code) 
- always use

R CMD build --no-build-vignettes foo && R CMD INSTALL foo_*.tar.gz

if you plan to edit files in the source directory and re-use it.
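
A minimal sketch of that workflow (the `clean_src`/`rebuild` helper names are 
made up, and the package directory is whatever you pass in); `clean_src` also 
clears any stale objects that an earlier in-source build left behind:

```shell
# Remove stale compiled objects that a previous in-source build
# may have left behind in src/.
clean_src() {
  rm -f "$1"/src/*.o "$1"/src/*.so
}

# Full cycle: clean, build the source tarball, then install from the
# tarball - never R CMD INSTALL the source directory itself.
rebuild() {
  clean_src "$1"
  R CMD build --no-build-vignettes "$1" &&
    R CMD INSTALL "$1"_*.tar.gz
}
```

Installing from the tarball rather than the source directory keeps compiler 
output out of the tree you edit, which is exactly the failure mode described 
below.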

Cheers,
Simon


> On 25/01/2024, at 8:34 AM, Carl Schwarz  wrote:
> 
> Solved...
> 
> The src/ directory also included a .o and .so objects from the last build
> of the package that must be "out of date" because once I removed those
> older objects, the Build -> Document and build -> Check package now work
> fine without crashing...A newer version of the .o and .so objects are now
> built and it now works fine.
> 
> Thanks
> Carl Schwarz
> 
> On Wed, Jan 24, 2024 at 10:57 AM Carl Schwarz 
> wrote:
> 
>> Thanks for your suggestions.  I followed the suggestion in today's message
>> (see results below) which all run without issue.
>> I tried to isolate the problem more
>> 
>> The issue appears to be with load_dll()
>> 
>> When I try
>>> getwd()
>> [1] "/Users/cschwarz/Library/CloudStorage/Dropbox/SPAS-R/SPAS"
>>> load_dll()
>> 
>> It crashes.
>> 
>> 
>> I moved the package outside of CloudStorage to see if that is the issue.
>>> getwd()
>> [1] "/Users/cschwarz/Desktop/SPAS"
>>> load_dll()
>> 
>> It crashes.
>> 
>> 
>> I tried doing a dll_load() where there is NO c++ object, i.e. a random
>> directory and it terminates with a sensible error message
>>> setwd("/Users/cschwarz/Desktop/BikePics")
>>> library(pkgload)
>>> load_dll()
>> Error in `value[[3L]]()`:
>> ! Could not find a root 'DESCRIPTION' file that starts with '^Package' in
>> /Users/cschwarz/Desktop/BikePics.
>> ℹ Are you in your project directory and does your project have a
>> 'DESCRIPTION' file?
>> Run `rlang::last_trace()` to see where the error occurred.
>> 
>> I'm following the suggestions on including TMB code in a package at
>> 
>> https://stackoverflow.com/questions/48627069/guidelines-for-including-tmb-c-code-in-an-r-package
>> and appear to have all the necessary files
>> 
>> I created my own load_dll() function by copying over the code and adding a
>> browser().
>> It appears to run fine until the statement library.dynam2(path.lib) where
>> it cannot find the function library.dynam2
>> 
>> 
>>> my_load_dll()
>> Called from: my_load_dll()
>> Browse[1]> n
>> debug at #4: package <- pkg_name(path)
>> Browse[2]> n
>> debug at #5: env <- ns_env(package)
>> Browse[2]> n
>> debug at #6: nsInfo <- parse_ns_file(path)
>> Browse[2]> n
>> debug at #7: dlls <- list()
>> Browse[2]> n
>> debug at #8: dynLibs <- nsInfo$dynlibs
>> Browse[2]> n
>> debug at #9: nativeRoutines <- list()
>> Browse[2]> n
>> debug at #10: for (i in seq_along(dynLibs)) {
>>lib <- dynLibs[i]
>>dlls[[lib]] <- library.dynam2(path, lib)
>>routines <- assignNativeRoutines(dlls[[lib]], lib, env,
>> nsInfo$nativeRoutines[[lib]])
>>nativeRoutines[[lib]] <- routines
>>if (!is.null(names(nsInfo$dynlibs)) &&
>> nzchar(names(nsInfo$dynlibs)[i]))
>>env[[names(nsInfo$dynlibs)[i]]] <- dlls[[lib]]
>>setNamespaceInfo(env, "DLLs", dlls)
>> }
>> Browse[2]> n
>> debug at #11: lib <- dynLibs[i]
>> Browse[2]> n
>> debug at #12: dlls[[lib]] <- library.dynam2(path, lib)
>> Browse[2]> n
>> Error in library.dynam2(path, lib) :
>>  could not find function "library.dynam2"
>> 
>> I'm unable to find where the library.dynam2() function lies... A google
>> search for library.dynam2 doesn't show anything except for a cryptic
>> comment in
>> https://rdrr.io/cran/pkgload/src/R/load-dll.R
>> which says
>> 
>> ## The code below taken directly from base::loadNamespace
>>  ## 
>> https://github.com/wch/r-source/blob/tags/R-3-3-0/src/library/base/R/namespace.R#L466-L485
>>  ## except for the call to library.dynam2, which is a special version of
>>  ## library.dynam
>> 
>> This is now beyond my pay grade..
>> 
>> Suggestions?
>> 
>> 
>> --
>> 
>> From James Lamb 
>> 
>> Using the shell:
>> 
>> R CMD build .
>> - success with
>> 
>> * checking for file ‘./DESCRIPTION’ ... OK
>> 
>> * preparing ‘SPAS’:
>> 
>> * checking DESCRIPTION meta-information ... OK
>> 
>> * cleaning src
>> 
>> * installing the package to build vignettes
>> 
>> * creating vignettes ... OK
>> 
>> * cleaning src
>> 
>> * checking for LF line-endings in source and make files and shell scripts
>> 
>> * checking for empty or unneeded directories
>> 
>> * building ‘SPAS_2024.1.31.tar.gz’
>> 
>> 
>> R CMD INSTALL --with-keep.source ./SPAS_*.tar.gz
>> - success. Lots of warnings from the C compiler, but it appears to terminate 
>> successfully with
>> 
>> 
>> installing to /Users/cschwarz/Rlibs/00LOCK-SPAS/00new/SPAS/libs
>> 
>> ** R
>> 
>> ** inst
>> 
>> ** byte-compile and prepare package for lazy loading
>> 
>> ** help
>> 
>> *** installing help indices
>> 
>> ** building package indices
>> 
>> ** installing vignettes
>> 
>> ** testing if installed package can be loaded from 

Re: [Rd] Determining the size of a package

2024-01-17 Thread Simon Urbanek
William,

the check does not apply to binary installations (such as the Mac builds), 
because those depend heavily on the static libraries included in the package 
binary which can be quite big and generally cannot be reduced in size - for 
example:
https://www.r-project.org/nosvn/R.check/r-release-macos-arm64/terra-00check.html

Cheers,
Simon


> On Jan 18, 2024, at 12:26 PM, William Revelle  wrote:
> 
> Dear fellow developers,
> 
> Is there an easy way to determine how big my packages  (psych and psychTools) 
>  will be on various versions of CRAN?
> 
> I have been running into the dreaded "you are bigger than 5 MB" message for 
> some installations of R on CRAN but not others.  The particular problem seems 
> to be some of the mac versions (specifically r-oldrel-macos-arm64 and 
> r-release-macos-X64 )
> 
> When I build it on my Mac M1 it is well within the limits, but when pushing 
> to CRAN,  I run into the size message.
> 
> Is there a way I can find what the size will be on these various 
> implementations without bothering the nice people at CRAN.
> 
> Thanks.
> 
> William Revelle              personality-project.org/revelle.html
> Professor                    personality-project.org
> Department of Psychology     www.wcas.northwestern.edu/psych/
> Northwestern University      www.northwestern.edu/
> Use R for psychology         personality-project.org/r
> It is 90 seconds to midnight www.thebulletin.org
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [R-pkg-devel] CMake on CRAN Systems

2024-01-17 Thread Simon Urbanek
I had a quick look and that package (assuming it's 
https://github.com/stsds/MPCR) does not adhere to any rules from R-exts (hence 
the removal from CRAN, I presume), so the failure to detect cmake is the least 
of its problems. I would strongly recommend reading the R documentation, as 
cmake is just the wrong tool for the job in this case. R already has a fully 
working build system which will compile the package using the correct flags 
and tools - you only need to provide the C++ sources. You cannot generate the 
package shared object with cmake by definition - you must let R build it. [In 
rare cases, dependent static libraries are built with cmake inside the package 
if there is no other option and cmake is used upstream, but those are rare and 
you still have to use R to build the final shared object.]

Cheers,
Simon


> On Jan 17, 2024, at 8:54 PM, Ivan Krylov via R-package-devel 
>  wrote:
> 
> Dear Sameh,
> 
> Regarding your question about the MPCR package and the use of CMake
> :
> on a Mac, you have to look for the cmake executable in more than one
> place because it is not guaranteed to be on the $PATH. As described in
> Writing R Extensions
> , the
> following is one way to work around the problem:
> 
> if test -z "$CMAKE"; then CMAKE="`which cmake`"; fi
> if test -z "$CMAKE"; then
> CMAKE=/Applications/CMake.app/Contents/bin/cmake;
> fi
> if test ! -f "$CMAKE"; then echo "no ‘cmake’ command found"; exit 1; fi
> 
> Please don't reply to existing threads when starting a new topic on
> mailing lists. Your message had a mangled link that went to
> urldefense.com instead of cran-archive.r-project.org, letting Amazon
> (who host the website) know about every visit to the link:
> https://stat.ethz.ch/pipermail/r-package-devel/2024q1/010328.html
> 
> -- 
> Best regards,
> Ivan
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] test failure: oldrel

2024-01-16 Thread Simon Urbanek



> On Jan 17, 2024, at 3:46 AM, Josiah Parry  wrote:
> 
> Hey folks! I've received note that a package of mine is failing tests on
> oldrel.
> 
> Check results:
> https://www.r-project.org/nosvn/R.check/r-oldrel-windows-x86_64/arcgisutils-00check.html
> 
> I think I've narrowed it down to the way that I've written the test which
> uses `as.POSIXct(Sys.Date(), tz = "UTC")`.
> 

That's not where it fails - it fails in

today <- Sys.Date()
today_ms <- date_to_ms(today)
as.POSIXct(today_ms / 1000)

which is equivalent to

as.POSIXct(as.numeric(Sys.Date()) * 86400)

and that is only supported since R 4.3.0 - from NEWS:

   as.POSIXct(<numeric>) and as.POSIXlt(<numeric>) (without specifying origin) now work.

so you have to add R >= 4.3.0 or use .POSIXct() instead.

I didn't check your other tests so you may have more of the same ...

Cheers,
Simon


> If I understand the R release changelog correctly, this behavior did not
> exist prior to R 4.3.0.
> 
> as.POSIXlt() now does apply a tz (time zone) argument, as does
>> as.POSIXct(); partly suggested by Roland Fuß on the R-devel mailing list.
> 
> 
> https://cran.r-project.org/doc/manuals/r-release/NEWS.html
> 
> Does this check out? If so, would be more effective to modify the test to
> use a the character method of `as.POSIXct()`?
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] checking CRAN incoming feasibility

2024-01-16 Thread Simon Urbanek
Ralf,

that check always hangs for me (I don't think it likes NZ ;)), so I just use

_R_CHECK_CRAN_INCOMING_REMOTE_=0 R CMD check --as-cran ...

Cheers,
Simon


> On Jan 16, 2024, at 6:49 PM, Rolf Turner  wrote:
> 
> 
> On Tue, 16 Jan 2024 16:24:59 +1100
> Hugh Parsonage  wrote:
> 
>>> Surely the software just has to check
>> that there is web connection to a CRAN mirror.
>> 
>> Nope! The full code is in tools:::.check_package_CRAN_incoming  (the
>> body of which filled up my entire console), but to name a few checks
>> it has to do: check that the name of the package is not the same as
>> any other, including archived packages (which means that it has to
>> download the package metadata), make sure the licence is ok, see if
>> the version number is ok. 10 minutes is quite a lot though. I suspect
>> the initial connection may have been faulty.
> 
> Well, it may not have been 10 minutes, but it was at least 5.  The
> problem is persistent/repeatable.  I don't believe that there is any
> faulty connection.
> 
> Thanks for the insight.
> 
> cheers,
> 
> Rolf Turner
> 
> -- 
> Honorary Research Fellow
> Department of Statistics
> University of Auckland
> Stats. Dep't. (secretaries) phone:
> +64-9-373-7599 ext. 89622
> Home phone: +64-9-480-4619
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
>

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [Rd] Sys.which() caching path to `which`

2024-01-10 Thread Simon Urbanek
Harmen,

thanks for the additional details, it wasn't exactly clear what this is about. 
Ivan's post didn't mention that the issue here is the caching, not the path 
replacement which you are apparently already doing, now it makes more sense.

I still think it is dangerous as you have no way of knowing who else is caching 
values at installation time since there is no reason to assume that the system 
will change after installation - technically, it breaks the contract with the 
application. We are trying to get packages to not hard-code or cache paths, but 
that typically only applies to the package library location, not to system 
tools.

Cheers,
Simon


> On Jan 11, 2024, at 10:36 AM, Harmen Stoppels  wrote:
> 
> For context: I don't think Nix and Guix have to relocate anything, cause I 
> think they require absolute paths like /nix/store where all binaries go. 
> Spack on the other hand can install packages w/o sudo to a location of 
> choice, e.g. ~/spack/opt/spack. That's why we have to patch binaries.
> 
> However, Spack's relocation stuff is not about creating truly relocatable 
> binaries where everything is referenced by relative paths. We still use 
> absolute paths almost everywhere, it's just that they have to be rewired when 
> the location things are built is different from where they are installed.
> 
> I'm sure there are people who would like to have an actually fully 
> relocatable R, but that's not my goal.
> 
>> I would claim it is not an unreasonable expectation that the user doesn't 
>> delete tools after R was built. Obviously it is tedious, but Spack may need 
>> to patch all absolute paths if it wants to relocate things (it's not easy as 
>> it includes all linker paths as well FWIW) - they must be doing something 
>> like that already as even the R start-up script uses absolute paths
> 
> Basically Spack does (a) special handling of the dynamic section of ELF files 
> for Linux / FreeBSD, and load commands of mach-o files for macOS, (b) find & 
> replace of prefixes in text files / scripts and (c) somewhat fancy but still 
> basic replacement of C-strings containing prefixes in binaries.
> 
> This works reliably because the package prefixes contain hashes that make 
> false positives unlikely.
> 
> It's just that it does not work when the absolute path to be relocated is 
> captured inside serialized bytecode in a zlib-compressed database base.rdb :)
> 
> I believe `which` is the only occurrence of this.
> 
>> That said, WHICH is a mess - it may make sense to switch to the command -v 
>> built-in which is part of POSIX (where available - which is almost 
>> everywhere today) which would not require an external tool
> 
> That sounds like a decent solution to me, probably `command -v` is more 
> commonly available than `which`.
> 
>> I don't think this is the only problem Spack has... (and that's just core R 
>> - even a bigger can of worms with R packages :P).
> 
> We can deal with most issues, just not with compressed byte code.
> 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Sys.which() caching path to `which`

2024-01-10 Thread Simon Urbanek
Ivan,

I suspect that the `which' case is just the tip of the iceberg - generally, R 
expects all tools it detects at configure time to be callable, just to list a 
few from a running session:

PAGER   /usr/bin/less
R_BROWSER   /usr/bin/open
R_BZIPCMD   /usr/bin/bzip2
R_GZIPCMD   /usr/bin/gzip
R_PDFVIEWER /usr/bin/open
R_QPDF  /Library/Frameworks/R.framework/Resources/bin/qpdf
R_TEXI2DVICMD   /usr/local/bin/texi2dvi
R_UNZIPCMD  /usr/bin/unzip
R_ZIPCMD/usr/bin/zip
SED /usr/bin/sed
TAR /usr/bin/tar

I would claim it is not an unreasonable expectation that the user doesn't 
delete tools after R was built. Obviously it is tedious, but Spack may need to 
patch all absolute paths if it wants to relocate things (it's not easy as it 
includes all linker paths as well FWIW) - they must be doing something like 
that already as even the R start-up script uses absolute paths:

$ grep ^R_ /Library/Frameworks/R.framework/Resources/bin/R 
R_HOME_DIR=/Library/Frameworks/R.framework/Resources
R_HOME="${R_HOME_DIR}"
R_SHARE_DIR=/Library/Frameworks/R.framework/Resources/share
R_INCLUDE_DIR=/Library/Frameworks/R.framework/Resources/include
R_DOC_DIR=/Library/Frameworks/R.framework/Resources/doc
R_binary="${R_HOME}/bin/exec${R_ARCH}/R"

That said, WHICH is a mess - it may make sense to switch to the command -v 
built-in which is part of POSIX (where available - which is almost everywhere 
today) which would not require an external tool, but as noted I don't think 
this is the only problem Spack has... (and that's just core R - even a bigger 
can of worms with R packages :P).

Cheers,
Simon

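To make the `command -v` suggestion above concrete, here is a minimal shell sketch (assuming a POSIX shell and that `sed` is installed; it only illustrates the difference, not any actual change to Sys.which()):

```shell
# 'command -v' is a POSIX shell built-in, so resolving a tool through it
# needs no absolute path baked in at configure time.
builtin_result=$(command -v sed)
# The external tool whose configure-time path is currently cached:
external_result=$(which sed 2>/dev/null || echo "no external which")
echo "$builtin_result"
echo "$external_result"
```

Because the built-in is part of the shell itself, it keeps working after relocation, which is exactly the property Spack needs.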

> On Jan 11, 2024, at 8:43 AM, Ivan Krylov via R-devel  
> wrote:
> 
> Hello R-devel,
> 
> Currently on Unix-like systems, Sys.which incorporates the absolute
> path to the `which` executable, obtained at the configure stage:
> 
>>   ## hopefully configure found [/usr]/bin/which
>>   which <- "@WHICH@"
>>   if (!nzchar(which)) {
>>   warning("'which' was not found on this platform")
> 
> This poses a problem for the Spack package manager and software
> distribution. In Spack, like in Nix, Guix, and GoboLinux, packages live
> under their own path prefixes, which look like the following:
> 
>>> /opt/spack/opt/spack/linux-ubuntu18.04-x86_64_v3/gcc-7.5.0/r-4.3.0-eqteloqhjzix6ta373ruzt5imvvbcesc
> 
> Unfortunately, Spack packages are expected to get relocated, changing
> the path prefix and invalidating stored paths, including the path to
> `which`: .
> 
> Harmen Stoppels, who is not subscribed to R-devel but interested in
> making R work in Spack, currently creates a symlink to `which`
>  as part of a patch to R.
> 
> What would be the minimally disruptive way to avoid this dependency or
> at least make it easier to fix post-factum, during relocation? What
> would be the pitfall if Sys.which() were to find `which` on the $PATH
> by itself, without remembering the full path?
> 
> -- 
> Best regards,
> Ivan
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [R-pkg-devel] portability question

2023-12-20 Thread Simon Urbanek
This has nothing to do with Steven's question since he is creating a *static* 
library whereas install_name_tool changes the install-name ID entry of a *dynamic* 
library. Also the data.table example is effectively a no-op, because changing 
the ID makes no difference as it can't be linked against directly anyway. [In 
general, taking advice from data.table regarding macOS doesn't strike me as 
wise given that the authors can't even get their package to work properly on 
macOS.]

Cheers,
Simon


> On 21/12/2023, at 8:42 AM, Dirk Eddelbuettel  wrote:
> 
> 
> On 20 December 2023 at 11:10, Steven Scott wrote:
> | The Boom package builds a library against which other packages link.  The
> | library is built using the Makevars mechanism using the line
> | 
> | ${AR} rc $@ $^
> | 
> | A user has asked me to change 'rc' to 'rcs' so that 'ranlib' will be run on
> | the archive.  This is apparently needed for certain flavors of macs.  I'm
> | hoping someone on this list can comment on the portability of that change
> | and whether it would negatively affect other platforms.  Thank you.
> 
> Just branch for macOS.  Here is a line I 'borrowed' years ago from data.table
> and still use for packages needed to call install_name_tool on macOS.  You
> could have a simple 'true' branch of the test use 'rcs' and the 'false'
> branch do what you have always done.  Without any portability concerns.
> 
> From https://github.com/Rdatatable/data.table/blob/master/src/Makevars.in#L14
> and indented here for clarity
> 
>if [ "$(OS)" != "Windows_NT" ] && [ `uname -s` = 'Darwin' ]; then \
>   install_name_tool -id data_table$(SHLIB_EXT) data_table$(SHLIB_EXT); \
>fi
> 
> Dirk
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] portability question

2023-12-20 Thread Simon Urbanek
Steven,

no, I'm not aware of any negative effect - in fact, having an index in the 
archive is always a good idea: some linkers require it, some work faster with 
it, and at worst the linker ignores it. As far as I can tell, all current 
system "ar" implementations support the -s flag (technically it is only part of 
the XSI POSIX extension, but POSIX doesn't define ranlib at all, so "ar -s" is 
better than calling ranlib directly).

Cheers,
Simon


> On 21/12/2023, at 8:10 AM, Steven Scott  wrote:
> 
> The Boom package builds a library against which other packages link.  The
> library is built using the Makevars mechanism using the line
> 
> ${AR} rc $@ $^
> 
> A user has asked me to change 'rc' to 'rcs' so that 'ranlib' will be run on
> the archive.  This is apparently needed for certain flavors of macs.  I'm
> hoping someone on this list can comment on the portability of that change
> and whether it would negatively affect other platforms.  Thank you.
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Wrong mailing list: Could the 100 byte path length limit be lifted?

2023-12-12 Thread Simon Urbanek
Justin,

now that you clarified what you are actually talking about, this is a question 
about the CRAN policies, so you should really direct it to the CRAN team as it 
is their decision (R-devel would be appropriate if this was a limitation in R 
itself, and R-package-devel would be appropriate if you wanted help with 
refactoring to adhere to the policy). There are still path limits on various 
platforms (even if they are becoming rarer), so I'd personally question the 
source rather than the policy, but then your email was remarkably devoid of any 
details.

Cheers,
Simon

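For anyone wanting to check their own source tree against such limits before submission, a rough sketch (the 100-byte figure is the classic ustar name-field limit the check refers to; the directory names here are invented to trigger it):

```shell
# Create a deliberately deep path, then list files whose relative path
# exceeds 100 bytes - the length at which old tar formats need extensions.
longdir=$(printf 'verylongdirectoryname%.0s' 1 2 3 4 5)   # 105 characters
mkdir -p "demo/$longdir"
touch "demo/$longdir/file.txt"
long_paths=$(find demo -type f | awk 'length($0) > 100')
echo "$long_paths"
```

Running something like this over a package directory flags the offending files before `R CMD check --as-cran` does.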

> On Dec 13, 2023, at 6:03 AM, McGrath, Justin M  wrote:
> 
> When submitting a package to CRAN, it is required that path names be shorter 
> than 100 bytes, with the reason that paths longer than that cannot be made 
> into portable tar files. This error is reported by `R CMD check --as-cran`. 
> Since this pertains only to developing packages, this seemed like the 
> appropriate list, but if you don't think so, I can instead ask on R-devel.
> 
> Best wishes,
> Justin
> 
> 
> From: Martin Maechler 
> Sent: Tuesday, December 12, 2023 10:13 AM
> To: McGrath, Justin M
> Cc: r-package-devel@r-project.org
> Subject: Wrong mailing list: [R-pkg-devel] Could the 100 byte path length 
> limit be lifted?
> 
>> McGrath, Justin M
>>on Tue, 12 Dec 2023 15:03:28 + writes:
> 
>> We include other software in our source code. It has some long paths so a 
>> few of the files end up with paths longer than 100 bytes, and we need to 
>> manually rename them whenever we pull in updates.
>> The 100 byte path limit is from tar v7, and since
>> POSIX1.1988, there has not been a path length limit. That
>> standard is 35 years old now, so given that there is
>> probably no one using an old version of tar that also
>> wants to use the latest version of R, could the 100 byte
>> limit be lifted? Incidentally, I am a big proponent of
>> wide, long-term support, but it's hard to see that this
>> change would negatively impact anyone.
> 
>> Best wishes,
>> Justin
> 
> Wrong mailing list:
> 
> This is a topic for R-devel, not at all R-package-devel,
> but please be more accurate about what you are talking about; only between
> the lines could I read that this is about some variants of using
> 'tar'.
> 
> Best regards,
> Martin
> ---
> 
> Martin Maechler
> ETH Zurich  and  R Core team
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Status of -mmacosx-version-min

2023-12-09 Thread Simon Urbanek
As discussed here before, packages should *never* set -mmacosx-version-min or 
similar flags by hand. As documented in R-exts 1.2, packages should retrieve 
compiler flags from R (this includes compiling 3rd party dependencies). 
Incidentally, older versions of R have included -mmacosx-version-min in the CC 
setting (high-sierra builds) which current ones don't, but complying packages 
will automatically use the correct compilers and flags regardless of the R 
version and build. Note that this is important as R can be built for different 
systems and targets so the package should not assume anything - just ask R.

The implied question was about the target macOS version for CRAN binaries: it 
is always included in the build name, so high-sierra build was targeting macOS 
10.13 (High Sierra) and big-sur build is targeting macOS 11 (Big Sur). It is 
clearly stated next to the download for each macOS R binary on CRAN: 
https://cran.r-project.org/bin/macosx/ where the current releases target macOS 
11.

Anyone distributing macOS binaries should subscribe to R-SIG-Mac where we 
discuss details such as repository locations etc. Developers writing packages 
just need to pay attention to R-exts and CRAN policies (the latter if they want 
to publish on CRAN). I hope this is now clear enough explanation.

Cheers,
Simon

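The "just ask R" advice boils down to querying R for its configured flags rather than hard-coding them; a sketch (guarded in case R is not on the PATH of the shell running it):

```shell
# 'R CMD config' reports the compiler invocation and flags R itself was
# built with, which Makevars-based package builds inherit automatically.
if command -v R >/dev/null 2>&1; then
  cc_cmd=$(R CMD config CC)
else
  cc_cmd="R not on PATH"
fi
echo "$cc_cmd"
```

Whatever target the R build was configured for (10.13, 11.0, an arm64 arch flag, ...) comes back from this query, so the package never has to guess.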

> On Dec 10, 2023, at 10:07 AM, Dirk Eddelbuettel  wrote:
> 
> 
> Last month, I had asked about the setting '-mmacosx-version-min' here.  The
> setting can be used to specify what macOS version one builds for. It is,
> oddly enough, not mentioned in Writing R Extension but for both r-release and
> r-devel the R Administration manual states
> 
>   • Current CRAN macOS distributions are targeted at Big Sur so it is
> wise to ensure that the compilers generate code that will run on
> Big Sur or later.  With the recommended compilers we can use
>  CC="clang -mmacosx-version-min=11.0"
>  CXX="clang++ -mmacosx-version-min=11.0"
>  FC="/opt//gfortran/bin/gfortran -mmacosx-version-min=11.0"
> or set the environment variable
>  export MACOSX_DEPLOYMENT_TARGET=11.0
> 
> which is clear enough. (There is also an example in the R Internals manual
> still showing the old (and deprecated ?) value of 10.13.)  It is also stated
> at the top of mac.r-project.org.  But it is still in a somewhat confusing
> contradiction to the matrix of tests machines, described e.g. at
> 
>   https://cran.r-project.org/web/checks/check_flavors.html
> 
> which still has r-oldrel-macos-x86_64 with 10.13.
> 
> I found this confusing, and pressed the CRAN macOS maintainer to clarify but
> apparently did so in an insufficiently convincing manner. (There was a word
> about it being emailed to r-sig-mac which is a list I am not on as I don't
> have a macOS machine.) So in case anybody else wonders, my hope is that the
> above is of help. At my day job, we will now switch to 11.0 to take advantage
> of some more recent C++ features.
> 
> Dirk
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] macos x86 oldrel backups?

2023-12-05 Thread Simon Urbanek
Jon,

The high-sierra build packages are currently not built due to hardware issues. 
The macOS version is so long out of support by Apple (over 6 years) that it is 
hard to maintain it. Only big-sur builds are supported at this point. Although 
it is possible that we may be able to restore the old builds, it is not 
guaranteed. (BTW the right mailing list for this is R-SIG-Mac).

Cheers,
Simon



> On 5/12/2023, at 09:52, Jonathan Keane  wrote:
> 
> Thank you to the CRAN maintainers for maintenance and keeping the all
> of the CRAN infrastructure running.
> 
> I'm seeing a long delay in builds on CRAN for r-oldrel-macos-x86_64.
> I'm currently interested in Arrow [1], but I'm seeing many other
> packages with similar missing r-oldrel-macos-x86_64 builds (possibly
> all, I sampled a few packages from [2], but didn't do an exhaustive
> search) for an extended period.
> 
> It appears that this started between 2023-10-21 and 2023-10-22. It
> looks like AMR [3] has a successful build but xlcutter does not [4]
> and all the packages I've checked after 2023-10-22 don't have an
> updated build for r-oldrel-macos-x86_64
> 
> Sorry if this is scheduled maintenance, I tried to find an
> announcement here and on r-project.org but haven't yet found anything
> indicating this.
> 
> [1] - https://cran.r-project.org/web/checks/check_results_arrow.html
> [2] - https://cran.r-project.org/web/packages/available_packages_by_date.html
> [3] - https://cran.r-project.org/web/packages/AMR/index.html
> [4] - https://cran.r-project.org/web/packages/xlcutter/index.html
> 
> -Jon
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Cryptic error on Windows but not Debian

2023-11-18 Thread Simon Urbanek
Chris,

this was not a change in interpretation, but rather CRAN's tools have gotten 
better at detecting such bad behavior.

I would like to point out that there is absolutely no reason to mangle the user's 
.Renviron since the package can always set any environment variables it needs 
from its (legally managed) configuration files (as described in the policy) on 
load.

Regarding the approach you suggested, personally, I think it is bad - no 
package should be touching personal configuration files - it's entirely 
unnecessary and dangerous (e.g., your code will happily remove the content of 
the file on write error losing all user's values - that's why you never write 
important files directly, but rather create a copy which you atomically move in 
place *after* you know the content is correct).

Cheers,
Simon

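The write-then-rename pattern Simon describes can be sketched in a few lines of shell (the file names are invented for the example):

```shell
# Never rewrite an important file in place: write the new content to a
# temporary file first and move it over the original only once the write
# has succeeded, so a failure cannot destroy the existing content.
config="settings.conf"
printf 'KEY=old\n' > "$config"
tmp="$config.tmp.$$"
if printf 'KEY=new\n' > "$tmp"; then
  mv "$tmp" "$config"   # rename is atomic on POSIX filesystems
fi
cat "$config"
```

If the write to the temporary file fails, the original file is left untouched, which is the whole point of the pattern.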


> On Nov 19, 2023, at 12:40 PM, Kenny, Christopher 
>  wrote:
> 
> Rather than using tools::R_user_dir(), you can also ask the user for a path 
> where they would like to save the information to. This allows you to test it 
> with a temporary directory file, but would allow the user to specify their 
> .Renviron file, if they so choose. This acts as a middle ground managing a 
> separate package-specific file and storing it as an environmental variable. 
> This is how I approach it in a handful of packages, including `feltr` (see 
> https://github.com/christopherkenny/feltr/blob/HEAD/R/felt_key.R) and `bskyr` 
> (see https://github.com/christopherkenny/bskyr/blob/main/R/auth_user.R).
> 
> For what it's worth, some of this confusion may come from a relatively recent 
> change in interpretation of the policy mentioned below by Simon (even though 
> the text has long read that way). For years, CRAN allowed packages which had 
> the practice of opting into writing to the default .Renviron file. That old 
> reading comports with the example you point to in qualtRics, where the 
> writing is controlled by the `install` argument, with a default of FALSE. 
> Since sometime in the last year, the interpretation was updated and you are 
> now met with a message from the volunteer which states:
> "Please ensure that your functions do not write by default or in your 
> examples/vignettes/tests in the user's home filespace (including the package 
> directory and getwd()). This is not allowed by CRAN policies.
> Please omit any default path in writing functions. In your 
> examples/vignettes/tests you can write to tempdir()."
> 
> The approach used in `feltr` and other packages to explicitly require a path 
> as an argument appears to be okay with the new reading of the policy. (At 
> least, the CRAN volunteers seem to accept packages which use this approach.)
> 
> Best,
> Chris
> 
> 
> From: R-package-devel  on behalf of 
> Simon Urbanek 
> Sent: Saturday, November 18, 2023 6:14 PM
> To: Adam 
> Cc: r-package-devel@r-project.org 
> Subject: Re: [R-pkg-devel] Cryptic error on Windows but not Debian 
>  
> Adam,
> 
> no, it is your code in mm_authorize() that violates the CRAN policy, it is 
> not about the test. You may not touch user's .Renviron and there is no reason 
> to resort to such drastic measures. If you want to cache user's credentials, 
> you have to do it in a file located via tools::R_user_dir().
> 
> Cheers,
> Simon
> 
> 
>> On Nov 19, 2023, at 12:07 PM, Adam  wrote:
>> 
>> Thank you dearly, Simon, for pointing out the policy. May a test do the 
>> following?
>> 
>> 1. Save the user's original value for env var X.
>> 2. Write a new value for env var X during a test.
>> 3. Write back the original value for env var X at the end of the test.
>> 
>> An example:
>> 
>> test_that("mm_authorize() sets credentials", {
>>skip_on_cran()
>>key_original <- mm_key()
>>url_original <- mm_url()
>>withr::defer({
>>  mm_authorize(
>>key = key_original,
>>url = url_original,
>>overwrite = TRUE
>>  )
>>})
>>mm_authorize(
>>  key = "1",
>>  url = "https://api.megamation.com/uw/joe/",
>>  overwrite = TRUE
>>)
>>expect_false(
>>  endsWith(Sys.getenv("MEGAMATION_URL"), "/")
>>)
>> })
>> 
>> Best,
>> Adam
>> 
>> 
>> On Sat, Nov 18, 2023 at 4:52 PM Simon Urbanek  wrote:
>> Adam,
>> 
>> 
>>> On Nov 19, 2023, at 9:39 AM, Adam  wrote:
>>> 
>>> Dear Ivan,
>>> 
>>> Thank you for explaining in 

Re: [R-pkg-devel] Cryptic error on Windows but not Debian

2023-11-18 Thread Simon Urbanek
Adam,

no, it is your code in mm_authorize() that violates the CRAN policy, it is not 
about the test. You may not touch the user's .Renviron and there is no reason to 
resort to such drastic measures. If you want to cache user's credentials, you 
have to do it in a file located via tools::R_user_dir().

Cheers,
Simon

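For reference, the policy-compliant cache location comes from tools::R_user_dir(); a guarded sketch (the package name "mypkg" is hypothetical):

```shell
# tools::R_user_dir("mypkg", which = "cache") returns the per-user cache
# directory a package may write to under CRAN policy (R >= 4.0).
if command -v Rscript >/dev/null 2>&1; then
  cache_dir=$(Rscript -e 'cat(tools::R_user_dir("mypkg", which = "cache"))')
else
  cache_dir="R not on PATH"
fi
echo "$cache_dir"
```

Storing credentials in a file under that directory, instead of in .Renviron, keeps the package inside its own managed filespace.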

> On Nov 19, 2023, at 12:07 PM, Adam  wrote:
> 
> Thank you dearly, Simon, for pointing out the policy. May a test do the 
> following?
> 
> 1. Save the user's original value for env var X.
> 2. Write a new value for env var X during a test.
> 3. Write back the original value for env var X at the end of the test.
> 
> An example:
> 
> test_that("mm_authorize() sets credentials", {
>   skip_on_cran()
>   key_original <- mm_key()
>   url_original <- mm_url()
>   withr::defer({
> mm_authorize(
>   key = key_original,
>   url = url_original,
>   overwrite = TRUE
> )
>   })
>   mm_authorize(
> key = "1",
> url = "https://api.megamation.com/uw/joe/",
> overwrite = TRUE
>   )
>   expect_false(
> endsWith(Sys.getenv("MEGAMATION_URL"), "/")
>   )
> })
> 
> Best,
> Adam
> 
> 
> On Sat, Nov 18, 2023 at 4:52 PM Simon Urbanek  wrote:
> Adam,
> 
> 
> > On Nov 19, 2023, at 9:39 AM, Adam  wrote:
> > 
> > Dear Ivan,
> > 
> > Thank you for explaining in such depth. I had not submitted to CRAN before.
> > I will look into tools::R_user_dir().
> > 
> > - May you point me toward the policy that the package should not edit 
> > .Renviron?
> 
> 
> It is the policy you have agreed to when submitting your package to CRAN:
> 
> "CRAN Repository Policy
> [...]
> The code and examples provided in a package should never do anything which 
> might be regarded as malicious or anti-social. The following are illustrative 
> examples from past experience.
> [...]
>  - Packages should not write in the user’s home filespace (including 
> clipboards), nor anywhere else on the file system apart from the R session’s 
> temporary directory. [...]
>   For R version 4.0 or later (hence a version dependency is required or only 
> conditional use is possible), packages may store user-specific data, 
> configuration and cache files in their respective user directories obtained 
> from tools::R_user_dir(), provided that by default sizes are kept as small as 
> possible and the contents are actively managed (including removing outdated 
> material).
> "
> 
> Cheers,
> Simon
> 


[[alternative HTML version deleted]]

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Cryptic error on Windows but not Debian

2023-11-18 Thread Simon Urbanek
Adam,


> On Nov 19, 2023, at 9:39 AM, Adam  wrote:
> 
> Dear Ivan,
> 
> Thank you for explaining in such depth. I had not submitted to CRAN before.
> I will look into tools::R_user_dir().
> 
> - May you point me toward the policy that the package should not edit 
> .Renviron?


It is the policy you have agreed to when submitting your package to CRAN:

"CRAN Repository Policy
[...]
The code and examples provided in a package should never do anything which 
might be regarded as malicious or anti-social. The following are illustrative 
examples from past experience.
[...]
 - Packages should not write in the user’s home filespace (including 
clipboards), nor anywhere else on the file system apart from the R session’s 
temporary directory. [...]
  For R version 4.0 or later (hence a version dependency is required or only 
conditional use is possible), packages may store user-specific data, 
configuration and cache files in their respective user directories obtained 
from tools::R_user_dir(), provided that by default sizes are kept as small as 
possible and the contents are actively managed (including removing outdated 
material).
"

Cheers,
Simon

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [Rd] Regenerate news feeds?

2023-11-17 Thread Simon Urbanek
Tim,

thanks. I have updated R to the latest R-devel on the machine in the hope that 
it may help, but I suspect it will only affect new entries as they are 
generated progressively each day.

Cheers,
Simon


> On Nov 18, 2023, at 11:38 AM, Tim Taylor  
> wrote:
> 
> The news feeds (e.g. 
> https://developer.r-project.org/blosxom.cgi/R-devel/NEWS) have some stray 
> "\abbr" floating around. Do they need generating with a more recent version 
> of R-devel?
> 
> I've run tools::Rd2txt on https://svn.r-project.org/R/trunk/doc/NEWS.Rd and 
> r85550 does seem to remove these abberations (compared to the same function 
> calling on an unpatched 4.3.2 where they remain).
> 
> Tim
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [R-pkg-devel] Can -mmacosx-version-min be raised to 10.15 ?

2023-11-16 Thread Simon Urbanek
Dirk,


> On 17/11/2023, at 10:28 AM, Dirk Eddelbuettel  wrote:
> 
> 
> Simon,
> 
> On 17 November 2023 at 09:35, Simon Urbanek wrote:
> | can you clarify where the flags come from? The current CRAN builds 
> (big-sur-x86_64 and big-sur-arm64) use
> | 
> | export SDKROOT=/Library/Developer/CommandLineTools/SDKs/MacOSX11.sdk
> | export MACOSX_DEPLOYMENT_TARGET=11.0
> | 
> | so the lowest target is 11.0 and it is no longer forced it in the flags (so 
> that users can more easily choose their desired targets).
> 
> Beautiful, solves our issue.  Was that announced at some point? If so, where?
> 

I don't see what there is to announce, as packages should simply be using the 
flags passed from R, and that process did not change.

That said, the binary target for CRAN has been announced on this list as part 
of the big-sur build announcement:
https://stat.ethz.ch/pipermail/r-sig-mac/2023-April/014731.html


> For reference the R-on-macOS FAQ I consulted still talks about 10.13 at
> https://cran.r-project.org/bin/macosx/RMacOSX-FAQ.html#Installation-of-source-packages
> 
>  CC = clang -mmacosx-version-min=10.13
>  CXX = clang++ -mmacosx-version-min=10.13 -std=gnu++14
>  FC = gfortran -mmacosx-version-min=10.13
>  OBJC = clang -mmacosx-version-min=10.13
>  OBJCXX = clang++ -mmacosx-version-min=10.13
> 
> so someone may want to refresh this. It is what I consulted as relevant info.
> 

It says "Look at file /Library/Frameworks/R.framework/Resources/etc/Makeconf" 
so it is just an example that will vary by build. For example big-sur-arm64 
will give you

$ grep -E '^(CC|CXX|FC|OBJC|OBJCXX) ' 
/Library/Frameworks/R.framework/Resources/etc/Makeconf
CC = clang -arch arm64
CXX = clang++ -arch arm64 -std=gnu++14
FC = /opt/R/arm64/bin/gfortran -mtune=native
OBJC = clang -arch arm64
OBJCXX = clang++ -arch arm64

Again, this is just an example, no one should be entering such flags by hand - 
that's why they are in Makeconf so packages can use them without worrying about 
the values (see R-exts 1.2: 
https://cran.r-project.org/doc/manuals/R-exts.html#Configure-and-cleanup for 
details).

Cheers,
Simon



> Thanks, Dirk
> 
> | 
> | Cheers,
> | Simon
> | 
> | 
> | 
> | > On 17/11/2023, at 2:57 AM, Dirk Eddelbuettel  wrote:
> | > 
> | > 
> | > Hi Simon,
> | > 
> | > We use C++20 'inside' our library and C++17 in the API. Part of our C++17 
> use
> | > is now expanding to std::filesystem whose availability is dependent on the
> | > implementation. 
> | > 
> | > The compiler tells us (in a compilation using -mmacosx-version-min=10.14)
> | > that the features we want are only available with 10.15.
> | > 
> | > Would we be allowed to use this value of '10.15' on CRAN?
> | > 
> | > Thanks as always,  Dirk
> | > 
> | > 
> | > [1] 
> https://github.com/TileDB-Inc/TileDB/actions/runs/6882271269/job/18720444943?pr=4518#step:7:185
> | > 
> | > -- 
> | > dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> | > 
> | > __
> | > R-package-devel@r-project.org mailing list
> | > https://stat.ethz.ch/mailman/listinfo/r-package-devel
> | > 
> | 
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Can -mmacosx-version-min be raised to 10.15 ?

2023-11-16 Thread Simon Urbanek
Dirk,

can you clarify where the flags come from? The current CRAN builds 
(big-sur-x86_64 and big-sur-arm64) use

export SDKROOT=/Library/Developer/CommandLineTools/SDKs/MacOSX11.sdk
export MACOSX_DEPLOYMENT_TARGET=11.0

so the lowest target is 11.0 and it is no longer forced in the flags (so 
that users can more easily choose their desired targets).

Cheers,
Simon



> On 17/11/2023, at 2:57 AM, Dirk Eddelbuettel  wrote:
> 
> 
> Hi Simon,
> 
> We use C++20 'inside' our library and C++17 in the API. Part of our C++17 use
> is now expanding to std::filesystem whose availability is dependent on the
> implementation. 
> 
> The compiler tells us (in a compilation using -mmacosx-version-min=10.14)
> that the features we want are only available with 10.15.
> 
> Would we be allowed to use this value of '10.15' on CRAN?
> 
> Thanks as always,  Dirk
> 
> 
> [1] 
> https://github.com/TileDB-Inc/TileDB/actions/runs/6882271269/job/18720444943?pr=4518#step:7:185
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Matrix and Mac OS

2023-10-31 Thread Simon Urbanek
Mikael,

in that case I think your requirements are wrong - Matrix says R >= 3.5.0, which 
is apparently incorrect - from what you say it should be 4.2.2? I can 
certainly update to 4.2.3 if necessary.

Cheers,
Simon



> On 1/11/2023, at 9:19 AM, Mikael Jagan  wrote:
> 
> Thanks.  We did see those ERRORs, stemming from use (since Matrix 1.6-0)
> of amsmath commands in Rd files.  These have been supported since R 4.2.2,
> but r-oldrel-macos-* (unlike r-oldrel-windows-*) continues to run R 4.2.0.
> My expectation was that those machines would begin running R >= 4.2.2 well
> before the R 4.4.0 release, but apparently that was wrong.
> 
> I am hesitant to complicate our Rd files with conditions on R versions
> only to support PDF output for R < 4.2.2, but maybe we can consider it
> for the Matrix 1.6-2 release if it is really a barrier for others ...
> 
> Mikael
> 
> On 2023-10-31 3:33 pm, Simon Urbanek wrote:
>> Mikael,
>> current Matrix fails checks on R-oldrel so that's why only the last working 
>> version is installed:
>> https://cran.r-project.org/web/checks/check_results_Matrix.html
>> Cheers,
>> Simon
>>> On 1/11/2023, at 4:05 AM, Mikael Jagan  wrote:
>>> 
>>> I am guessing that they mean EdSurvey:
>>> 
>>>https://cran.r-project.org/web/checks/check_results_EdSurvey.html
>>> 
>>> Probably Matrix 1.6-1.1 is not installed on r-oldrel-macos-arm64,
>>> even though it can be, because it was not released until R 4.3-z.
>>> 
>>> AFAIK, methods for 'qr' have not been touched since Matrix 1.6-0, and
>>> even those changes should have been backwards compatible, modulo handling
>>> of dimnames (class sparseQR gained a Dimnames slot in 1.6-0).
>>> 
>>> So I don't see a clear reason for requiring 1.6-1.1.  Requiring 1.6-0
>>> might make sense, if somehow EdSurvey depends on how class sparseQR
>>> preserves dimnames.  But IIRC our rev. dep. checks at that time did not
>>> reveal problems with EdSurvey.
>>> 
>>> Mikael
>>> 
>>> On 2023-10-31 7:00 am, r-package-devel-requ...@r-project.org wrote:
>>>> Paul,
>>>> can you give us a bit more detail? Which package, which build and where 
>>>> you got the errors? Older builds may not have the latest Matrix.
>>>> Cheers,
>>>> Simon
>>>>> On 31/10/2023, at 11:26 AM, Bailey, Paul via 
>>>>> R-package-devel  wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> I'm the maintainer for a few packages, one of which is currently failing 
>>>>> CRAN checks on Mac OS because Matrix is not available in my required 
>>>>> version (the latest). I had to fix a few things due to changes in the 
>>>>> latest Matrix package because of how qr works and I thought, given the 
>>>>> apparent API change, I should then require the latest version. My error 
>>>>> is, "Package required and available but unsuitable version: 'Matrix'"
>>>>> 
>>>>> When I look at the NEWS in Matrix there is no mention of Mac OS issues, 
>>>>> what the latest stable version of Matrix is, nor when a fix is expected. 
>>>>> What version do the macOS checks test Matrix with by default? Where is 
>>>>> this documented? I assumed it always tested with the latest version on CRAN, 
>>>>> so I'm a bit surprised. Or will this be resolved soon and I shouldn't 
>>>>> bother CRAN maintainers with a new version of my package?
>>>>> 
>>>>> Best,
>>>>> Paul
>>>>> 
>>>>>   [[alternative HTML version deleted]]
>>> 
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Matrix and Mac OS

2023-10-31 Thread Simon Urbanek
Mikael,

current Matrix fails checks on R-oldrel so that's why only the last working 
version is installed:
https://cran.r-project.org/web/checks/check_results_Matrix.html

Cheers,
Simon



> On 1/11/2023, at 4:05 AM, Mikael Jagan  wrote:
> 
> I am guessing that they mean EdSurvey:
> 
>https://cran.r-project.org/web/checks/check_results_EdSurvey.html
> 
> Probably Matrix 1.6-1.1 is not installed on r-oldrel-macos-arm64,
> even though it can be, because it was not released until R 4.3-z.
> 
> AFAIK, methods for 'qr' have not been touched since Matrix 1.6-0, and
> even those changes should have been backwards compatible, modulo handling
> of dimnames (class sparseQR gained a Dimnames slot in 1.6-0).
> 
> So I don't see a clear reason for requiring 1.6-1.1.  Requiring 1.6-0
> might make sense, if somehow EdSurvey depends on how class sparseQR
> preserves dimnames.  But IIRC our rev. dep. checks at that time did not
> reveal problems with EdSurvey.
> 
> Mikael
> 
> On 2023-10-31 7:00 am, r-package-devel-requ...@r-project.org wrote:
>> Paul,
>> can you give us a bit more detail? Which package, which build and where you 
>> got the errors? Older builds may not have the latest Matrix.
>> Cheers,
>> Simon
>>> On 31/10/2023, at 11:26 AM, Bailey, Paul via 
>>> R-package-devel  wrote:
>>> 
>>> Hi,
>>> 
>>> I'm the maintainer for a few packages, one of which is currently failing 
>>> CRAN checks on Mac OS because Matrix is not available in my required 
>>> version (the latest). I had to fix a few things due to changes in the 
>>> latest Matrix package because of how qr works and I thought, given the 
>>> apparent API change, I should then require the latest version. My error is, 
>>> "Package required and available but unsuitable version: 'Matrix'"
>>> 
>>> When I look at the NEWS in Matrix there is no mention of Mac OS issues, 
>>> what the latest stable version of Matrix is, nor when a fix is expected. 
>>> What version do the macOS checks test Matrix with by default? Where is this 
>>> documented? I assumed it always tested with the latest version on CRAN, so 
>>> I'm a bit surprised. Or will this be resolved soon and I shouldn't bother 
>>> CRAN maintainers with a new version of my package?
>>> 
>>> Best,
>>> Paul
>>> 
>>> [[alternative HTML version deleted]]
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Matrix and Mac OS

2023-10-30 Thread Simon Urbanek
Paul,

can you give us a bit more detail? Which package, which build and where you got 
the errors? Older builds may not have the latest Matrix.
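
For reference, the "unsuitable version" error is nothing more than a version 
comparison between what is installed and what the package declares; `sort -V` 
reproduces the same ordering on the command line. A sketch - the 4.2.0 vs 4.2.2 
pair comes from the R-oldrel discussion elsewhere in this thread:

```shell
# r-oldrel-macos ran R 4.2.0 while current Matrix needs features from R 4.2.2;
# the failing check is just this comparison, which sort -V reproduces:
installed="4.2.0"
required="4.2.2"
lowest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n1)
if [ "$lowest" = "$installed" ] && [ "$installed" != "$required" ]; then
  echo "unsuitable version: have $installed, need >= $required"
fi
# prints: unsuitable version: have 4.2.0, need >= 4.2.2
```

The same comparison explains why a `Matrix (>= 1.6-1.1)` requirement cannot be 
satisfied on a build machine where only an older Matrix is installed.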

Cheers,
Simon


> On 31/10/2023, at 11:26 AM, Bailey, Paul via R-package-devel 
>  wrote:
> 
> Hi,
> 
> I'm the maintainer for a few packages, one of which is currently failing CRAN 
> checks on Mac OS because Matrix is not available in my required version (the 
> latest). I had to fix a few things due to changes in the latest Matrix 
> package because of how qr works and I thought, given the apparent API change, 
> I should then require the latest version. My error is, "Package required and 
> available but unsuitable version: 'Matrix'"
> 
> When I look at the NEWS in Matrix there is no mention of Mac OS issues, what 
> the latest stable version of Matrix is, nor when a fix is expected. What 
> version do the macOS checks test Matrix with by default? Where is this 
> documented? I assumed it always tested with the latest version on CRAN, so 
> I'm a bit surprised. Or will this be resolved soon and I shouldn't bother 
> CRAN maintainers with a new version of my package?
> 
> Best,
> Paul
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


[Rd] GLFW [Was: Wayland Display Support in R Plot]

2023-10-29 Thread Simon Urbanek
Duncan,

at least according to the docs GLFW doesn't really care - it will forward 
whatever OpenGL is available on the platform. In fact it says:

"By default it also includes the OpenGL header from your development 
environment. On some platforms this header only supports older versions of 
OpenGL. The most extreme case is Windows, where it typically only supports 
OpenGL 1.2."

so, really, OpenGL 1.2 is perfect.

You can request a minimum version, but don't have to, i.e., old versions are 
ok. I have tested libglfw3 (3.3.8) with OpenGL 1.2 (on macOS) and it worked 
just fine.

That said, both points were meant for the list in general - those are nice 
self-contained projects (add libwayland to Cairo and GLFW to rgl) for someone 
with spare time to contribute...

Cheers,
Simon



> On 30/10/2023, at 9:48 AM, Duncan Murdoch  wrote:
> 
> On 29/10/2023 4:20 p.m., Simon Urbanek wrote:
>>> On 30/10/2023, at 8:38 AM, Dirk Eddelbuettel  wrote:
>>> 
>>> 
>>> On 30 October 2023 at 07:54, Paul Murrell wrote:
>>> | I am unaware of any Wayland display support.
>>> |
>>> | One useful way forward would be an R package that provides such a device
>>> | (along the lines of 'Cairo', 'tikzDevice', et al)
>>> 
>>> As I understand it, it is a protocol, and not a device.
>>> 
>> Well, X11 is a protocol, not a device, either.
>> Wayland is a lot worse, since it doesn't really do much at all - the clients 
>> are fully responsible for drawing (doesn't even support remote connections).
>> Given that Wayland is essentially a "dumb" framebuffer, probably the easiest 
>> way would be to take Cairo and add a libwayland back-end. Cairo is already 
>> modular so it's relatively straight-forward to add a new back-end to it (I'd 
>> probably just copy xlib-backend.c and replace X11 calls with libwayland 
>> calls since the low-level design is the same).
>> However, that is limited only to devices, so you would still run R code in 
>> the shell (or other GUI that may or may not be Wayland-based). Given that 
>> Wayland is so minimal, you'd need some GUI library for anything beyond that 
>> - so you may as well just run a Wayland-based browser and be done with it, 
>> saving you all the bother (oh, right, that's called RStudio ;)).
>> One package that may be worth adding Wayland backend to is rgl so you get 
>> OpenGL on Wayland - I'd simply re-write it to use GLFW so it works across 
>> all platforms and including Wayland.
> 
> I looked into using GLFW a while ago, but it seemed too hard to do without 
> other really major changes to rgl, so that's not going to happen soon (unless 
> someone else does it).
> 
> I think the issue was that it was hard to get it to work with the ancient 
> OpenGL 1.2 that rgl uses.  I forget whether it was just hard or actually 
> impossible.
> 
> I am slowly working towards having rgl use newer OpenGL versions, but I don't 
> expect this to be done for quite a while.
> 
> Duncan Murdoch
> 
>> Cheers,
>> Simon
>>> Several Linux distributions have long defaulted to it, so we already should
>>> have thousands of users. While 'not X11' it provides a compatibility layer
>>> and should be seamless.
>>> 
>>> I think I needed to fall back to X11 for a particular applications (likely
>>> OBS) so my session tells me (under Settings -> About -> Windowing System) I
>>> am still running X11. I'll check again once I upgrade from Ubuntu 23.04 to
>>> Ubuntu 23.10
>>> 
>>> See https://en.wikipedia.org/wiki/Wayland_(protocol) for more.
>>> 
>>> Dirk
>>> 
>>> -- 
>>> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
>>> 
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>> 
>> __
>> R-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Wayland Display Support in R Plot

2023-10-29 Thread Simon Urbanek



> On 30/10/2023, at 8:38 AM, Dirk Eddelbuettel  wrote:
> 
> 
> On 30 October 2023 at 07:54, Paul Murrell wrote:
> | I am unaware of any Wayland display support.
> | 
> | One useful way forward would be an R package that provides such a device 
> | (along the lines of 'Cairo', 'tikzDevice', et al)
> 
> As I understand it, it is a protocol, and not a device.
> 

Well, X11 is a protocol, not a device, either.

Wayland is a lot worse, since it doesn't really do much at all - the clients 
are fully responsible for drawing (doesn't even support remote connections).

Given that Wayland is essentially a "dumb" framebuffer, probably the easiest 
way would be to take Cairo and add a libwayland back-end. Cairo is already 
modular so it's relatively straight-forward to add a new back-end to it (I'd 
probably just copy xlib-backend.c and replace X11 calls with libwayland calls 
since the low-level design is the same).

However, that is limited only to devices, so you would still run R code in the 
shell (or other GUI that may or may not be Wayland-based). Given that Wayland 
is so minimal, you'd need some GUI library for anything beyond that - so you 
may as well just run a Wayland-based browser and be done with it, saving you 
all the bother (oh, right, that's called RStudio ;)).

One package that may be worth adding Wayland backend to is rgl so you get 
OpenGL on Wayland - I'd simply re-write it to use GLFW so it works across all 
platforms and including Wayland.

Cheers,
Simon



> Several Linux distributions have long defaulted to it, so we already should
> have thousands of users. While 'not X11' it provides a compatibility layer
> and should be seamless.
> 
> I think I needed to fall back to X11 for a particular applications (likely
> OBS) so my session tells me (under Settings -> About -> Windowing System) I
> am still running X11. I'll check again once I upgrade from Ubuntu 23.04 to
> Ubuntu 23.10
> 
> See https://en.wikipedia.org/wiki/Wayland_(protocol) for more.
> 
> Dirk
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [R-pkg-devel] Failing to write config file in linux

2023-10-23 Thread Simon Urbanek
From CRAN policy (which you agreed to when you submitted your package) - note 
in particular the "nor anywhere else on the file system" part and also note 
that it tells you what to do in your case:


Packages should not write in the user’s home filespace (including clipboards), 
nor anywhere else on the file system apart from the R session’s temporary 
directory (or during installation in the location pointed to by TMPDIR: and 
such usage should be cleaned up). Installing into the system’s R installation 
(e.g., scripts to its bin directory) is not allowed.
Limited exceptions may be allowed in interactive sessions if the package 
obtains confirmation from the user.

For R version 4.0 or later (hence a version dependency is required or only 
conditional use is possible), packages may store user-specific data, 
configuration and cache files in their respective user directories obtained 
from tools::R_user_dir(), provided that by default sizes are kept as small as 
possible and the contents are actively managed (including removing outdated 
material).
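
For the case in the question below, a minimal sketch of the 
`tools::R_user_dir()` route (requires R >= 4.0; the package and file names are 
taken from the thread, and the seeding logic is an assumption, not the 
package's actual code):

```r
## Keep the user-editable copy of parameters.yaml in the per-user config
## directory, never inside the (possibly read-only) package installation.
cfg_dir  <- tools::R_user_dir("geohabnet", which = "config")
dir.create(cfg_dir, recursive = TRUE, showWarnings = FALSE)
cfg_file <- file.path(cfg_dir, "parameters.yaml")
if (!file.exists(cfg_file)) {
  ## seed it from the read-only template shipped with the package
  file.copy(system.file("parameters.yaml", package = "geohabnet"), cfg_file)
}
```

The package then always reads from `cfg_file`, so user modifications survive 
reinstallation and no write ever touches the library directory.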




> On Oct 23, 2023, at 5:52 AM, Keshav, Krishna  wrote:
> 
> Hi,
> 
> My package is failing on linux based systems because of an attempt to write 
> in a location of package. One of the core features that we would like user to 
> have is to modify the values in the config file, for which package has a 
> function for user to provide modified config. In future, they should be able 
> to provide individual parameters for the config for which also we will be 
> writing to config in package directory /inst/ so that it can later be 
> fetched. I understand that policy doesn’t allow writing to home directory. Is 
> there a workaround for this? Or what could be other potential solutions to 
> explore.
> 
> Snippet –
> https://github.com/GarrettLab/CroplandConnectivity/blob/923a4a0ca4a0ce8376068ee80986df228ea21d80/geohabnet/R/params.R#L57
> 
> Error –
> ── Failure ('test-parameters.R:38:3'): Test 6: Test to set new 
> parameters.yaml ──
> Expected `set_parameters(new_param_file)` to run without any conditions.
> ℹ Actually got a  with text:
> cannot create file 
> '/home/hornik/tmp/R.check/r-release-gcc/Work/build/Packages/geohabnet/parameters.yaml',
>  reason 'Read-only file system'
> 
> 
> Best Regards,
> Krishna Keshav
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Suppressing long-running vignette code in CRAN submission

2023-10-17 Thread Simon Urbanek
Dirk,

I think you misread the email - John was asking specifically about his 
approach to use REBUILD_CV_VIGNETTES without any caching since that was the 
original question which no one answered in the thread - and that was what I was 
answering. The alternative approaches were already discussed to death so I 
didn't comment on those.

Cheers,
Simon



> On 18/10/2023, at 11:03 AM, Dirk Eddelbuettel  wrote:
> 
> 
> On 18 October 2023 at 08:51, Simon Urbanek wrote:
> | John,
> | 
> | the short answer is it won't work (it defeats the purpose of vignettes).
> 
> Not exactly. Everything is under our (i.e. package author) control, and when
> we want to replace 'computed' values with cached values we can.
> 
> All this is somewhat of a charade. "Of course" we want vignettes to run
> tests. But then we don't want to fall over random missing .sty files or fonts
> (macOS machines have been less forgiving than others), not to mention compile
> time.
> 
> So for simplicity I often pre-make pdf vignettes that get included in other
> latex code as source. Works great, never fails, CRAN never complained --
> which is somewhat contrary to your statement.
> 
> It is effectively the same with tests. We all want maximum test surfaces. But
> when tests fail, or when they run too long, or [insert many other reasons
> here] so many packages run tests conditionally.  Such is life.
> 
> Dirk
> 
> 
> | However, this sounds like a purely hypothetical question - CRAN policies 
> | allow long-running vignettes if they are declared.
> | 
> | Cheers,
> | Simon
> | 
> | 
> | > On 18/10/2023, at 3:02 AM, John Fox  wrote:
> | > 
> | > Hello Dirk,
> | > 
> | > Thank you (and Kevin and John) for addressing my questions.
> | > 
> | > No one directly answered my first question, however, which was whether 
> the approach that I suggested would work. I guess that the implication is 
> that it won't, but it would be nice to confirm that before I try something 
> else, specifically using R.rsp.
> | > 
> | > Best,
> | > John
> | > 
> | > On 2023-10-17 4:02 a.m., Dirk Eddelbuettel wrote:
> | >> Caution: External email.
> | >> On 16 October 2023 at 10:42, Kevin R Coombes wrote:
> | >> | Produce a PDF file yourself, then use the "as.is" feature of the R.rsp
> | >> | package.
> | >> For completeness, that approach also works directly with Sweave. 
> Described in
> | >> a blog post by Mark van der Loo in 2019, and used in a number of packages
> | >> including a few of mine.
> | >> That said, I also used the approach described by John Harrold and cached
> | >> results myself.
> | >> Dirk
> | >> --
> | >> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> | >> __
> | >> R-package-devel@r-project.org mailing list
> | >> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> | > 
> | > __
> | > R-package-devel@r-project.org mailing list
> | > https://stat.ethz.ch/mailman/listinfo/r-package-devel
> | > 
> | 
> | __
> | R-package-devel@r-project.org mailing list
> | https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Suppressing long-running vignette code in CRAN submission

2023-10-17 Thread Simon Urbanek
John,

the short answer is it won't work (it defeats the purpose of vignettes).

However, this sounds like a purely hypothetical question - CRAN policies allow 
long-running vignettes if they are declared.
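
For reference, the R.rsp "asis" route raised further down amounts to shipping a 
prebuilt PDF next to a two-line stub file. A sketch, with an illustrative file 
name - the stub would live at vignettes/long-analysis.pdf.asis alongside the 
prebuilt long-analysis.pdf:

```
%\VignetteIndexEntry{Long-running analysis (precomputed)}
%\VignetteEngine{R.rsp::asis}
```

DESCRIPTION then additionally needs `Suggests: R.rsp` and 
`VignetteBuilder: R.rsp`, and the PDF is served as-is without re-running any 
code at build or check time.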

Cheers,
Simon


> On 18/10/2023, at 3:02 AM, John Fox  wrote:
> 
> Hello Dirk,
> 
> Thank you (and Kevin and John) for addressing my questions.
> 
> No one directly answered my first question, however, which was whether the 
> approach that I suggested would work. I guess that the implication is that it 
> won't, but it would be nice to confirm that before I try something else, 
> specifically using R.rsp.
> 
> Best,
> John
> 
> On 2023-10-17 4:02 a.m., Dirk Eddelbuettel wrote:
>> Caution: External email.
>> On 16 October 2023 at 10:42, Kevin R Coombes wrote:
>> | Produce a PDF file yourself, then use the "as.is" feature of the R.rsp
>> | package.
>> For completeness, that approach also works directly with Sweave. Described in
>> a blog post by Mark van der Loo in 2019, and used in a number of packages
>> including a few of mine.
>> That said, I also used the approach described by John Harrold and cached
>> results myself.
>> Dirk
>> --
>> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
>> __
>> R-package-devel@r-project.org mailing list
>> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Question about Clang 17 Error

2023-10-11 Thread Simon Urbanek
Reed,

please contact CRAN - this list can only help with general developer's 
questions, not specific issues with a particular CRAN setup - only the 
corresponding member of CRAN running the setup can help. I don't see anything 
obvious - we can see that it's a mismatch of run-times between the cmake build 
and the R linking, but from the package alone it's not clear to me why.

Cheers,
Simon


> On Oct 12, 2023, at 3:51 PM, Reed A. Cartwright  
> wrote:
> 
> Update: I submitted a new version of the package, but it did not fix the 
> issue. The package has now been archived and I do not have access to the 
> error log output anymore from r-devel-linux-x86_64-fedora-clang.
> 
> I did reproduce CRAN's configuration in a VM using the information provided 
> by CRAN for r-devel-linux-x86_64-fedora-clang. I still cannot reproduce the 
> error and at this point I believe that there is a chance that CRAN's machine 
> is misconfigured.
> 
> The specific error happens after rbedrock has been compiled and linked 
> successfully. The specific error is that the symbol 
> _ZNSt3__122__libcpp_verbose_abortEPKcz cannot be found when rbedrock.so is 
> loaded.This symbol was introduced into libc++ in Clang 15.0. What I believe 
> to be happening to cause the error is that Clang++ 17 is adding a reference 
> to this symbol when compiling and linking rbedrock.so but the dynamic linker 
> is loading an older version of libc++.so when trying to load rbedrock.so and 
> the symbol is not found.
> 
> If this is the cause, then I think that the CRAN machine needs to configure 
> the dynamic linker to use the Clang++ 17 libc++.so, or add the proper command 
> line options to R's config variables.
> 
> It's possible that the CRAN's r-devel-linux-x86_64-fedora-clang machine is 
> fine and I've missed something, and I would be happy if someone could help me 
> figure out what it is.
> 
> Also, a new issue cropped up when 0.3.1 was tested on the 
> r-oldrel-macos-x86_64 machine. /usr/bin/ar seems to have failed to produce an 
> archive. The other Mac versions did fine, so I'm not sure if this is a random 
> error or something related to my package. The error log is here: 
> https://www.r-project.org/nosvn/R.check/r-oldrel-macos-x86_64/rbedrock-00install.html
> 
> If anyone can help me resolve this, I'd appreciate it.
> 
> 
> On Wed, Sep 27, 2023 at 2:54 PM Reed A. Cartwright  <mailto:racartwri...@gmail.com>> wrote:
> Is there any way to submit packages directly to the CRAN's clang17 setup? I 
> can enable verbose output for CMake and compare the output, but I'd rather 
> not clog up the CRAN incoming queue just to debug a linker error?
> 
> On Wed, Sep 27, 2023 at 2:43 PM Simon Urbanek  <mailto:simon.urba...@r-project.org>> wrote:
> It looks like a C++ run-time mismatch between what cmake is using to build 
> the static library and what is used by R. Unfortunately, cmake hides the 
> actual compiler calls so it's hard to tell the difference, but that setup 
> relies on the correct sequence of library paths.
> 
> The rhub manually forces -stdlib=libc++ to all its CXX flags
> https://github.com/r-hub/rhub-linux-builders/blob/master/fedora-clang-devel/Makevars
> so it is quite different from the gannet tests-clang-trunk setup (also note 
> the different library paths), but that's not something you can do universally 
> in the package, because it strongly depends on the toolchain setup.
> 
> Cheers,
> Simon
> 
> 
> > On 28/09/2023, at 9:37 AM, Reed A. Cartwright  > <mailto:racartwri...@gmail.com>> wrote:
> > 
> > I was unable to reproduce the error on the rhub's clang17 docker image.
> > 
> > I notice that the linking command is slightly different between systems.
> > And this suggests that I need to find some way to get CRAN to pass -stdlib
> > flag at the linking stage.
> > 
> > CRAN:
> > /usr/local/clang17/bin/clang++ -std=gnu++17 -shared
> > -L/usr/local/clang/lib64 -L/usr/local/clang17/lib -L/usr/local/gcc13/lib64
> > -L/usr/local/lib64 -o rbedrock.so actors.o bedrock_leveldb.o dummy.o init.o
> > key_conv.o nbt.o random.o subchunk.o support.o -L./leveldb-mcpe/build
> > -pthread -lleveldb -lz
> > 
> > RHUB:
> > clang++-17 -stdlib=

Re: [R-pkg-devel] What happened to mlr3proba on CRAN?

2023-10-08 Thread Simon Urbanek
Franz,

it means that the author(s) have abandoned the package: as the note says it was 
failing checks and the authors have not fixed the problems so it has been 
removed from CRAN (more than a year ago).

Cheers,
Simon



> On 9/10/2023, at 10:28 AM, Dr. Franz Király  wrote:
> 
> Dear all,
> 
> can someone explain to me what exactly happened to mlr3proba on CRAN?
> https://cran.r-project.org/web/packages/mlr3proba/index.html
> 
> Thanks
> Franz
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] CXX_VISIBILITY on macOS

2023-10-08 Thread Simon Urbanek
Matthias,

this has nothing to do with R, but rather your code. You have the wrong order 
of headers: the SWI headers mess up visibility macros, so you have to include 
them *after* Rcpp.h.

Cheers,
Simon
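
Concretely, the reordering looks like this (a sketch; the header names are 
taken from the SWI-Prolog/Rcpp context of the package in question):

```cpp
// Rcpp.h must come first: it sets up the visibility macros, which the
// SWI-Prolog headers would otherwise redefine for every subsequent
// declaration in the translation unit.
#include <Rcpp.h>
#include <SWI-Prolog.h>
```

With that order the translation units end up compiled with consistent 
visibility settings, and the "global weak symbol cannot be overridden" linker 
warnings go away without any non-portable `-fvisibility` flags.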


> On 9/10/2023, at 8:41 AM, Matthias Gondan  wrote:
> 
> Dear developers and CRAN people,
> 
> I get some linker warnings on the macOS build server,
> 
> ld: warning: direct access in function '…' from file '…' to global weak 
> symbol '…' from file '…' means the weak symbol cannot be overridden at 
> runtime. This was likely caused by different translation units being compiled 
> with different visibility settings.
> 
> Writing R Extensions (Section 6.16) says that visibility attributes are not 
> supported on macOS nor Windows. If I add $(CXX_VISIBILITY) to PKG_CXXFLAGS, 
> the warnings are still there, and I can see from the compiler log that the 
> flags do not have any effect on macOS. However, if I add -fvisibility=hidden 
> to PKG_CXXFLAGS, the warnings disappear (but I get a reminder from R CMD 
> check that -fvisibility=hidden is not portable). I am wondering if 
> $(CXX_VISIBILITY) could be supported on macOS.
> 
> Best wishes,
> 
> Matthias
> 
> This is the build log of the package: 
> https://www.r-project.org/nosvn/R.check/r-release-macos-arm64/rswipl-00install.html
> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [Rd] After package update, old S4 method is dispatched

2023-10-02 Thread Simon Urbanek
Jan,

have you re-installed all packages? If you change (update) any package that 
uses S4 it may be necessary to re-install all its reverse-dependencies as well 
since they may include cached values in their namespaces, so the easiest fix 
is to re-install all packages.

Cheers,
Simon
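
A sketch of that blanket re-install for the package in this thread 
(`dependsOnPkgs()` ships with base R's tools package; restricting to reverse 
dependencies of mirt rather than literally everything is an assumption):

```r
## Re-install mirt plus every installed package that depends on it, so no
## namespace keeps a stale cached copy of the old S4 coef() method.
ip   <- utils::installed.packages()
revs <- tools::dependsOnPkgs("mirt", installed = ip)
install.packages(c("mirt", intersect(revs, rownames(ip))))
```

Run this in a fresh session (no packages loaded) so nothing re-caches the old 
methods while the re-installation is in progress.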


> On Oct 3, 2023, at 4:18 AM, Jan Netík  wrote:
> 
> Hello R-devel,
> 
> I hope that you are all doing well and that this is the right place to
> discuss my somewhat mysterious issue with S4.
> 
> On our Ubuntu server, we have "mirt" package installed which defines S4
> method for "coef" standard generic. We updated the package with the usual
> "install.packages", restarted, and observer error calling coef on mirt
> object that should not be possible: "Error in which: argument "nfact" is
> missing, with no default" (which has no such argument).
> 
> After days of investigation, I found that from mirt 1.37 to current 1.40,
> the method changed as well as some internal functions used by the method.
> The aforementioned error stems from the fact that these internal ordinary
> functions were changed properly as we updated the package, but the S4
> method dispatched stuck with the 1.37 version. I am by no means an expert
> on S4 matter, but I know that these are cached to some extent. I thought
> the cache is session-bound and have no idea how the issue can possibly
> persist even after a complete reboot of the machine. I can detach and
> library() mirt in one R session which solves the issue temporarily, but
> emerges right back in any new R session.
> 
> Sadly, I cannot provide any reproducible example as I am completely unaware
> of the cause and even I cannot reproduce this issue outside of the server.
> 
> Any insights on how this issue could have started would be highly
> appreciated.
> 
> Best regards,
> Jan Netík
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [R-pkg-devel] Question about Clang 17 Error

2023-09-27 Thread Simon Urbanek
It looks like a C++ run-time mismatch between what cmake is using to build the 
static library and what is used by R. Unfortunately, cmake hides the actual 
compiler calls so it's hard to tell the difference, but that setup relies on 
the correct sequence of library paths.

The rhub manually forces -stdlib=libc++ to all its CXX flags
https://github.com/r-hub/rhub-linux-builders/blob/master/fedora-clang-devel/Makevars
so it is quite different from the gannet tests-clang-trunk setup (also note the 
different library paths), but that's not something you can do universally in 
the package, because it strongly depends on the toolchain setup.

Cheers,
Simon
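
One way to see the mismatch on the check machine is to look for undefined 
libc++ symbols in the built object. The sketch below greps canned `nm -D` 
output (an assumption standing in for the real thing - on the machine itself 
you would pipe `nm -D rbedrock.so` directly); `_ZNSt3__1...` mangles the 
`std::__1` inline namespace that only libc++ uses:

```shell
# Stand-in for real `nm -D rbedrock.so` output.
nm_output='0000000000001139 T bedrock_leveldb_open
                 U _ZNSt3__122__libcpp_verbose_abortEPKcz
                 U leveldb_open'
# Undefined (type U) symbols in std::__1 mean the objects were compiled
# against a newer libc++ than the one the dynamic linker resolves at load
# time; _ZNSt3__122__libcpp_verbose_abortEPKcz demangles to
# std::__1::__libcpp_verbose_abort(char const*, ...), added in libc++ 15.
echo "$nm_output" | grep ' U _ZNSt3__1'
```

If the grep matches, the fix is on the link/runtime side (consistent 
`-stdlib`/library paths), not in the package sources.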


> On 28/09/2023, at 9:37 AM, Reed A. Cartwright  wrote:
> 
> I was unable to reproduce the error on the rhub's clang17 docker image.
> 
> I notice that the linking command is slightly different between systems.
> And this suggests that I need to find some way to get CRAN to pass -stdlib
> flag at the linking stage.
> 
> CRAN:
> /usr/local/clang17/bin/clang++ -std=gnu++17 -shared
> -L/usr/local/clang/lib64 -L/usr/local/clang17/lib -L/usr/local/gcc13/lib64
> -L/usr/local/lib64 -o rbedrock.so actors.o bedrock_leveldb.o dummy.o init.o
> key_conv.o nbt.o random.o subchunk.o support.o -L./leveldb-mcpe/build
> -pthread -lleveldb -lz
> 
> RHUB:
> clang++-17 -stdlib=libc++ -std=gnu++14 -shared -L/opt/R/devel/lib/R/lib
> -L/usr/local/lib -o rbedrock.so actors.o bedrock_leveldb.o dummy.o init.o
> key_conv.o nbt.o random.o subchunk.o support.o -L./leveldb-mcpe/build
> -pthread -lleveldb -lz -L/opt/R/devel/lib/R/lib -lR
> 
> On Wed, Sep 27, 2023 at 11:36 AM Gábor Csárdi 
> wrote:
> 
>> You might be able to reproduce it with the clang17 container here:
>> 
>> https://r-hub.github.io/containers/
>> You can either run it directly or with the rhub2 package:
>> 
>> https://github.com/r-hub/rhub2#readme
>> 
>> Gabor
>> 
>> On Wed, Sep 27, 2023 at 8:29 PM Reed A. Cartwright
>>  wrote:
>>> 
>>> My package, RBedrock, is now throwing an error when compiled against
>>> Clang17. The error log is here:
>>> 
>>> 
>> https://www.stats.ox.ac.uk/pub/bdr/clang17/rbedrock.log
>>> 
>>> The important part is
>>> """
>>> Error: package or namespace load failed for ‘rbedrock’ in dyn.load(file,
>>> DLLpath = DLLpath, ...):
>>> unable to load shared object
>>> 
>> '/data/gannet/ripley/R/packages/tests-clang-trunk/rbedrock.Rcheck/00LOCK-rbedrock/00new/rbedrock/libs/rbedrock.so':
>>> 
>>> 
>> /data/gannet/ripley/R/packages/tests-clang-trunk/rbedrock.Rcheck/00LOCK-rbedrock/00new/rbedrock/libs/rbedrock.so:
>>> undefined symbol: _ZNSt3__122__libcpp_verbose_abortEPKcz
>>> Error: loading failed
>>> """
>>> 
>>> From what I can gather through googling, this error can be caused by
>> using
>>> the C linker when one of the dependent libraries is a C++ library.
>>> 
>>> I cannot tell if this is an issue with my package (likely) or CRAN's
>>> clang17 setup (less likely).
>>> 
>>> Background about the package: rbedrock is written in C but links against
>> a
>>> C++ library (Mojang's leveldb fork)  via the library's C-API functions. I
>>> use a dummy .cpp file in the source directory to trigger R into using the
>>> C++ linker. That does still seem to be happening according to the log.
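A minimal version of such a dummy file might look like the following sketch (the function name is arbitrary and never called; it only keeps the translation unit non-empty):

```cpp
// dummy.cpp: compiled into the package but never used.  Being a C++
// translation unit, it makes R's build machinery link the final shared
// object with the C++ driver, which pulls in the C++ standard library
// that the C++ dependency (here, the leveldb fork) needs.
extern "C" void force_cxx_linker(void) {}
```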
>>> 
>>> Has anyone seen this before and know where I should start looking to fix
>> it?
>>> 
>>> Thanks.
>>> 
>>> --
>>> Reed A. Cartwright, PhD
>>> Associate Professor of Genomics, Evolution, and Bioinformatics
>>> School of Life Sciences and The Biodesign Institute
>>> Arizona State University
>>> ==
>>> Address: The Biodesign Institute, PO Box 876401, Tempe, AZ 85287-6401 USA
>>> Packages: The Biodesign Institute, 1001 S. McAllister Ave, Tempe, AZ
>>> 85287-6401 USA
>>> Office: Biodesign B-220C, 1-480-965-9949
>>> Website: http://cartwrig.ht/
>>> 
>>>[[alternative HTML version deleted]]
>>> 
>>> __
>>> R-package-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-package-devel
>> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list

Re: [Rd] Help requested: writing text to a raster in memory

2023-09-24 Thread Simon Urbanek
Since I'm working with Paul on the glyph changes to the R graphics engine I'm 
quite interested in this, so I had the idea of taking the guts out of my Cairo 
package into self-contained C code. Your request is a good reason to bump it up 
on my stack. I already have code that draws text into OpenGL textures in 
Acinonyx, but it's only done properly on macOS, so it may be worth combining 
both approaches to have a decent OpenGL text drawing library.

As for measurements, I didn't look at textshaping, but since both use 
Harfbuzz+FT they should be the same for the same font and scaling (in theory).

Cheers,
Simon



> On 25/09/2023, at 2:01 PM, Duncan Murdoch  wrote:
> 
> I'm somewhat aware of how tricky it all is.  For now I'm going to do it in R 
> (using textshaping for layout and base graphics on the ragg::agg_capture 
> device to draw to the bitmap).  I'll avoid allowing changes to happen in the 
> C++ code.
> 
> Eventually I'll see if I can translate the code into C++.  I know textshaping 
> has a C interface, but for the actual drawing I'll have to work something 
> else out.  Or maybe just leave it in R, and only try to write a new bitmap 
> when it's safe.
> 
> For future reference, will the measurements reported by 
> textshaping::shape_text() match the values used by your Cairo package, or are 
> equivalent measurements available elsewhere?
> 
> Duncan Murdoch
> 
> On 24/09/2023 6:55 p.m., Simon Urbanek wrote:
>> Duncan,
>> drawing text is one of the most complicated things you can do, so it really 
>> depends how far you want to go. You can do it badly with the simple cairo 
>> show_text API. The steps involved in doing it properly are detecting the 
>> direction of the language, finding fonts, finding glyphs (resolving 
>> ligatures), applying hints, drawing glyphs etc. Fortunately there are 
>> libraries that help with that, but even then it's non-trivial. Probably the 
>> most modern pipeline is icu + harfbuzz + freetype + fontconfig + cairo. This 
>> is implemented, e.g. in 
>> https://github.com/s-u/Cairo/blob/master/src/cairotalk.c (the meat is in 
>> L608-) and for all but the drawing part there is an entire R package (in 
>> C++) devoted to this: https://github.com/r-lib/textshaping/tree/main/src -- 
>> Thomas Lin Pedersen is probably THE expert on this.
>> Cheers,
>> Simon
>>> On 24/09/2023, at 7:44 AM, Duncan Murdoch  wrote:
>>> 
>>> I am in the process of updating the rgl package.  One thing I'd like to do 
>>> is to change text support in it when using OpenGL to display to be more 
>>> like the way text is drawn in WebGL displays (i.e. the ones rglwidget() 
>>> produces).
>>> 
>>> Currently in R, rgl uses the FTGL library to draw text.  That library is 
>>> unsupported these days, and uses the old fixed pipeline in OpenGL.
>>> 
>>> In WebGL, text is displayed by "shaders", programs that run on the GPU. 
>>> Javascript code prepares bitmap images of the text to display, then the 
>>> shader transfers parts of that bitmap to the output display.
>>> 
>>> I'd like to duplicate the WebGL process in the C++ code running the OpenGL 
>>> display in R.  The first step in this is to render a character vector full 
>>> of text into an in-memory raster, taking account of font, cex, etc.  (I 
>>> want one raster for the whole vector, with a recording of locations from 
>>> which the shader should get each component of it.)
>>> 
>>> It looks to me as though I could do this using the ragg::agg_capture device 
>>> in R code, but I'd prefer to do it all in C++ code because I may need to 
>>> make changes to the raster at times when it's not safe to call back to R, 
>>> e.g. if some user interaction requires the axis labels to be recomputed and 
>>> redrawn.
>>> 
>>> Does anyone with experience doing this kind of thing know of examples I can 
>>> follow, or have advice on how to proceed?  Or want to volunteer to help 
>>> with this?
>>> 
>>> Duncan Murdoch
>>> 
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>>> 
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Help requested: writing text to a raster in memory

2023-09-24 Thread Simon Urbanek
Duncan,

drawing text is one of the most complicated things you can do, so it really 
depends how far you want to go. You can do it badly with the simple cairo 
show_text API. The steps involved in doing it properly are detecting the 
direction of the language, finding fonts, finding glyphs (resolving ligatures), 
applying hints, drawing glyphs etc. Fortunately there are libraries that help 
with that, but even then it's non-trivial. Probably the most modern pipeline is 
icu + harfbuzz + freetype + fontconfig + cairo. This is implemented, e.g. in 
https://github.com/s-u/Cairo/blob/master/src/cairotalk.c (the meat is in 
L608-) and for all but the drawing part there is an entire R package (in C++) 
devoted to this: https://github.com/r-lib/textshaping/tree/main/src -- Thomas 
Lin Pedersen is probably THE expert on this.

Cheers,
Simon


> On 24/09/2023, at 7:44 AM, Duncan Murdoch  wrote:
> 
> I am in the process of updating the rgl package.  One thing I'd like to do is 
> to change text support in it when using OpenGL to display to be more like the 
> way text is drawn in WebGL displays (i.e. the ones rglwidget() produces).
> 
> Currently in R, rgl uses the FTGL library to draw text.  That library is 
> unsupported these days, and uses the old fixed pipeline in OpenGL.
> 
> In WebGL, text is displayed by "shaders", programs that run on the GPU. 
> Javascript code prepares bitmap images of the text to display, then the 
> shader transfers parts of that bitmap to the output display.
> 
> I'd like to duplicate the WebGL process in the C++ code running the OpenGL 
> display in R.  The first step in this is to render a character vector full of 
> text into an in-memory raster, taking account of font, cex, etc.  (I want one 
> raster for the whole vector, with a recording of locations from which the 
> shader should get each component of it.)
> 
> It looks to me as though I could do this using the ragg::agg_capture device 
> in R code, but I'd prefer to do it all in C++ code because I may need to make 
> changes to the raster at times when it's not safe to call back to R, e.g. if 
> some user interaction requires the axis labels to be recomputed and redrawn.
> 
> Does anyone with experience doing this kind of thing know of examples I can 
> follow, or have advice on how to proceed?  Or want to volunteer to help with 
> this?
> 
> Duncan Murdoch
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [R-pkg-devel] R_orderVector1 - algo: radix, shell, or another?

2023-09-24 Thread Simon Urbanek
I think the logic Jeff had in mind is that R's order() uses C do_order() for 
method="shell", and since do_order() uses orderVector1(), by implication that 
is the shell-sort implementation.

order() itself uses whatever you specify in method=.
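For illustration, the flavor of algorithm behind orderVector1() can be sketched as an indirect shell sort that fills a permutation vector, order()-style. This is a sketch only; R's actual implementation differs in details such as the gap sequence and NA handling:

```c
/* Indirectly order n doubles: on return, idx holds the permutation
   that sorts x in ascending order, like R's order().  Shell sort
   with the simple halving gap sequence. */
void shell_order(const double *x, int *idx, int n) {
    for (int i = 0; i < n; i++) idx[i] = i;
    for (int h = n / 2; h > 0; h /= 2)
        for (int i = h; i < n; i++) {
            int t = idx[i], j = i;
            /* gapped insertion: shift larger elements h positions right */
            while (j >= h && x[idx[j - h]] > x[t]) {
                idx[j] = idx[j - h];
                j -= h;
            }
            idx[j] = t;
        }
}
```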

Cheers,
Simon


> On Sep 25, 2023, at 7:10 AM, Jan Gorecki  wrote:
> 
> Hi Jeff,
> 
> Yes I did. My question is about R_orderVector1 which is part of public R C
> api.
> Should I notice something relevant in the source of R's order?
> 
> Best
> Jan
> 
> On Sun, Sep 24, 2023, 17:27 Jeff Newmiller  wrote:
> 
>> Have you read the output of
>> 
>> order
>> 
>> entered at the R console?
>> 
>> 
>> On September 24, 2023 1:38:41 AM PDT, Jan Gorecki 
>> wrote:
>>> Dear pkg developers,
>>> 
>>> Are there any ways to check which sorting algorithm is being used when
>>> calling `order` function? Documentation at
>>> https://stat.ethz.ch/R-manual/R-devel/library/base/html/sort.html
>>> says it is radix for length < 2^31
>>> 
>>> On the other hand, I am using R_orderVector1, passing in a double vector
>>> shorter than 2^31. Its short description states
>>> "Fast version of 1-argument case of R_orderVector".
>>> Should I expect R_orderVector1 follow the same algo as R's order()? If so
>>> it should be radix as well.
>>> 
>>> 
>> https://github.com/wch/r-source/blob/ed51d34ec195b89462a8531b9ef30b7b72e47204/src/main/sort.c#L1133
>>> 
>>> If there is no way to check sorting algo, could anyone describe which one
>>> R_orderVector1 uses, and if there is easy API to use different ones from
>> C?
>>> 
>>> Best Regards,
>>> Jan Gorecki
>>> 
>>>  [[alternative HTML version deleted]]
>>> 
>>> __
>>> R-package-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-package-devel
>> 
>> --
>> Sent from my phone. Please excuse my brevity.
>> 
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] published packages not showing updated macOS results

2023-09-14 Thread Simon Urbanek
Aron,

one package managed to spawn a separate process that was blocking the build 
process (long story) and I was on the other side of the world. It should be 
fixed now, but it may take up to a day before the backlog is processed. In the 
future for faster response, please contact me directly - see "CRAN Binary 
Package Maintenance" on the CRAN Team webpage.

Cheers,
Simon


> On Sep 13, 2023, at 2:10 AM, Aron Atkins  wrote:
> 
> Hi.
> 
> It looks like macOS package publishing may have stalled. One of my
> packages, rsconnect 1.1.0, arrived on CRAN about a week ago. It is built
> almost everywhere, but r-release-macos* and r-release-macos-arm64 are still
> showing results from the previous release.
> 
> Recent releases did see check failures on the r-release-macos-x86_64 host
> (because its Pandoc installation was older than supported by one of our
> dependencies). The rsconnect 1.1.0 release should address these failures,
> but I am still waiting to see builds flow through.
> 
> https://cran.r-project.org/web/packages/rsconnect/index.html
> https://cran.r-project.org/web/checks/check_results_rsconnect.html
> 
> Is someone able to look into this or otherwise offer advice?
> 
> Thanks,
> Aron
> -- 
> email: aron.atk...@gmail.com
> home: http://gweep.net/~aron/
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] What to do when a package is archived from CRAN

2023-08-26 Thread Simon Urbanek
Yutani,


> On Aug 27, 2023, at 2:19 PM, Hiroaki Yutani  wrote:
> 
> Simon,
> 
> > it's assumed that GitHub history is the canonical source with the 
> > provenance, but that gets lost when pulled into the package.
> 
> No, not GitHub. You can usually find the ownership on crates.io. So, if you 
> want a target to blame, it's probably just a problem of the script to 
> auto-generate inst/AUTHORS in this specific case. But, clearly, Rust's 
> ecosystem works soundly under the existence of crates.io, so I think this is 
> the same kind of pain which you would feel if you use R without CRAN.
> feel if you use R without CRAN.
> 

Can you elaborate? I have not found anything that would have a list of authors 
in the sources. I fully agree that I know nothing about it, but even if you use 
R without CRAN, each package contains that information in the DESCRIPTION file 
since it's so crucial. So are you saying you have to use crates.io and do some 
extra step during the (misnamed) "vendor" step? (I didn't see the submitted tar 
ball of prqlr and its release on GitHub is not the actual package, so I can't 
check - thus I'm just trying to reverse-engineer what happens by looking at the 
dependencies, which leads to GitHub.)


> Sorry for nitpicking.
> 

Sure, good to get the fact straight.

Cheers,
Simon



> Best,
> Yutani
> 
> On Sun, Aug 27, 2023 at 6:57, Simon Urbanek wrote:
> Tatsuya,
> 
> What you do is contact CRAN. I don't think anyone here can answer your 
> question, only CRAN can, so ask there.
> 
> Generally, packages with sufficiently many Rust dependencies have to be 
> handled manually as they break the size limit, so auto-rejections are normal. 
> Archival is unusual, but it may have fallen through the cracks - but the way 
> to find out is to ask.
> 
> One related issue with respect to CRAN policies that I don't see a good 
> solution for is that inst/AUTHORS is patently unhelpful, because most of them 
> say "foo (version ..): foo authors" with no contact, or real names or any 
> links. That seems to be a problem stemming from the Rust community as there 
> doesn't seem to be any accountability with respect to ownership and 
> attribution. I don't know if it's because it's assumed that GitHub history is 
> the canonical source with the provenance, but that gets lost when pulled into 
> the package.
> 
> Cheers,
> Simon
> 
> PS: Your README says "(Rust 1.65 or later)", but the version condition is 
> missing from SystemRequirements.
> 
> 
> > On Aug 26, 2023, at 2:46 PM, SHIMA Tatsuya wrote:
> > 
> > Hi,
> > 
> > I noticed that my submitted package `prqlr` 0.5.0 was archived from CRAN on 
> > 2023-08-19.
> > <https://CRAN.R-project.org/package=prqlr>
> > 
> > I submitted prqlr 0.5.0 on 2023-08-13. I believe I have since only received 
> > word from CRAN that it passed the automated release process. 
> > <https://github.com/eitsupi/prqlr/pull/161>
> > So I was very surprised to find out after I returned from my trip that this 
> > was archived.
> > 
> > The CRAN page says "Archived on 2023-08-19 for policy violation. " but I 
> > don't know what exactly was the problem.
> > I have no idea what more to fix as I believe I have solved all the problems 
> > when I submitted 0.5.0.
> > 
> > Is there any way to know what exactly was the problem?
> > (I thought I sent an e-mail to CRAN 5 days ago but have not yet received an 
> > answer, so I decided to ask my question on this mailing list, thinking that 
> > there is a possibility that there will be no answer to my e-mail, although 
> > I may have to wait a few weeks for an answer. My apologies if this idea is 
> > incorrect.)
> > 
> > Best,
> > Tatsuya
> > 
> > __
> > R-package-devel@r-project.org mailing list
> > https://stat.ethz.ch/mailman/listinfo/r-package-devel
> > 
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel


[[alternative HTML version deleted]]

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] What to do when a package is archived from CRAN

2023-08-26 Thread Simon Urbanek
Tatsuya,

What you do is contact CRAN. I don't think anyone here can answer your 
question, only CRAN can, so ask there.

Generally, packages with sufficiently many Rust dependencies have to be handled 
manually as they break the size limit, so auto-rejections are normal. Archival 
is unusual, but it may have fallen through the cracks - but the way to find out 
is to ask.

One related issue with respect to CRAN policies that I don't see a good 
solution for is that inst/AUTHORS is patently unhelpful, because most of them 
say "foo (version ..): foo authors" with no contact, or real names or any 
links. That seems to be a problem stemming from the Rust community as there 
doesn't seem to be any accountability with respect to ownership and 
attribution. I don't know if it's because it's assumed that GitHub history is 
the canonical source with the provenance, but that gets lost when pulled into 
the package.

Cheers,
Simon

PS: Your README says "(Rust 1.65 or later)", but the version condition is 
missing from SystemRequirements.


> On Aug 26, 2023, at 2:46 PM, SHIMA Tatsuya  wrote:
> 
> Hi,
> 
> I noticed that my submitted package `prqlr` 0.5.0 was archived from CRAN on 
> 2023-08-19.
> 
> 
> I submitted prqlr 0.5.0 on 2023-08-13. I believe I have since only received 
> word from CRAN that it passed the automated release process. 
> 
> So I was very surprised to find out after I returned from my trip that this 
> was archived.
> 
> The CRAN page says "Archived on 2023-08-19 for policy violation. " but I 
> don't know what exactly was the problem.
> I have no idea what more to fix as I believe I have solved all the problems 
> when I submitted 0.5.0.
> 
> Is there any way to know what exactly was the problem?
> (I thought I sent an e-mail to CRAN 5 days ago but have not yet received an 
> answer, so I decided to ask my question on this mailing list, thinking that 
> there is a possibility that there will be no answer to my e-mail, although I 
> may have to wait a few weeks for an answer. My apologies if this idea is 
> incorrect.)
> 
> Best,
> Tatsuya
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Re-building vignettes had CPU time 9.2 times elapsed time

2023-08-25 Thread Simon Urbanek



> On Aug 26, 2023, at 11:01 AM, Dirk Eddelbuettel  wrote:
> 
> 
> On 25 August 2023 at 18:45, Duncan Murdoch wrote:
> | The real problem is that there are two stubborn groups opposing each 
> | other:  the data.table developers and the CRAN maintainers.  The former 
> | think users should by default dedicate their whole machine to 
> | data.table.  The latter think users should opt in to do that.
> 
> No, it feels more like it is CRAN versus the rest of the world.
> 


In reality it's more people running R on their laptops vs the rest of the 
world. Although people with laptops are the vast majority, they also are the 
least impacted by the decision going either way. I think Jeff summed up the 
core reasoning pretty well. Harm is done by excessive use, not the other way 
around.

That said, I think this thread is really missing the key point: there is no 
central mechanism that would govern the use of CPU resources. OMP_THREAD_LIMIT 
is just one of many ways and even that is vastly insufficient for reasons 
discussed (e.g, recursive use of processes). It is not CRAN's responsibility to 
figure out for each package what it needs to behave sanely - it has no way of 
knowing what type of parallelism is used, under which circumstances and how to 
control it. Only the package author knows that (hopefully), which is why it's 
on them. So instead of complaining here, a better use of time would be to look at 
what's being used in packages and come up with a unified approach to monitoring 
core usage and a mechanism by which the packages could self-govern to respect 
the desired limits. If there was one canonical place, it would be also easy for 
users to opt in/out as they desire - and I'd be happy to help if any components 
of it need to be in core R.
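Such a self-governing mechanism could amount to little more than every package consulting one agreed-upon cap before spawning workers. A hypothetical sketch in Python — the R_MAX_CORES variable name is made up for illustration; only OMP_THREAD_LIMIT is an existing convention:

```python
import os

def effective_cores(requested: int) -> int:
    """Clamp a requested worker count to an externally imposed cap.

    Consults a hypothetical R_MAX_CORES first, then the common
    OMP_THREAD_LIMIT; with no cap set, the request passes through.
    """
    for var in ("R_MAX_CORES", "OMP_THREAD_LIMIT"):
        cap = os.environ.get(var)
        if cap and cap.isdigit():
            return max(1, min(requested, int(cap)))
    return requested
```

With one canonical cap like this, opting in or out would be a single environment setting rather than per-package knobs.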



> Take but one example, and as I may have mentioned elsewhere, my day job 
> consists in providing software so that (to take one recent example) 
> bioinformatics specialist can slice huge amounts of genomics data.  When that 
> happens on a dedicated (expensive) hardware with dozens of cores, it would be 
> wasteful to have an unconditional default of two threads. It would be the end 
> of R among serious people, no more, no less. Can you imagine how the internet 
> headlines would go: "R defaults to two threads". 
> 

If you run on such a machine then you or your admin certainly know how to set 
the desired limits. From experience the problem is exactly the opposite - it's 
far more common for users to not know how to not overload such a machine. As 
for internet headlines, they will always be saying blatantly false things like 
"R is not for large data" even though we have been using it to analyze 
terabytes of data per minute ...

Cheers,
Simon



> And it is not just data.table as even in the long thread over in its repo we 
> have people chiming in using OpenMP in their code (as data.table does but 
> which needs a different setter than the data.table thread count).
> 
> It is the CRAN servers which (rightly !!) want to impose constraints for when 
> packages are tested.  Nobody objects to that.
> 
> But some of us wonder if settings these defaults for all R user, all the 
> time, unconditional is really the right thing to do.  Anyway, Uwe told me he 
> will take it to an internal discussion, so let's hope sanity prevails.
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] gdb availability on r-devel-linux-x86_64-fedora-gcc

2023-08-13 Thread Simon Urbanek



> On 14/08/2023, at 5:25 AM, Jamie Lentin  wrote:
> 
> Thanks both!
> 
> On 2023-08-12 23:52, Uwe Ligges wrote:
>> On 12.08.2023 23:19, Dirk Eddelbuettel wrote:
>>> On 12 August 2023 at 18:12, Uwe Ligges wrote:
>>> | On 12.08.2023 15:10, Jamie Lentin wrote:
>>> | > The system call in question is done by the TMB package[2], and not ours
>>> | > to tinker with:
>>> | >
>>> | >cmd <- paste("R --vanilla < ",file," -d gdb --debugger-args=\"-x",
>>> | > gdbscript,"\"")
>>> | >txt <- system(cmd,intern=TRUE,ignore.stdout=FALSE,ignore.stderr=TRUE)
>>> | >
>>> | > My only vaguely reasonable guess is that gdb isn't available on the host
>>> | > in question (certainly R will be!). How likely is this? Is it worth
>>> | > trying to resubmit with the call wrapped with an "if (gdb is on the 
>>> path)"?
>>> |
>>> | I guess it is really not available as that system got an update.
>>> | Note that you package does not declare any SystemRequirements. Please do
>>> | so and mention gdb.
> 
> It's TMB::gdbsource() that's calling system("R -d gdb"), so presumably the 
> SystemRequirements should live there rather than gadget3? I can raise an 
> issue suggesting this.
> 
>>> | Wrapping it in "if (gdb is on the path)" seems a good solution.
>>> Seconded esp as some systems may have lldb instead of gdb, or neither.
>>> Adding a simple `if (nzchar(Sys.which("gdb")))` should get you there.
>>> Dirk
>> Note that also
>> 1. The machine does not have R on the path (but Rdev)
> 
> Okay, I'll check for "all(nzchar(Sys.which(c('gdb', 'R'))))". This is 
> somewhat overkill, and the example won't run in some environments where 
> TMB::gdbsource() should work. However, at least it'll check that it works 
> for the relatively default case.
> 

Please note that it should not be calling some random program called R - it 
should be calling the R instance it's running in (as Uwe pointed out there may 
be several) so possibly something like file.path(R.home(),"bin","R")
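The same principle in another runtime, for comparison: invoke the interpreter you are already running under by its absolute path instead of whatever happens to be first on the PATH — a sketch using Python's counterpart of file.path(R.home(), "bin", "R"):

```python
import subprocess
import sys

# sys.executable is the absolute path of the running interpreter --
# the analogue of file.path(R.home(), "bin", "R") -- so the child
# process is guaranteed to be the same build, not a PATH lookup.
result = subprocess.run([sys.executable, "-c", "print('ok')"],
                        capture_output=True, text=True)
print(result.stdout.strip())  # ok
```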

Cheers,
Simon


>> 2. you need to use a current pandoc. Citing Professor Ripley: "The
>> platforms failing are using pandoc 3.1.6 or (newly updated, M1mac)
>> 3.1.6.1"
> 
> I'll be sure to try upgrading before resubmitting.
> 
> Thanks again for your help!
> 
>> Best,
>> Uwe Ligges
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] macOS results not mirrored/updated at CRAN

2023-08-10 Thread Simon Urbanek
Dirk,

thanks - one of those annoying cases where a script works in the login shell, 
but not in the cron job -- hopefully fixed.

Cheers,
Simon


> On 9/08/2023, at 12:45 AM, Dirk Eddelbuettel  wrote:
> 
> 
> Simon,
> 
> This is still an issue for arm64.  Uploaded tiledb and RQuantLib yesterday,
> both with already-built binaries for macOS (thank you!), but only the x86_64
> ones are on the results page.  Can you take another peek at this?
> 
> Thanks so much,  Dirk
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [Rd] Detecting physical CPUs in detectCores() on Linux platforms

2023-08-07 Thread Simon Urbanek



> On 8/08/2023, at 12:07 PM, Dirk Eddelbuettel  wrote:
> 
> 
> On 8 August 2023 at 11:21, Simon Urbanek wrote:
> | First, detecting HT vs cores is not necessarily possible in general, Linux 
> may assign core id to each HT depending on circumstances:
> | 
> | $ grep 'cpu cores' /proc/cpuinfo | uniq
> | cpu cores   : 32
> | $ grep 'model name' /proc/cpuinfo | uniq
> | model name  : Intel(R) Xeon(R) Gold 6142 CPU @ 2.60GHz
> | 
| and you can look up that Xeon 6142 has 16 cores.
> | 
> | Second, instead of "awk"ward contortions it's easily done in R with 
> something like
> | 
> | d=read.dcf("/proc/cpuinfo")
> | sum(as.integer(tapply(
> |   d[,grep("cpu cores",colnames(d))],
> |   d[,grep("physical id",colnames(d))], `[`, 1)))
> | 
> | which avoids subprocesses, quoting hell and all such issues...
> 
> Love the use of read.dcf("/proc/cpuinfo") !!
> 
> On my box a simpler
> 
>> d <- read.dcf("/proc/cpuinfo") 
>> as.integer(unique(d[, grep("cpu cores",colnames(d))]))
>  [1] 6
>> 
> 

I don't think that works on NUMA/SMP machines - you need to add all the cores 
for each CPU (that's why the above splits by physical id which is unique per 
cpu). On a dual-cpu machine:

> as.integer(unique(d[, grep("cpu cores",colnames(d))]))
[1] 32
> sum(as.integer(tapply(
 d[,grep("cpu cores",colnames(d))],
 d[,grep("physical id",colnames(d))], `[`, 1)))
[1] 64

Also things get quite fun on VMs as they can cobble together quite a few 
virtual CPUs regardless of the underlying hardware.

To be honest I think the motivation of this thread is dubious at best: it is a 
bad idea to use detectCores() blindly to specify parallelization and we 
explicitly say it's a bad idea - any sensible person will set it according to 
the demands, the hardware and the task. The number of cores is only partially 
relevant - e.g. if any I/O is involved you want to oversubscribe the CPU. If 
you have other users you want to only use a fraction etc. That doesn't mean 
that the we couldn't do a better job, but if you have to use detectCores() then 
you are already in trouble to start with.

Cheers,
Simon

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [Rd] Detecting physical CPUs in detectCores() on Linux platforms

2023-08-07 Thread Simon Urbanek
First, detecting HT vs cores is not necessarily possible in general, Linux may 
assign core id to each HT depending on circumstances:

$ grep 'cpu cores' /proc/cpuinfo | uniq
cpu cores   : 32
$ grep 'model name' /proc/cpuinfo | uniq
model name  : Intel(R) Xeon(R) Gold 6142 CPU @ 2.60GHz

and you can look up that Xeon 6142 has 16 cores.

Second, instead of "awk"ward contortions it's easily done in R with something 
like

d=read.dcf("/proc/cpuinfo")
sum(as.integer(tapply(
  d[,grep("cpu cores",colnames(d))],
  d[,grep("physical id",colnames(d))], `[`, 1)))

which avoids subprocesses, quoting hell and all such issues...
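The grouping logic can also be sketched outside R, here in Python over a captured /proc/cpuinfo fragment (a toy two-socket sample in the spirit of the machine above; real files repeat the fields once per logical CPU, which the per-socket dictionary absorbs):

```python
SAMPLE = """\
processor\t: 0
physical id\t: 0
cpu cores\t: 32

processor\t: 1
physical id\t: 1
cpu cores\t: 32
"""

def physical_cores(cpuinfo: str) -> int:
    # Record one "cpu cores" value per distinct "physical id" (socket),
    # then sum over sockets -- the same idea as the tapply(..., `[`, 1)
    # call in the R snippet above.
    per_socket = {}
    phys = None
    for line in cpuinfo.splitlines():
        key, _, val = (part.strip() for part in line.partition(":"))
        if key == "physical id":
            phys = val
        elif key == "cpu cores" and phys is not None:
            per_socket.setdefault(phys, int(val))
    return sum(per_socket.values())

print(physical_cores(SAMPLE))  # 64
```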

Cheers,
Simon


> On 8/08/2023, at 12:47 AM, Julian Hniopek  wrote:
> 
> On Mon, 2023-08-07 at 07:12 -0500, Dirk Eddelbuettel wrote:
>> 
>> On 7 August 2023 at 08:48, Nils Kehrein wrote:
>>> I recently noticed that `detectCores()` ignores the `logical=FALSE`
>>> argument on Linux platforms. This means that the function will
>>> always
>>> return the number of logical CPUs, i.e. it will count the number of
>>> threads
>>> that theoretically can run in parallel due to e.g. hyper-threading.
>>> Unfortunately, this can result in issues in high-performance
>>> computing use
>>> cases where hyper-threading might degrade performance instead of
>>> improving
>>> it.
>>> 
>>> Currently, src/library/parallel/R/detectCores.R uses the following
>>> R/shell
>>> code fragment to identify the number of logical CPUs:
>>> linux = 'grep "^processor" /proc/cpuinfo 2>/dev/null | wc -l'
>>> 
>>> As far as I understand, one could derive the number of online
>>> physical CPUs
>>> by parsing the contents of /sys/devices/system/cpu/* but that seems
>>> rather
>>> cumbersome. Instead, could we amend the R code with the following
>>> line?
>>> linux = if(logical) 'grep "^processor" /proc/cpuinfo 2>/dev/null |
>>> wc -l'
>>> else 'lscpu -b --parse="CORE" | tail -n +5 | sort -u | wc -l'
>> 
>> That's good but you also need to protect this from `lscpu` not being
>> in the
>> path.  Maybe `if (logical && nzchar(Sys.which("lscpu")))` ?
>> 
>> Dirk
>> 
> Alternatively, using only on POSIX utils which should be in the path of
> all Linux Systems and /proc/cpuinfo:
> 
> awk '/^physical id/{PHYS_ID=$NF; next} /^cpu cores/{print PHYS_ID"
> "$NF;}' /proc/cpuinfo 2>/dev/null | sort | uniq | awk '{sum+=$NF;} END
> {print sum}'.
> 
> Parses /proc/cpuinfo for the number of physical cores and physical id
> in each CPU. Only returns unique combinations of physical id (i.e.
> Socket) and core numbers. Then sums up the number of cores for each
> physicalid to get the total amount of physical cores.
> 
> Something I had lying around. Someone with better awk skills could
> probably do sorting and filtering in awk as well to save on pipes.
> Works on single and multisocket AMD/Intel from my experience.
> 
> Julian
>>> 
>>> This solution uses `lscpu` from `sys-utils`. The -b switch makes
>>> sure that
>>> only online CPUs/cores are listed and due to the --parse="CORE",
>>> the output
>>> will contain only a single column with logical core ids. It seems
>>> to do the
>>> job in my view, but there might be edge cases for exotic CPU
>>> topologies
>>> that I am not aware of.
>>> 
>>> Thank you, Nils
>>> 
>>> [[alternative HTML version deleted]]
>>> 
>>> __
>>> R-devel@r-project.org mailing list
>>> https://stat.ethz.ch/mailman/listinfo/r-devel
>> 
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [R-pkg-devel] macOS results not mirrored/updated at CRAN

2023-07-22 Thread Simon Urbanek
Dirk,

thanks. I have tried to address the problem: the actual sync problem for 
big-sur-x86_64 was fixed (as you can see, the results were updated after you 
reported it), but apparently there was another, independent problem with the 
cron jobs on that machine. I have changed the way the results sync is 
triggered, so hopefully that will make it more reliable.

Cheers,
Simon


> On Jul 23, 2023, at 12:26 AM, Dirk Eddelbuettel  wrote:
> 
> 
> Simon,
> 
> This still persists. As Murray reported, it happened for a while now, it is
> still happening eg package tiledb has been rebuilt everywhere [1] since the
> upload a few days ago -- yet the results page still reports builds two
> uploads ago [2] for both arm64 variants of your macOS setup.
> 
> Can you take a look, please?
> 
> Thanks in advance,  Dirk
> 
> 
> [1] https://cran.r-project.org/package=tiledb
> [2] https://cran.r-project.org/web/checks/check_results_tiledb.html
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] macOS results not mirrored/updated at CRAN

2023-07-15 Thread Simon Urbanek
I looked into it and there was no issue on the build machine or staging server, 
so it will require some more digging in the international waters .. hopefully 
sometime next week…

Cheers,
Simon


> On 16/07/2023, at 11:25, Dirk Eddelbuettel  wrote:
> 
> 
> Simon,
> 
> On 12 July 2023 at 19:02, Dirk Eddelbuettel wrote:
> | 
> | Simon,
> | 
> | It looks like some result mirroring / pushing from your machines to CRAN 
> fell
> | over.  One of my packages, digest 0.6.33, arrived on CRAN about a week ago,
> | is built almost everywhere (apart from macOS_release_x86_64 stuck at 0.6.32)
> | but the result page still has nags from the 0.6.31 build for macOS release
> | and one of the oldrel builds.
> | 
> | Could you look into that?  And if it is "just" general issue at CRAN as per
> | Uwe's email earlier I will happily wait.  But it has been in this frozen /
> | partial update of results state for a few days now.
> 
> digest on macOS x86_64 is still stuck at the 0.6.31 results on the summary
> page, displaying a now very stale and factually incorrect NOTE.
> 
> Can you please look into this?
> 
> Thanks,  Dirk
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Feedback on "Using Rust in CRAN packages"

2023-07-13 Thread Simon Urbanek



> On Jul 14, 2023, at 11:19 AM, Hadley Wickham  wrote:
> 
>>> If CRAN cannot trust even the official one of Rust, why does CRAN have Rust 
>>> at all?
>>> 
>> 
>> I don't see the connection - if you downloaded something in the past it 
>> doesn't mean you will be able to do so in the future. And CRAN has Rust 
>> because it sounded like a good idea to allow packages to use it, but I can 
>> see that it opened a can of worms that we are trying to tame here.
> 
> Can you give a bit more detail about your concerns here? Obviously
> crates.io isn't some random site on the internet, it's the official
> repository of the Rust language, supported by the corresponding
> foundation for the language. To me that makes it feel very much like
> CRAN, where we can assume if you downloaded something in the past, you
> can download something in the future.
> 

I was just responding to Yutani's question of why CRAN provides the Rust 
compilers at all. This really has nothing to do with the previous discussion, 
which is why I said "I don't see the connection". Also, I wasn't talking about 
crates.io anywhere in my responses in this thread. The only thing I wanted to 
discuss here was that I think the existing Rust model ("vendor" the sources 
into the package) seems like a good one to apply to Go, but that somehow got 
hijacked...

Cheers,
Simon

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Feedback on "Using Rust in CRAN packages"

2023-07-13 Thread Simon Urbanek
Yutani,

[moving back to the original thread, please don't cross-post]


> On Jul 13, 2023, at 3:34 PM, Hiroaki Yutani  wrote:
> 
> Hi Simon,
> 
> Thanks for the response. I thought
> 
>> download a specific version from a secure site and check that the
> download is the expected code by some sort of checksum
> 
> refers to the usual process that's done by Cargo automatically. If it's
> not, I think the policy should have a clear explanation. It seems it's not
> only me who wondered why this policy doesn't mention Cargo.lock at all.
> 


as explained. The instructions will be updated to make it clear that "cargo 
vendor" is the right tool here.
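For readers unfamiliar with the tool: `cargo vendor` copies every dependency pinned in Cargo.lock into a local `vendor/` directory and prints a configuration snippet that redirects Cargo to those local copies, so no network access is needed at install time. The snippet it emits looks roughly like this (placed in the crate's `.cargo/config.toml`; the directory name is whatever was passed to `cargo vendor`):

```toml
# emitted by `cargo vendor`; makes the build use the bundled sources
# instead of contacting crates.io at install time
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```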


>> it is not expected to use cargo to resolve them from random (possibly
> inaccessible) places
> 
> Yes, I agree with you. So, I suggested the possibility of forbidding the Git 
> dependency. Or, do you call crates.io, Rust's official repository, "random 
> places"?


No, as I understand it, the lock file can have arbitrary URLs, that's what I 
was referring to.


> If CRAN cannot trust even the official one of Rust, why does CRAN have Rust 
> at all?
> 


I don't see the connection - if you downloaded something in the past it doesn't 
mean you will be able to do so in the future. And CRAN has Rust because it 
sounded like a good idea to allow packages to use it, but I can see that it 
opened a can of worms that we are trying to tame here.


> That said, I agree with your concern about downloading via the Internet in
> general. Downloading is one of the common sources of failure. If you want
> to prevent cargo from downloading any source files, you can enforce adding
> --offline option to "cargo build". While the package author might feel
> unhappy, I think this would make your intent a bit clearer.
> 


I'm not a cargo expert, but I thought cargo build --offline is not needed if 
the dependencies are already vendored? If you think cargo users need more help 
with the steps, then feel free to propose what the instructions should say (we 
really assume that the authors know what they are doing).

Cheers,
Simon

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Feedback on "Using Rust in CRAN packages"

2023-07-12 Thread Simon Urbanek



> On 13/07/2023, at 2:50 PM, Kevin Ushey  wrote:
> 
> Package authors could use 'cargo vendor' to include Rust crate sources 
> directly in their source R packages. Would that be acceptable?
> 


Yes, that is exactly what was suggested in the original thread.

Cheers,
Simon



> Presumedly, the vendored sources would be built using the versions specified 
> in an accompanying Cargo.lock as well.
> 
> https://doc.rust-lang.org/cargo/commands/cargo-vendor.html
> 
> 
> On Wed, Jul 12, 2023, 7:35 PM Simon Urbanek  
> wrote:
> Yutani,
> 
> I'm not quite sure your reading fully matches the intent of the policy. 
> Cargo.lock is not sufficient, it is expected that the package will provide 
> *all* the sources, it is not expected to use cargo to resolve them from 
> random (possibly inaccessible) places. So the package author is expected to 
> either include the sources in the package *or* (if prohibitive due to extreme 
> size) have a release tar ball available at a fixed, secure, reliable location 
> (I was recommending Zenodo.org for that reason - GitHub is neither fixed nor 
> reliable by definition).
> 
> Based on that, I'm not sure I fully understand the scope of your proposal for 
> improvement. Cargo.lock is certainly the first step that the package author 
> should take in creating the distribution tar ball so you can fix the 
> versions, but it is not sufficient as the next step involves collecting the 
> related sources. We don't want R users to be involved in that can of worms 
> (especially since the lock file itself provides no guarantees of 
> accessibility of the components and we don't want to have to manually inspect 
> it), the package should be ready to be used which is why it has to do that 
> step first. Does that explain the intent better? (In general, the downloading 
> at install time is actually a problem, because it's not uncommon to use R in 
> environments that have no Internet access, but the download is a concession 
> for extreme cases where the tar balls may be too big to make it part of the 
> package, but it's yet another can of worms...).
> 
> Cheers,
> Simon
> 
> 
> 
> > On 13/07/2023, at 12:37 PM, Hiroaki Yutani  wrote:
> > 
> > Hi,
> > 
> > I'm glad to see CRAN now has its official policy about Rust [1]!
> > It seems it probably needs some feedback from those who are familiar with
> > the Rust workflow. I'm not an expert, but let me leave some quick feedback.
> > This email is sent to the R-package-devel mailing list as well as to cran@~
> > so that we can publicly discuss.
> > 
> > It seems most of the concern is about how to make the build deterministic.
> > In this regard, the policy should encourage including "Cargo.lock" file
> > [2]. Cargo.lock is created on the first compile, and the resolved versions
> > of dependencies are recorded. As long as this file exists, the dependency
> > versions are locked to the ones in this file, except when the package
> > author explicitly updates the versions.
> > 
> > Cargo.lock also records the SHA256 checksums of the crates if they are from
> > crates.io, Rust's official crate registry. If the checksums don't match,
> > the build will fail with the following message:
> > 
> >error: checksum for `foo v0.1.2` changed between lock files
> > 
> >this could be indicative of a few possible errors:
> > 
> >* the lock file is corrupt
> >* a replacement source in use (e.g., a mirror) returned a different
> > checksum
> >* the source itself may be corrupt in one way or another
> > 
> >unable to verify that `foo v0.1.2` is the same as when the lockfile was
> > generated
> > 
> > For dependencies from Git repositories, Cargo.lock records the commit
> > hashes. So, the version of the source code (not the version of the crate)
> > is uniquely determined. That said, unlike crates.io, it's possible that the
> > commit or the Git repository itself has disappeared at the time of
> > building, which makes the build fail. So, it might be reasonable the CRAN
> > policy prohibits the use of Git dependency unless the source code is
> > bundled. I have no strong opinion here.
> > 
> > Accordingly, I believe this sentence
> > 
> >> In practice maintainers have found it nigh-impossible to meet these
> > conditions whilst downloading as they have too little control.
> > 
> > is not quite true. More specifically, these things
> > 
> >> The standard way to download a Rust ‘crate’ is by its version number, and
> > these have been changed without changing their number.
> >> Down

Re: [R-pkg-devel] Feedback on "Using Rust in CRAN packages"

2023-07-12 Thread Simon Urbanek
Yutani,

I'm not quite sure your reading fully matches the intent of the policy. 
Cargo.lock is not sufficient, it is expected that the package will provide 
*all* the sources, it is not expected to use cargo to resolve them from random 
(possibly inaccessible) places. So the package author is expected to either 
include the sources in the package *or* (if prohibitive due to extreme size) 
have a release tar ball available at a fixed, secure, reliable location (I was 
recommending Zenodo.org for that reason - GitHub is neither fixed nor reliable 
by definition).

Based on that, I'm not sure I fully understand the scope of your proposal for 
improvement. Cargo.lock is certainly the first step that the package author 
should take in creating the distribution tar ball so you can fix the versions, 
but it is not sufficient as the next step involves collecting the related 
sources. We don't want R users to be involved in that can of worms (especially 
since the lock file itself provides no guarantees of accessibility of the 
components and we don't want to have to manually inspect it), the package 
should be ready to be used which is why it has to do that step first. Does that 
explain the intent better? (In general, the downloading at install time is 
actually a problem, because it's not uncommon to use R in environments that 
have no Internet access, but the download is a concession for extreme cases 
where the tar balls may be too big to make it part of the package, but it's yet 
another can of worms...).

Cheers,
Simon



> On 13/07/2023, at 12:37 PM, Hiroaki Yutani  wrote:
> 
> Hi,
> 
> I'm glad to see CRAN now has its official policy about Rust [1]!
> It seems it probably needs some feedback from those who are familiar with
> the Rust workflow. I'm not an expert, but let me leave some quick feedback.
> This email is sent to the R-package-devel mailing list as well as to cran@~
> so that we can publicly discuss.
> 
> It seems most of the concern is about how to make the build deterministic.
> In this regard, the policy should encourage including "Cargo.lock" file
> [2]. Cargo.lock is created on the first compile, and the resolved versions
> of dependencies are recorded. As long as this file exists, the dependency
> versions are locked to the ones in this file, except when the package
> author explicitly updates the versions.
> 
> Cargo.lock also records the SHA256 checksums of the crates if they are from
> crates.io, Rust's official crate registry. If the checksums don't match,
> the build will fail with the following message:
> 
>error: checksum for `foo v0.1.2` changed between lock files
> 
>this could be indicative of a few possible errors:
> 
>* the lock file is corrupt
>* a replacement source in use (e.g., a mirror) returned a different
> checksum
>* the source itself may be corrupt in one way or another
> 
>unable to verify that `foo v0.1.2` is the same as when the lockfile was
> generated
> 
> For dependencies from Git repositories, Cargo.lock records the commit
> hashes. So, the version of the source code (not the version of the crate)
> is uniquely determined. That said, unlike crates.io, it's possible that the
> commit or the Git repository itself has disappeared at the time of
> building, which makes the build fail. So, it might be reasonable the CRAN
> policy prohibits the use of Git dependency unless the source code is
> bundled. I have no strong opinion here.
> 
> Accordingly, I believe this sentence
> 
>> In practice maintainers have found it nigh-impossible to meet these
> conditions whilst downloading as they have too little control.
> 
> is not quite true. More specifically, these things
> 
>> The standard way to download a Rust ‘crate’ is by its version number, and
> these have been changed without changing their number.
>> Downloading a ‘crate’ normally entails downloading its dependencies, and
> that is done without fixing their version numbers
> 
> won't happen if the R package does include Cargo.lock because
> 
> - if the crate is from crates.io, "the version can never be overwritten,
> and the code cannot be deleted" there [3]
> - if the crate is from a Git repository, the commit hash is unique in its
> nature. The version of the crate might be the same between commits, but a
> git dependency is specified by the commit hash, not the version of the
> crate.
> 
> I'm keen to know what problems the CRAN maintainers have experienced that
> Cargo.lock cannot solve. I hope we can help somehow to improve the policy.
> 
> Best,
> Yutani
> 
> [1]: https://cran.r-project.org/web/packages/using_rust.html
> [2]: https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
> [3]: https://doc.rust-lang.org/cargo/reference/publishing.html
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 


Re: [R-pkg-devel] Best practices for CRAN package using Go

2023-07-12 Thread Simon Urbanek
Dewey,

you will definitely need to include all the necessary sources for your package. 
You may want to have a look at the "Using Rust"[1] document linked from the 
CRAN policy. I think Go is quite similar to Rust in that sense so you should 
use the same approach, i.e. checking for system and user installations (for go 
the official location is /usr/local/go/bin/go and it may not be on the PATH), 
declaring Go version dependency and making sure your package has included all 
module dependency sources (i.e. don't use install-time module 
resolution/download).
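A configure-time check along those lines might look like this (a sketch; only the `/usr/local/go/bin/go` location comes from the advice above, the rest is illustrative):

```shell
#!/bin/sh
# Look for a Go toolchain first on the PATH, then at the official
# installer location; GO stays empty if neither exists.
GO=$(command -v go 2>/dev/null || true)
if [ -z "$GO" ] && [ -x /usr/local/go/bin/go ]; then
    GO=/usr/local/go/bin/go
fi
if [ -n "$GO" ]; then
    "$GO" version        # e.g. to verify the declared Go version dependency
else
    echo "configure: no Go toolchain found" >&2
fi
```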

If you need to include a large source tar ball that is not permissible on CRAN, 
I'd recommend using Zenodo.org since it is specifically designed to facilitate 
longevity and reproducibility (as opposed to Github or other transient storage 
that may disappear at any point).

All that said, you may run into the same issues as Rust (errors and segfaults 
due to limited interoperability of compilers) so use with care and test well. 
External bindings like Rust or Go are only provided on "best effort" basis.

Cheers,
Simon

[1] - https://cran.r-project.org/web/packages/using_rust.html

PS: go is now available on the CRAN macOS builder machines and the Mac Builder 
(https://mac.r-project.org/macbuilder/submit.html).


> On 13/07/2023, at 2:36 AM, Dewey Dunnington  wrote:
> 
> Thank you! It seems I needed the refresher on CRAN policy regarding 
> downloading sources: it seems like the go.sum/go.mod provide sufficient 
> checksumming to comply with the policy, as you noted (with `go mod vendor` as 
> a backup if this turns out to not be acceptable). Downloading Go is probably 
> out based on the advice for Rust that explicitly forbids this.
> 
> Cheers!
> 
> -dewey
> 
> On 2023-07-10 11:09, Ivan Krylov wrote:
>> В Thu, 06 Jul 2023 15:22:26 -0300
>> Dewey Dunnington  пишет:
>>> I've wrapped two of these drivers for R that seem to build and
>>> install on MacOS, Linux, and Windows [3][4]; however, I am not sure
>>> if the pattern I used is suitable for CRAN or whether these packages
>>> will have to be GitHub-only for the foreseeable future.
>> There are a few parts to following the CRAN policy [*] regarding
>> external dependencies.
>> I think (but don't know for sure) that your package will not be allowed
>> to download Go by itself. The policy says: "Only as a last resort and
>> with the agreement of the CRAN team should a package download
>> pre-compiled software."
>> An already installed Go should be able to "first look to see if [a
>> dependency] is already installed and if so is of a suitable version"
>> when installing the dependencies of the Go part of the code. The go.mod
>> and go.sum files specify the exact versions and checksums of the
>> dependencies, which satisfies the requirement for fixed dependency
>> versions ("it is acceptable to download them as part of installation,
>> but do ensure that the download is of a fixed version rather than the
>> latest"), so your package seems to be fine in this respect.
>> One more thing: when bootstrapping the source package, can you run go
>> mod vendor [**] in order to bundle *all* the Go dependencies together
>> with the package? Is the resulting directory prohibitively large? Would
>> it satisfy the CRAN policy preference to "include the library sources
>> in the package and compile them as part of package installation"
>> without requiring Internet access? Unfortunately, I don't know enough
>> about Go to answer these questions myself. I think that a small bundle
> of vendored Go code would be preferable for CRAN but *not* preferable
>> for packaging in a GNU/Linux distro like Debian where dynamic linking
>> (in the widest possible sense) is a strong preference.
>> --
>> Best regards,
>> Ivan
>> [*] https://cran.r-project.org/web/packages/policies.html
>> [**] https://go.dev/ref/mod#vendoring
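The vendoring step Ivan describes would be run by the maintainer before `R CMD build`, not at install time. A sketch (the `src/go` module path is hypothetical):

```shell
#!/bin/sh
# Bundle all Go module dependencies into the package sources so that
# installation needs no network access.
vendor_go_deps() {
    cd "$1" || return 1     # module root inside the R package, e.g. src/go
    go mod verify &&        # check go.sum checksums first
    go mod vendor           # copy every dependency into ./vendor/
}

# only meaningful where a Go toolchain and the module directory exist
if command -v go >/dev/null 2>&1 && [ -d src/go ]; then
    vendor_go_deps src/go
fi
```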
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] gfortran: command not found

2023-07-05 Thread Simon Urbanek
To quote from the page you downloaded R from:

This release uses Xcode 14.2/14.3 and GNU Fortran 12.2. If you wish to compile 
R packages which contain Fortran code, you may need to download the 
corresponding GNU Fortran compiler from https://mac.R-project.org/tools. 



> On Jul 6, 2023, at 11:50 AM, Spencer Graves 
>  wrote:
> 
> Hello:
> 
> 
> "R CMD build KFAS" under macOS 11.7.8 stopped with:
> 
> 
> using C compiler: ‘Apple clang version 12.0.5 (clang-1205.0.22.9)’
> sh: gfortran: command not found
> using SDK: ‘MacOSX11.3.sdk’
> gfortran -arch x86_64  -fPIC  -Wall -g -O2  -c  approx.f90 -o approx.o
> make: gfortran: No such file or directory
> make: *** [approx.o] Error 1
> ERROR: compilation failed for package ‘KFAS'
> 
> 
> My web search suggests several different ways to fix this problem, 
> but I don't know which to try.
> 
> 
> 
> Suggestions?
> Thanks,
> Spencer Graves
> 
> 
> p.s.  I have both "brew" and "port" installed.  I recently used "port" to 
> upgrade another software package.  A web search suggested the following:
> 
> 
> sudo port install gcc48
> sudo port select --set gcc mp-gcc48
> 
> 
> However, this comment was posted roughly 9 years ago.  Below please 
> find sessionInfo().
> 
> 
> sessionInfo()
> R version 4.3.1 (2023-06-16)
> Platform: x86_64-apple-darwin20 (64-bit)
> Running under: macOS Big Sur 11.7.8
> 
> Matrix products: default
> BLAS: 
> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
>  
> LAPACK: 
> /Library/Frameworks/R.framework/Versions/4.3-x86_64/Resources/lib/libRlapack.dylib;
>   LAPACK version 3.11.0
> 
> locale:
> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
> 
> time zone: America/Chicago
> tzcode source: internal
> 
> attached base packages:
> [1] stats graphics  grDevices utils datasets  methods   base
> 
> loaded via a namespace (and not attached):
> [1] compiler_4.3.1  R6_2.5.1magrittr_2.0.3  cli_3.6.1
> [5] tools_4.3.1 glue_1.6.2  rstudioapi_0.14 roxygen2_7.2.3
> [9] xml2_1.3.4  vctrs_0.6.2 stringi_1.7.12  knitr_1.42
> [13] xfun_0.39   stringr_1.5.0   lifecycle_1.0.3 rlang_1.1.1
> [17] purrr_1.0.1
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [Rd] Correct use of tools::R_user_dir() in packages?

2023-06-28 Thread Simon Urbanek
Carl,

I think your statement is false, the whole point of R_user_dir() is for 
packages to have a well-defined location that is allowed - from CRAN policy:

"For R version 4.0 or later (hence a version dependency is required or only 
conditional use is possible), packages may store user-specific data, 
configuration and cache files in their respective user directories obtained 
from tools::R_user_dir(), provided that by default sizes are kept as small as 
possible and the contents are actively managed (including removing outdated 
material)."
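On Linux, the per-package directories handed out by tools::R_user_dir() follow the XDG conventions, with the R_USER_*_DIR variables taking precedence. A shell sketch of the cache case (the `mypkg` name is hypothetical; macOS and Windows use different base directories):

```shell
#!/bin/sh
# Resolution order mirroring tools::R_user_dir("mypkg", "cache") on Linux:
# R_USER_CACHE_DIR, then XDG_CACHE_HOME, then ~/.cache, plus "R/<package>".
pkg=mypkg
cache_root=${R_USER_CACHE_DIR:-${XDG_CACHE_HOME:-$HOME/.cache}}
dir="$cache_root/R/$pkg"
echo "$dir"
```

Setting R_USER_CACHE_DIR (e.g. to a temporary directory) redirects the whole tree, which is how the default can be overridden for checks.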

Cheers,
Simon


> On 28/06/2023, at 10:36 AM, Carl Boettiger  wrote:
> 
> tools::R_user_dir() provides configurable directories for R packages
> to write persistent information, consistent with each supported
> operating system's standard best practices for where applications
> should store data, config, and cache information respectively.  These
> standard best practices include writing to directories in the user's
> home filespace, which is also specifically
> against CRAN policy.
> 
> These defaults can be overridden by setting the environment
> variables R_USER_DATA_DIR, R_USER_CONFIG_DIR, R_USER_CACHE_DIR,
> respectively.
> 
> If R developers should be using the locations provided by
> tools::R_user_dir() in packages, why does CRAN's check procedure not
> set these three environment variables to a CRAN-compliant location by
> default (e.g. tempdir())?
> 
> In order to comply with CRAN policy, a package developer can obviously
> set these environmental variables themselves within the call for every
> example, every unit test, and every vignette.  Is this the recommended
> approach or is there a better technique?
> 
> Thanks for any clarification!
> 
> Regards,
> 
> Carl
> 
> ---
> Carl Boettiger
> http://carlboettiger.info/
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel


Re: [R-pkg-devel] Convention or standards for using header library (e.g. Eigen)

2023-06-23 Thread Simon Urbanek
Stephen,

If you want to give the system version a shot, I would simply look for 
pkg-config, add the supplied CPPFLAGS to the package R flags if present and 
then test (regardless of pkg-config) with AC_CHECK_HEADER (see standard R-exts 
autoconf rules for packages). If that fails then use your included copy by 
adding the corresponding -I flag pointing to your supplied copy. You should not 
download anything, as there is no expectation that the user has any internet 
access at the time of installation, so if you want to provide a fall-back, 
it should be in the sources of your package. That said, there is nothing wrong 
with ignoring the system version especially in this header-only case since you 
can then rely on the correct version which you tested - you can still allow the 
user to provide an option to override that behavior if desired. 
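As a concrete (hypothetical) shape for a configure script following that advice — try pkg-config first, then fall back to a bundled copy; the `../extlibs` path echoes the Makevars in the original post:

```shell
#!/bin/sh
# Prefer the system Eigen via pkg-config; otherwise point at the copy
# bundled with the package (path assumed from the post's Makevars).
if command -v pkg-config >/dev/null 2>&1 && pkg-config --exists eigen3; then
    EIGEN_CPPFLAGS=$(pkg-config --cflags eigen3)
else
    EIGEN_CPPFLAGS="-I../extlibs"
fi
echo "PKG_CPPFLAGS=$EIGEN_CPPFLAGS"
# a real script would then compile a small test program
# (AC_CHECK_HEADER-style) against these flags, and check
# EIGEN_WORLD_VERSION, before accepting them
```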

Cheers,
Simon



> On Jun 23, 2023, at 10:08 PM, Stephen Wade  wrote:
> 
> I recently submitted a package to CRAN which downloaded Eigen via Makevars
> and Makevars.win. My Makevars.ucrt was empty as I noted that Eigen3 is
> installed by default (however, this doesn't ensure that a version of Eigen
> compatible/tested with the package is available).
> 
> The source is currently on github:
> https://github.com/stephematician/literanger
> 
> Here is the Makevars
> 
> $ more src/Makevars
> # downloads eigen3 to extlibs/ and sets include location
> PKG_CPPFLAGS = -I../src -I../extlibs/
> .PHONY: all clean extlibs
> all: extlibs $(SHLIB)
> extlibs:
> "${R_HOME}/bin${R_ARCH_BIN}/Rscript" "../tools/extlibs.R"
> clean:
> rm -f $(SHLIB) $(OBJECTS)
> 
> The details of `extlibs.R` are fairly mundane, it downloads a release from
> gitlab and unzips it to `extlibs`.
> 
> CRAN gave me this feedback:
> 
>> Why do you download eigen here rather than using the system version of
>> Eigen if available?
>> 
>> We asked you to do that for Windows as you did in Makevars.ucrt. For
>> Unix-like OS you should only fall back (if at all) to some download if
>> the system Eigen is unavailable.
> 
> The problem is I'm not sure what a minimum standard to 'searching' for a
> system version of Eigen looks like. I also note that packages like
> RcppEigen simply bundle the Eigen headers within the package (and its
> repository) which will certainly ignore any system headers.
> 
> I would like a solution that would keep CRAN happy, i.e. i need to meet
> some standard for searching for the compiler flags, checking the version of
> the system headers, and then falling through to download release if the
> system headers fail.
> 
> 1.  For each platform (Unix, Windows, OS-X) what tool(s) should be invoked
> to check for compiler flags for a header-only library like Eigen? e.g.
> pkg-config, pkgconf? others?
> 2.  What is a reasonable approach for the possible package names for Eigen
> (e.g. typically libeigen3-dev on Debian, and eigen3 on arch, homebrew,
> others)? Is this enough?
> 3.  If pkg-config/pkgconf (or others) are unavailable, what is a reasonable
> standard for checking if the library can be built with some reasonable
> guess for the compiler flags (probably empty) - I assume I would need to
> try to compile a test program (within Makevars)?
> 4.  Following on from 3... would a package need to check (again via a test
> program) that the _system_ headers have the correct version (e.g. some
> static assert on EIGEN_WORLD_VERSION), and if that fails _then_ download
> the release from gitlab?
> 
> Any and all advice would be appreciated.
> 
> Kind regards,
> -Stephen Wade
> 
>   [[alternative HTML version deleted]]
> 
> __
> R-package-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-package-devel
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] NOTE about missing package ‘emmeans’ on macos-x86_64

2023-06-23 Thread Simon Urbanek



> On Jun 24, 2023, at 12:19 AM, Uwe Ligges  
> wrote:
> 
> 
> 
> On 23.06.2023 11:27, Helmut Schütz wrote:
>> Dear all,
>> since a while (January?) we face NOTEs in package checks 
>> (https://cran.r-project.org/web/checks/check_results_PowerTOST.html):
>> Version: 1.5-4
>> Check: package dependencies
>> Result: NOTE
>> Package suggested but not available for checking: ‘emmeans’
>> Flavor: r-release-macos-x86_64
>> Version: 1.5-4
>> Check: Rd cross-references
>> Result: NOTE
>> Package unavailable to check Rd xrefs: ‘emmeans’
>> Flavor: r-release-macos-x86_64
>> First I thought that ‘emmeans’ is not available for macos-x86_64 on CRAN.
>> However, ‘emmeans’ itself passed all checks 
>> (https://cran.r-project.org/web/checks/check_results_emmeans.html).
>> Since we want to submit v1.5-5 of PowerTOST soon, any ideas?
> 
> Please go ahead. Simon rarely updates the check results, so I guess this was 
> a coincidence at the time and never got updated. I'd ignore this one.
> 

Correct, packages are only re-checked if they failed the check before. Once a 
package passes the checks the results are not re-run, because it would take way 
too long given how many packages we have (re-running all takes 2-3 days).

If you don't intend to update your package and want such NOTEs to disappear, 
send me an email and I can run it by hand (I did now for PowerTOST and the NOTE 
is gone).

Cheers,
Simon

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Questions regarding a new (seperated package) and how to submit them to cran

2023-06-22 Thread Simon Urbanek
Bernd,

the sequence in which you submit doesn't matter - the packages have to work 
regardless of the sequence. Suggests means that the dependency is optional, not 
that it can break tests. You have to skip the tests that cannot be run due to 
missing dependencies (see section 1.1.3.1 in R-exts).

Cheers,
Simon



> On Jun 23, 2023, at 2:35 PM, Bernd.Gruber  
> wrote:
> 
> Hi,
> 
> I have a question regarding the separation of a package into smaller pieces 
> (to avoid long testing/installation times and, more importantly, too many 
> dependencies)
> 
> I am the maintainer of an R package (dartR) which has grown and is now at the 
> limit in terms of testing/run time and also dependencies. To further develop 
> the package we started to break the package into smaller packages namely
> 
> 
> Two core packages (dartR.base and dartR.data) and here dartR.base has 
> dartR.data in the depends. (dartR.base is 60% of the previous package) and 
> dartR.data is our data.package for test data (dartR.data is already on CRAN)
> 
> 
> 
> 
> Next to the two core packages we also have 3 more addon packages that deal 
> with specialised analysis
> 
> dartR.sim
> dartR.spatial
> dartR.popgenomics.
> 
> Those packages depend on dartR.base and dartR.data.
> 
> All addon packages and core packages should have the other addon packages in 
> Suggests - hence the question.
> 
> 
> How do I submit the packages? All of them at once, or step by step?
> 
> If I submit step by step (e.g. dartR.base) it obviously cannot have the other 
> dartR addon packages as suggests (cannot be tested and will break the CRAN 
> tests).
> 
> So would the correct way be to:
> Submit dartR.base (without dartR.sim, dartR.spatial and dartR.popgenomics in 
> the Suggests field)
> Then submit dartR.sim, then dartR.spatial and finally dartR.popgenomics (all 
> without suggests of the other packages)
> 
> And finally update all packages (only their description file and add the 
> suggests once they are on CRAN).
> 
> Hope that makes sense and thanks in advance,
> 
> Bernd
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] clang linker error: Symbol not found: _objc_msgSend$UTF8String

2023-06-18 Thread Simon Urbanek


Andreas,

Xcode update fixed the issue as expected so in due time the ERRORs should 
disappear.

Cheers,
Simon


> On 18/06/2023, at 10:29 AM, Simon Urbanek  wrote:
> 
> Andreas,
> 
> that is actually not your problem - the stubs are generated in glib, so your 
> package can do nothing about it, your compile flags won't change it. The only 
> way to fix it is on my end, the proper way is to upgrade to Xcode 14 for the 
> package builds, but that requires some changes to the build machine, so I'll 
> do it on Monday when I'm at work, so hold on tight in the meantime.
> 
> Cheers,
> Simon
> 
> Explanation of the issue for posterity: the issue is caused by Xcode 14 which 
> generates those stubs[1], but can also handle them. However, older Xcode 
> versions cannot. We are using macOS 11 target and SDK to ensure compatibility 
> with older macOS versions, but apparently Xcode 14 assumes that the linking 
> will still happen with Xcode 14 even if libraries are compiled for older 
> targets. Therefore the proper fix is to make sure that packages are also 
> linked with Xcode 14. Another work-around would be to compile glib with 
> -fno-objc-msgsend-selector-stubs so it would also work with older Xcode, but 
> it's more future-proof to just upgrade Xcode.
> 
> [1] https://github.com/llvm/llvm-project/issues/56034
> 
> 
>> On Jun 17, 2023, at 7:07 PM, Andreas Blätte  
>> wrote:
>> 
>> Dear colleagues,
>> 
>> 
>> 
>> after submitting a release of my package RcppCWB (no problems with test 
>> servers), CRAN check results reported ERRORS on the macOS check systems: 
>> https://cran.r-project.org/web/checks/check_results_RcppCWB.html
>> 
>> 
>> 
>> The core is that when test loading the package, you get the error: Symbol 
>> not found: _objc_msgSend$UTF8String
>> 
>> 
>> 
>> Picking up a solution discussed here (disable objc_msgSend stubs in clang), 
>> I modified the configure script of my package to pass the flag 
>> “-fno-objc-msgsend-selector-stubs“ to the linker, which I thought would 
>> solve the problem.
>> 
>> 
>> 
>> However: The CRAN Debian system for incoming R packages uses clang 15, which 
>> does not accept this flag any more, resulting in an error.
>> 
>> 
>> 
>> Certainly, I could refine my configure script to address a very specific 
>> scenario on CRAN macOS systems, i.e. making usage of the flag conditional on 
>> a specific clang version. But I am not sure whether this is the way to go. 
>> It would feel like a hack I would like to avoid.
>> 
>> 
>> 
>> Has anybody encountered this error? Is there a best practice or a recommended 
>> solution? I would be very glad to get your advice!
>> 
>> 
>> 
>> Kind regards
>> 
>> Andreas
>> 
>> 
>> 
>> --
>> 
>> Prof. Dr. Andreas Blaette
>> 
>> Professor of Public Policy
>> 
>> University of Duisburg-Essen
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] clang linker error: Symbol not found: _objc_msgSend$UTF8String

2023-06-17 Thread Simon Urbanek
Andreas,

that is actually not your problem - the stubs are generated in glib, so your 
package can do nothing about it, your compile flags won't change it. The only 
way to fix it is on my end, the proper way is to upgrade to Xcode 14 for the 
package builds, but that requires some changes to the build machine, so I'll do 
it on Monday when I'm at work, so hold on tight in the meantime.

Cheers,
Simon

Explanation of the issue for posterity: the issue is caused by Xcode 14 which 
generates those stubs[1], but can also handle them. However, older Xcode 
versions cannot. We are using macOS 11 target and SDK to ensure compatibility 
with older macOS versions, but apparently Xcode 14 assumes that the linking 
will still happen with Xcode 14 even if libraries are compiled for older 
targets. Therefore the proper fix is to make sure that packages are also linked 
with Xcode 14. Another work-around would be to compile glib with 
-fno-objc-msgsend-selector-stubs so it would also work with older Xcode, but 
it's more future-proof to just upgrade Xcode.

[1] https://github.com/llvm/llvm-project/issues/56034


> On Jun 17, 2023, at 7:07 PM, Andreas Blätte  
> wrote:
> 
> Dear colleagues,
> 
> 
> 
> after submitting a release of my package RcppCWB (no problems with test 
> servers), CRAN check results reported ERRORS on the macOS check systems: 
> https://cran.r-project.org/web/checks/check_results_RcppCWB.html
> 
> 
> 
> The core is that when test loading the package, you get the error: Symbol not 
> found: _objc_msgSend$UTF8String
> 
> 
> 
> Picking up a solution discussed here (disable objc_msgSend stubs in clang), I 
> modified the configure script of my package to pass the flag 
> “-fno-objc-msgsend-selector-stubs“ to the linker, which I thought would solve 
> the problem.
> 
> 
> 
> However: The CRAN Debian system for incoming R packages uses clang 15, which 
> does not accept this flag any more, resulting in an error.
> 
> 
> 
> Certainly, I could refine my configure script to address a very specific 
> scenario on CRAN macOS systems, i.e. making usage of the flag conditional on 
> a specific clang version. But I am not sure whether this is the way to go. It 
> would feel like a hack I would like to avoid.
> 
> 
> 
> Has anybody encountered this error? Is there a best practice or a recommended 
> solution? I would be very glad to get your advice!
> 
> 
> 
> Kind regards
> 
> Andreas
> 
> 
> 
> --
> 
> Prof. Dr. Andreas Blaette
> 
> Professor of Public Policy
> 
> University of Duisburg-Essen
> 
> 
> 
> 
> 
> 
> 
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [Rd] codetools wrongly complains about lazy evaluation in S4 methods

2023-06-13 Thread Simon Urbanek
I agree that this is not an R issue, but rather a user error of not defining a 
proper generic, so the check is right. Obviously, defining a generic with an 
implementation-specific ncol default makes no sense at all; it should only be 
part of the method implementation. If one were to implement the same default 
behavior in the generic itself (not necessarily a good idea), the default would 
be ncol = if (complete) nrow(qr.R(qr, TRUE)) else min(dim(qr.R(qr, TRUE))) to 
avoid relying on the internals of the implementation.

Cheers,
Simon
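A self-contained sketch of the two kinds of default being contrasted here (plain functions stand in for the S4 generic and method; only base R's qr.R() accessor is used, and the function names are made up):

```r
# Fragile: the default for 'ncol' refers to a variable 'R' that only
# exists inside the body -- it works via lazy evaluation, but codetools
# flags such formals when they are inherited by methods.
f_fragile <- function(qr, complete = FALSE,
                      ncol = if (complete) nrow(R) else min(dim(R))) {
  R <- qr.R(qr, complete)   # 'R' must be created before 'ncol' is used
  ncol
}

# Implementation-agnostic: the default only uses the public accessor
# qr.R(), so it does not depend on any locals of a method body.
f_robust <- function(qr, complete = FALSE,
                     ncol = if (complete) nrow(qr.R(qr, TRUE))
                            else min(dim(qr.R(qr, TRUE)))) {
  ncol
}

q <- qr(matrix(1:6, 3, 2))
f_fragile(q)        # 2
f_robust(q)         # 2
f_robust(q, TRUE)   # 3
```

Both versions evaluate to the same values at runtime; only the second keeps the default meaningful outside any particular method body.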


> On 14/06/2023, at 6:03 AM, Kasper Daniel Hansen 
>  wrote:
> 
> On Sat, Jun 3, 2023 at 11:51 AM Mikael Jagan  wrote:
> 
>> The formals of the newly generic 'qr.X' are inherited from the non-generic
>> function in the base namespace.  Notably, the inherited default value of
>> formal argument 'ncol' relies on lazy evaluation:
>> 
>>> formals(qr.X)[["ncol"]]
>> if (complete) nrow(R) else min(dim(R))
>> 
>> where 'R' must be defined in the body of any method that might evaluate
>> 'ncol'.
>> 
> 
> Perhaps I am misunderstanding something, but I think Mikael's expectations
> about the scoping rules of R are wrong.  The enclosing environment of ncol
> is where it was _defined_ not where it is _called_ (apologies if I am
> messing up the computer science terminology here).
> 
> This suggests to me that codetools is right.  But a more extended example
> would be useful. Perhaps there is something special with setOldClass()
> which I am not aware of.
> 
> Also, Bioconductor has 100s of packages with S4 where codetools works well.
> 
> Kasper
> 
> 
> __
> R-devel@r-project.org mailing list
> https://stat.ethz.ch/mailman/listinfo/r-devel
> 



Re: [R-pkg-devel] [External] Test fails on M1 Mac on CRAN, but not on macOS builder

2023-05-22 Thread Simon Urbanek
Florian,

ok, understood. It works for me on both M1 build machines, so I can't really 
help. I'd simply submit the new version on CRAN. Of course it would help if the 
tests were more informative such as actually showing the values involved on 
failure so you could at least have an idea from the output.

Cheers
Simon
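A hedged sketch of what "more informative on failure" could look like: a small helper (the names are made up, not from stepR) that prints the compared values at full precision, so a failure on a machine you cannot access is diagnosable from the check log alone:

```r
# Report the compared values (at full double precision) when a
# less-than-or-equal check fails, instead of just TRUE vs FALSE.
check_le <- function(compare, bound, label) {
  if (!(compare <= bound))
    stop(sprintf("%s: compare = %s exceeds bound = %s (diff = %g)",
                 label,
                 format(compare, digits = 17),
                 format(bound, digits = 17),
                 compare - bound))
  invisible(TRUE)
}

check_le(5L, 5L, "sqrt")   # passes silently
```

Wrapping the failing expectation in something like this (or passing an equally detailed info= string to expect_true) would show exactly which value drifted on the M1 machines.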



> On May 22, 2023, at 11:07 PM, Pein, Florian  wrote:
> 
> Dear Duncan and Simon,
> thank you both very much for your help.
> 
> I can make the test more informative and also break it down into substeps. 
> But I am unsure whether CRAN policies allow using their system for such 
> testing steps. I rather think not. Though I must say that I still do not know 
> how to otherwise fix the error. Can anyone, ideally a CRAN maintainer, 
> confirm that this is okay?
> 
> Regarding the tolerance, the test compares two small integers. In this 
> specific situation both sides are 5L on my local system. The tolerance is 
> there to ensure that 
> ncol(compareStat) * testalpha is not something like 4.9 due to 
> floating point approximations and we end up with 4L when as.integer() is 
> applied. This is not happening in the concrete example, since 
> ncol(compareStat) * testalpha = 5.67. I am very sure that the error is on the 
> left hand side and compare is larger than 5. 
> 
> In fact, tol in
> rejected[, i] <- teststat[i, ] > ret[i] + tol
> may need to be larger, since teststat contains quite large values. I think 
> there is a realistic chance that a floating point error occurs at this point. 
> But once again, I do not want to send a random guess to CRAN when I cannot 
> test whether this has fixed the problem or not. I have tested the old code 
> with --disable-long-double and compiler flags such as -ffloat-store and 
> -fexcess-precision=standard and it works. I do not know to what degree this 
> ensures that it works on all systems.
> 
> Many thanks and best wishes,
> Florian
> 



Re: [R-pkg-devel] Test fails on M1 Mac on CRAN, but not on macOS builder

2023-05-21 Thread Simon Urbanek
Florian,

looking at the notes for 2.1-4 it says the tolerance has the wrong sign, i.e. 
you're adding it to the value on both sides of the interval (instead of 
subtracting for the lower bound). In your latest version the tolerances get 
added everywhere, so that makes even less sense to me - but then, to be 
completely honest, I don't know what you actually intended. All I can say is: 
make sure you get the logic for the tolerance intervals right.

Cheers,
Simon
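The point about the sign of the tolerance can be illustrated with a small helper (a sketch, not code from stepR): the tolerance must widen the accepted interval on both ends, i.e. be subtracted from the lower bound and added to the upper one.

```r
# Accept x if it lies within [lower - tol, upper + tol]: subtracting on
# the lower side and adding on the upper side widens the interval.
# Adding tol to both bounds would instead *shift* the interval upward.
within_tol <- function(x, lower, upper, tol = 1e-12) {
  x >= lower - tol & x <= upper + tol
}

within_tol(0.1 + 0.2, 0.3, 0.3)   # TRUE, although 0.1 + 0.2 != 0.3 exactly
```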


> On 19/05/2023, at 9:49 PM, Pein, Florian  wrote:
> 
> Dear everyone,
> my R package stepR (https://cran.r-project.org/web/packages/stepR/) fails the 
> CRAN package checks on M1 Mac, but the error does not occur on the macOS 
> builder (https://mac.r-project.org/macbuilder/submit.html). So, I am unable 
> to reproduce the error and hence unable to fix it (staring at the code did 
> not help either).
> 
> The relevant part is
> 
> * checking tests ...
>  Running 'testthat.R' [35s/35s]
> [36s/36s] ERROR
> Running the tests in 'tests/testthat.R' failed.
> 
>> test_check("stepR")
>  [ FAIL 1 | WARN 0 | SKIP 23 | PASS 22741 ]
> 
> ── Failed tests ──────────────────────────────────────────────────────────────
> 
> ── Failure ('test-critVal.R:2463:3'): family 'hsmuce' works ──────────────────
>  compare <= as.integer(ncol(compareStat) * testalpha + tolerance) is not TRUE
> 
>  `actual`:   FALSE
>  `expected`: TRUE
>  sqrt
>  Backtrace:
>  ▆
>   1. └─stepR (local) testVector(...) at test-critVal.R:2463:2
>   2.   └─testthat::expect_true(...) at test-critVal.R:50:2
> 
>  [ FAIL 1 | WARN 0 | SKIP 23 | PASS 22741 ]
>  Error: Test failures
>  Execution halted
> 
> 
> Has anyone an idea how to tackle this problem?
> 
> The test code is long (a full version is available on CRAN). The following is 
> the code part that I think is relevant (once again I cannot reproduce the 
> error, so I am also unable to give a minimal reproducible example, I can only 
> guess one):
> 
> library(stepR)
> library(testthat)
> 
> testn <- 1024L
> teststat <- monteCarloSimulation(n = 1024L, r = 100L, family = "hsmuce") # 
> essentially a matrix with values generated by rnorm()
> testalpha <- 0.0567
> tolerance <- 1e-12
> 
> ret <- critVal(n = 1024L, penalty = "sqrt", output = "vector", family = 
> "hsmuce", alpha = testalpha, stat = teststat)
> 
> statVec <- as.numeric(teststat)
> tol <- min(min(diff(sort(statVec[is.finite(statVec)]))) / 2, 1e-12) # 
> different to the CRAN version to be robust to two values very close to each 
> other
> rejected <- matrix(FALSE, ncol(teststat), nrow(teststat))
> compare <- integer(ncol(teststat))
> 
> for (i in 1:nrow(teststat)) {
>  rejected[, i] <- teststat[i, ] > ret[i] + tol
> }
> 
> for (i in 1:ncol(teststat)) {
>  compare[i] <- max(rejected[i, ])
> }
> compare <- sum(compare)
> expect_true(compare <= as.integer(ncol(teststat) * testalpha + tolerance), 
> info = "sqrt")
> 
> # version with an additional tolerance (suggested when the test failed on 
> CRAN, but it does not help either)
> # both sides are small integers, so it should not be needed
> expect_true(as.integer(compare + tolerance) <= as.integer(ncol(teststat) * 
> testalpha + tolerance) + 0.5, info = "sqrt")
> 
> 
> I am not sure how to approach this problem, so any suggestions are very much 
> welcomed.
> Many thanks and best wishes,
> Florian Pein
> (Lancaster university)
> 
> 
> 



Re: [R-pkg-devel] Problems with devtools::build() in R

2023-05-17 Thread Simon Urbanek
This thread went way off the rails and was cross-posted so the solution is on 
R-SIG-Mac.

It was simply wrong Fortran with wrong R - installing latest R and Fortran 
(from CRAN or https://mac.r-project.org/tools/) is the easiest way to solve the 
problem.

Note that R binaries and tools go together so if in doubt, just go to CRAN and 
follow the instructions.

Cheers,
Simon



> On 18/05/2023, at 3:41 AM, Ivan Krylov  wrote:
> 
> On Wed, 17 May 2023 11:05:46 -0400
> Jarrett Phillips wrote:
> 
>> `which gfortran`  returns
>> 
>> /usr/local/bin/gfortran
> 
> I think you ran the other gfortran. Is there a gfortran installation in
> /opt/gfortran?
> 
>> libraries:
>> =/usr/local/gfortran/lib/gcc/aarch64-apple-darwin22/12.2.0/:/usr/local/gfortran/lib/gcc/aarch64-apple-darwin22/12.2.0/../../../../aarch64-apple-darwin22/lib/aarch64-apple-darwin22/12.2.0/:/usr/local/gfortran/lib/gcc/aarch64-apple-darwin22/12.2.0/../../../../aarch64-apple-darwin22/lib/:/usr/local/gfortran/lib/gcc/aarch64-apple-darwin22/12.2.0/../../../aarch64-apple-darwin22/12.2.0/:/usr/local/gfortran/lib/gcc/aarch64-apple-darwin22/12.2.0/../../../
> 
>> "/Library/Frameworks/R.framework/Resources/etc/Makeconf"
> 
> If you open this file, the flags
> -L/opt/R/arm64/gfortran/lib/gcc/aarch64-apple-darwin20.6.0/12.0.1
> -L/opt/R/arm64/gfortran/lib must be present in there somewhere. (Or
> maybe it's in ~/.R/Makevars, but you would've remembered creating it
> yourself.)
> 
> What if you replace the paths with the ones returned by gfortran,
> namely, -L/usr/local/gfortran/lib/gcc/aarch64-apple-darwin22/12.2.0
> -L/usr/local/gfortran/lib? (Even better, with the paths returned by
> /opt/gfortran/bin/gfortran -print-search-dirs, assuming this command
> works.) While you're at it, fix other Fortran-related paths like the
> path to the compiler. I still suspect you may end up having problems
> because your R was built with a different version of gfortran, but I
> don't know a better way of moving forward.
> 
> I'm going on general POSIX-like knowledge since I lack a Mac to test
> things on. Maybe R-SIG-Mac will have better advice for you.
> 
> -- 
> Best regards,
> Ivan
> 
> 



Re: [R-pkg-devel] Are the CRAN macOS builders down?

2023-05-17 Thread Simon Urbanek
Everything should be caught up by now: 
https://mac.r-project.org/reports/status.html

FWIW the macOS builds are simply run off published CRAN, so that's why the 
sequence is upload -> incoming-tests -> CRAN-src -> check+build -> macos-master 
-> CRAN-bin where each step may involve cron jobs which is why the total time 
from upload to published binary can take a bit of time.

Cheers,
Simon



> On 17/05/2023, at 10:55 AM, Dirk Eddelbuettel  wrote:
> 
> 
> On 17 May 2023 at 10:39, Simon Urbanek wrote:
> | Dirk,
> | 
> | thanks, ok, now I get what you meant. This has nothing to do with CRAN 
> uploads (which are handled in Vienna) this was about specific macOS builds. 
> The arm64 Big Sur build machine had apparently issues. I have re-started the 
> arm64 builds so they should catch up in a few hours.
> 
> Thanks but as I noted _all other non-arm64 macOS machines are also lagging_
> and now for about five days -- which is why wrote the email.
> 
> But good to know you are on it now!
> 
> Dirk
> 
> | 
> | Thanks,
> | Simon
> | 
> | 
> | > On 17/05/2023, at 8:39 AM, Dirk Eddelbuettel  wrote:
> | > 
> | > 
> | > Simon:
> | > 
> | > On 17 May 2023 at 07:57, Simon Urbanek wrote:
> | > | builds are immediate, so it is a matter of seconds for most packages. I 
> don't see any issues on the Mac Builder server.
> | > | If you have a problem, please be more specific and include the check 
> link returned at submission.
> | > 
> | > I was talking about _CRAN uploads_. To be as specific as you asked:
> | > 
> | > - crc32c on CRAN since May 11, all systems apart from macOS built but all
> | >   macOS builds missing
> | >   https://cran.r-project.org/web/checks/check_results_crc32c.html
> | > 
> | > - RcppSimdJson on CRAN since May 14, six linux + windows builds made, two
> | >   linux builds and all macOS missing
> | >   https://cran.r-project.org/web/checks/check_results_RcppSimdJson.html
> | > 
> | > So no builds on macOS for either of my uploads to CRAN. Can you comment?
> | > 
> | > Dirk
> | > 
> | > | Cheers,
> | > | Simon
> | > |  
> | > | 
> | > | > On 17/05/2023, at 4:27 AM, Dirk Eddelbuettel  wrote:
> | > | > 
> | > | > 
> | > | > Simon,
> | > | > 
> | > | > As a follow-up to the cmake questions (and me now knowing I have to 
> tell R
> | > | > where cmake is on macOS), I uploaded a new package last Thursday. It 
> has long
> | > | > built everywhere on CRAN, but not on macOS.  Ditto for another 
> package update
> | > | > from Sunday (RcppSimdJson) which also has not been touched.
> | > | > 
> | > | > Should I adjust my expectations that this can take a week or longer 
> on macOS,
> | > | > or did a few builds fall off the wagon?
> | > | > 
> | > | > Thanks as always for looking after that architecture.
> | > | > 
> | > | > Best,  Dirk
> | > | > 
> | > | > -- 
> | > | > dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> | > | > 
> | > | 
> | > 
> | > -- 
> | > dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> | > 
> | 
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 



Re: [R-pkg-devel] Are the CRAN macOS builders down?

2023-05-16 Thread Simon Urbanek
Dirk,

thanks, ok, now I get what you meant. This has nothing to do with CRAN uploads 
(which are handled in Vienna) this was about specific macOS builds. The arm64 
Big Sur build machine had apparently issues. I have re-started the arm64 builds 
so they should catch up in a few hours.

Thanks,
Simon


> On 17/05/2023, at 8:39 AM, Dirk Eddelbuettel  wrote:
> 
> 
> Simon:
> 
> On 17 May 2023 at 07:57, Simon Urbanek wrote:
> | builds are immediate, so it is a matter of seconds for most packages. I 
> don't see any issues on the Mac Builder server.
> | If you have a problem, please be more specific and include the check link 
> returned at submission.
> 
> I was talking about _CRAN uploads_. To be as specific as you asked:
> 
> - crc32c on CRAN since May 11, all systems apart from macOS built but all
>   macOS builds missing
>   https://cran.r-project.org/web/checks/check_results_crc32c.html
> 
> - RcppSimdJson on CRAN since May 14, six linux + windows builds made, two
>   linux builds and all macOS missing
>   https://cran.r-project.org/web/checks/check_results_RcppSimdJson.html
> 
> So no builds on macOS for either of my uploads to CRAN. Can you comment?
> 
> Dirk
> 
> | Cheers,
> | Simon
> |  
> | 
> | > On 17/05/2023, at 4:27 AM, Dirk Eddelbuettel  wrote:
> | > 
> | > 
> | > Simon,
> | > 
> | > As a follow-up to the cmake questions (and me now knowing I have to tell R
> | > where cmake is on macOS), I uploaded a new package last Thursday. It has 
> long
> | > built everywhere on CRAN, but not on macOS.  Ditto for another package 
> update
> | > from Sunday (RcppSimdJson) which also has not been touched.
> | > 
> | > Should I adjust my expectations that this can take a week or longer on 
> macOS,
> | > or did a few builds fall off the wagon?
> | > 
> | > Thanks as always for looking after that architecture.
> | > 
> | > Best,  Dirk
> | > 
> | > -- 
> | > dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> | > 
> | 
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 



Re: [R-pkg-devel] Are the CRAN macOS builders down?

2023-05-16 Thread Simon Urbanek
Dirk,

builds are immediate, so it is a matter of seconds for most packages. I don't 
see any issues on the Mac Builder server.
If you have a problem, please be more specific and include the check link 
returned at submission.

Cheers,
Simon
 

> On 17/05/2023, at 4:27 AM, Dirk Eddelbuettel  wrote:
> 
> 
> Simon,
> 
> As a follow-up to the cmake questions (and me now knowing I have to tell R
> where cmake is on macOS), I uploaded a new package last Thursday. It has long
> built everywhere on CRAN, but not on macOS.  Ditto for another package update
> from Sunday (RcppSimdJson) which also has not been touched.
> 
> Should I adjust my expectations that this can take a week or longer on macOS,
> or did a few builds fall off the wagon?
> 
> Thanks as always for looking after that architecture.
> 
> Best,  Dirk
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 



Re: [R-pkg-devel] Please install cmake on macOS builders

2023-05-11 Thread Simon Urbanek
I think it would be quite useful to have some community repository of code 
snippets dealing with such situations. R-exts gives advice and pieces of code 
which are useful, but they are not complete solutions and situations like 
Dirk's example are not that uncommon. (E.g., I recall some of the spatial 
packages copy/pasting code from each other for quite some time - which works, 
but is error prone if changes need to be made).

If one has to rely on a 3rd party library and one wants to fall back to source 
compilation when it is not available, it is a quite complex task, because one 
has to match the library's build system to R's and the package build rules as 
well. There are many ways where this can go wrong - Dirk mentioned some of them 
- and ideally not every package developer in that situation should be going 
through the pain of learning all the details the hard way.

Of course there are other packages as an example, but for someone not familiar 
with the details it's hard to see which ones do it right, and which ones don't 
- we don't always catch all the bad cases on CRAN.

I don't have a specific proposal, but if there was a GitHub repo or wiki or 
something to try to distill the useful bits from existing packages, I'd be 
happy to review it and give advice based on my experience from that macOS 
binary maintenance if that's useful.

Cheers,
Simon
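As one example of the kind of snippet such a collection could distill, here is a minimal, hypothetical src/Makevars that drives cmake to build a bundled static library before the package's own sources are compiled (the library name and paths are made up; a real package would locate cmake in a configure script and pass its path in, since on the CRAN macOS machines it lives outside the PATH):

```make
# Hypothetical src/Makevars: build bundled libmylib.a via cmake, then
# link it into the package's shared object.
CMAKE ?= cmake

PKG_CPPFLAGS = -Imylib/include
PKG_LIBS = mylib/build/libmylib.a

# Ensure the static library exists before the shared object is linked.
$(SHLIB): mylib/build/libmylib.a

mylib/build/libmylib.a:
	$(CMAKE) -S mylib -B mylib/build \
	    -DCMAKE_POSITION_INDEPENDENT_CODE=ON \
	    -DCMAKE_BUILD_TYPE=Release
	$(MAKE) -C mylib/build
```

The position-independent-code flag matters here: without it, a static library generally cannot be linked into R's shared object.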


> On May 12, 2023, at 8:36 AM, Dirk Eddelbuettel  wrote:
> 
> 
> Hi Reed,
> 
> On 11 May 2023 at 11:15, Reed A. Cartwright wrote:
> | I'm curious why you chose to call cmake from make instead of from configure.
> | I've always seen cmake as part of the configure step of package building.
> 
> Great question! Couple of small answers: i) This started as a 'proof of
> concept' that aimed to be small so getting by without requiring `configure`
> seemed worth a try, ii) I had seen another src/Makevars invoking compilation
> of a static library in a similar (albeit non-cmake) way and iii) as we now
> know about section 1.2.6 (or soon 1.2.9) 'Using cmake' has it that way too.
> 
> Otherwise I quite like having `configure` and I frequently use it -- made
> from 'genuine' configire.in via `autoconf`, or as scripts in shell or other
> languages.
> 
> Cheers, Dirk
> 
> PS My repaired package is now on CRAN. I managed to bungle the static library
> build (by not telling `cmake` to use position independent code), bungled
> macOS but not telling myself where `cmake` could live, and in fixing that
> bungled Windows by forgetting to add `src/Makevars.win` fallback. Yay me.
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 
> 



Re: [R-pkg-devel] Please install cmake on macOS builders

2023-05-10 Thread Simon Urbanek
Dirk,

can you be more specific, please? I suspect that it may be rather an issue in 
your package. All build machines have the official cmake releases installed and 
there are many packages that use it successfully. Here is the report on the 
currently installed versions. If you require a more recent version, let me know.

high-sierra-x86_64$ /Applications/CMake.app/Contents/bin/cmake --version | head 
-n1
cmake version 3.17.3

big-sur-arm64$ /Applications/CMake.app/Contents/bin/cmake --version | head -n1
cmake version 3.19.4

mac-builder-arm64$ /Applications/CMake.app/Contents/bin/cmake --version | head 
-n1
cmake version 3.21.2

big-sur-x86_64$ /Applications/CMake.app/Contents/bin/cmake --version | head -n1
cmake version 3.26.0

Cheers,
Simon


> On May 11, 2023, at 12:01 AM, Dirk Eddelbuettel  wrote:
> 
> 
> Simon,
> 
> Explicitly declaring
> 
>SystemRequirements: cmake
> 
> appears to be insufficient to get a build on the (otherwise lovely to have)
> 'macOS builder', and leads to failure on (at least) 'r-oldrel-macos-x86_64'.
> 
> Would it be possible to actually have cmake installed?
> 
> These days cmake is, for better or worse, becoming a standard, and I rely on it
> for one (new) package to correctly configure a library. It would be nice to
> be able to rely on it on macOS too.
> 
> Thanks,  Dirk
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 



Re: [R-pkg-devel] Unfortunate function name generic.something

2023-05-09 Thread Simon Urbanek
Duncan,

you're right that any functions in the call environment are always treated as 
methods (even before consulting method registrations). That is a special case - 
I presume for compatibility with the world before namespaces so that, e.g., you 
don't have to register methods in the global environment when working 
interactively. I wonder if that is something that packages could choose to opt 
out of for safety since they are already relying on method registration (and 
that would also in theory improve performance).

One interesting related issue is that in the current implementation of the 
method registration there is no concept of "private" methods (which is what the 
above rule effectively provides) since methods get registered with the generic, 
so they are either visible to everyone or not at all. If one would really want 
to support this, it would require a kind of "local" registration and then 
replacing the name-based search up the call chain with local registration 
search - but probably again at the cost of performance.

Cheers,
Simon
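The caller-environment special case is easy to demonstrate in a plain script, without any package or S3method() registration (this mirrors the levels.no example discussed in this thread):

```r
# A function whose name matches generic.class and is visible in the
# environment from which the generic is called gets dispatched as an
# S3 method, even though it was never registered via S3method().
levels.no <- function(x, ...) "picked up as a method"

x <- structure(1, class = "no")
levels(x)   # "picked up as a method", not NULL
```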


> On May 9, 2023, at 11:23 AM, Duncan Murdoch  wrote:
> 
> On 08/05/2023 6:58 p.m., Simon Urbanek wrote:
>>> On 8/05/2023, at 11:58 PM, Duncan Murdoch  wrote:
>>> 
>>> There really isn't such a thing as "a function that looks like an S3 
>>> method, but isn't".  If it looks like an S3 method, then in the proper 
>>> circumstances, it will be called as one.
>>> 
>> I disagree - that was the case in old versions, but not anymore. The whole 
>> point of introducing namespaces and method registration was to make it clear 
>> when a function is a method and when it is a function. If you export a 
>> function it won't be treated as a method:
>> In a package NAMESPACE:
>> export(foo.cls)
>> package R code: foo.cls <- function(x) "foo.cls"
>> in R:
>>> cls=structure(1,class="cls")
>>> foo=function(x) UseMethod("foo")
>>> foo(cls)
>> Error in UseMethod("foo") :
>>   no applicable method for 'foo' applied to an object of class "cls"
>>> foo.cls(cls)
>> [1] "foo.cls"
>> So R knows very well what is a method and what is a function. If you wanted 
>> it to be a method, you have to use S3method(foo, cls) and that **is** 
>> different from export(foo.cls) - quite deliberately so.
> 
> That is true for package users, but it's not true within the package.  I just 
> tested this code in a package:
> 
>  levels.no <- function(xx, ...) {
>stop("not a method")
>  }
> 
>  f <- function() {
>x <- structure(1, class = "no")
>levels(x)
>  }
> 
> Both levels.no and f were exported.  If I attach the package and call f(), I 
> get the error
> 
>  > library(testpkg)
>  > f()
>  Error in levels.no(x) : not a method
> 
> because levels.no is being treated as a method when levels() is called in the 
> package.
> 
> If I create an x like that outside of the package and call levels(x) there, I 
> get NULL, because levels.no is not being treated as a method in that context.
> 
> As far as I know, there is no possible way to have a function in a package 
> that is called "levels.no" and not being treated as a method within the 
> package.  I don't think there's any way to declare "this is not a method", 
> other than naming it differently.
> 
> Duncan
> 
>> Cheers,
>> Simon
>>> In your case the function name is levels.no, and it isn't exported.  So if 
>>> you happen to have an object with a class inheriting from "no", and you 
>>> call levels() on it, levels.no might be called.
>>> 
>>> This will only affect users of your package indirectly.  If they have 
>>> objects inheriting from "no" and call levels() on them, levels.no will not 
>>> be called.  But if they pass such an object to one of your package 
>>> functions, and that function calls levels() on it, they could end up 
>>> calling levels.no().  It all depends on what other classes that object 
>>> inherits from.
>>> 
>>> You can test this yourself.  Set debugging on any one of your functions, 
>>> then call it in the normal way.  Then while still in the debugger set 
>>> debugging on levels.no, and create an object using
>>> 
>>>  x <- structure(1, class = "no")
>>> 
>>> and call levels(x).  You should break to the code of levels.no.
>>> 
>>> That is why the WRE manual says "First, a caveat: a function named gen.cl 
>>> will be invoked by the generic gen for

Re: [R-pkg-devel] Unfortunate function name generic.something

2023-05-08 Thread Simon Urbanek



> On 8/05/2023, at 11:58 PM, Duncan Murdoch  wrote:
> 
> There really isn't such a thing as "a function that looks like an S3 method, 
> but isn't".  If it looks like an S3 method, then in the proper circumstances, 
> it will be called as one.
> 


I disagree - that was the case in old versions, but not anymore. The whole 
point of introducing namespaces and method registration was to make it clear 
when a function is a method and when it is a function. If you export a function 
it won't be treated as a method:

In a package NAMESPACE:
export(foo.cls)
package R code: foo.cls <- function(x) "foo.cls"

in R:
> cls=structure(1,class="cls")
> foo=function(x) UseMethod("foo")
> foo(cls)
Error in UseMethod("foo") : 
  no applicable method for 'foo' applied to an object of class "cls"
> foo.cls(cls)
[1] "foo.cls"

So R knows very well what is a method and what is a function. If you wanted it 
to be a method, you have to use S3method(foo, cls) and that **is** different 
from export(foo.cls) - quite deliberately so.
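As a side note not in the original message: since R 4.0.0, base R also provides `.S3method()`, which performs at the top level the same explicit registration that `S3method(foo, cls)` does in a package NAMESPACE. A minimal sketch (at the top level the method would also be found by name, so this only mirrors the registration mechanics):

```r
# Sketch: explicit S3 registration outside a package (requires R >= 4.0.0).
foo <- function(x) UseMethod("foo")
foo.bar <- function(x) "foo.bar"   # an ordinary function until registered

.S3method("foo", "bar", foo.bar)   # what S3method(foo, bar) does in a NAMESPACE

obj <- structure(1, class = "bar")
foo(obj)                           # dispatches to foo.bar
```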

Cheers,
Simon


> In your case the function name is levels.no, and it isn't exported.  So if 
> you happen to have an object with a class inheriting from "no", and you call 
> levels() on it, levels.no might be called.
> 
> This will only affect users of your package indirectly.  If they have objects 
> inheriting from "no" and call levels() on them, levels.no will not be called. 
>  But if they pass such an object to one of your package functions, and that 
> function calls levels() on it, they could end up calling levels.no().  It all 
> depends on what other classes that object inherits from.
> 
> You can test this yourself.  Set debugging on any one of your functions, then 
> call it in the normal way.  Then while still in the debugger set debugging on 
> levels.no, and create an object using
> 
>  x <- structure(1, class = "no")
> 
> and call levels(x).  You should break to the code of levels.no.
> 
> That is why the WRE manual says "First, a caveat: a function named gen.cl 
> will be invoked by the generic gen for class cl, so do not name functions in 
> this style unless they are intended to be methods."
> 
> So probably the best solution (even if inconvenient) is to rename levels.no 
> to something that doesn't look like an S3 method.
> 
> Duncan Murdoch
> 
> On 08/05/2023 5:50 a.m., Ulrike Groemping wrote:
>> Thank you for the solution attempt. However, using the keyword internal
>> does not solve the problem; the note is still there. Any other proposals
>> for properly documenting a function that looks like an S3 method, but isn't?
>> Best, Ulrike
>> Am 05.05.2023 um 12:56 schrieb Iris Simmons:
>>> You can add
>>> 
>>> \keyword{internal}
>>> 
>>> to the Rd file. Your documentation won't show up in the pdf
>>> manual, it won't show up in the package index, but you'll still be
>>> able to access the doc page with ?levels.no or
>>> help("levels.no").
>>> 
>>> This is usually used in a package's deprecated and defunct doc pages,
>>> but you can use it anywhere.
>>> 
>>> On Fri, May 5, 2023, 06:49 Ulrike Groemping
>>>  wrote:
>>> 
>>> Dear package developeRs,
>>> 
>>> I am working on fixing some notes regarding package DoE.base.
>>> One note refers to the function levels.no and
>>> complains that the
>>> function is not documented as a method for the generic function
>>> levels.
>>> Actually, it is not a method for the generic levels, but a standalone
>>> internal function that I like to have documented.
>>> 
>>> Is there a way to document the function without renaming it and
>>> without
>>> triggering a note about method documentation?
>>> 
>>> Best, Ulrike
>>> 
>>> --
>>> ##
>>> ## Prof. Ulrike Groemping
>>> ## FB II
>>> ## Berliner Hochschule für Technik (BHT)
>>> ##
>>> ## prof.bht-berlin.de/groemping 
>>> ## Phone: +49(0)30 4504 5127
>>> ## Fax:   +49(0)30 4504 66 5127
>>> ## Home office: +49(0)30 394 04 863
>>> ##
>>> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] How to declare Bioconductor Dependencies in the Description File of my R Package

2023-05-03 Thread Simon Urbanek



> On May 4, 2023, at 3:36 AM, Martin Morgan  wrote:
> 
> CRAN is fine with Bioconductor Depends: and Imports: dependencies, as 
> previously mentioned. This is because the CRAN maintainers explicitly 
> configure their system to know about Bioconductor package repositories.
> 

That is not exactly true (at least not for all maintainers ;)). Bioconductor 
packages are installed on an as-needed (best-effort) basis and it is a manual 
process. Ideally, Bioconductor packages would be in Suggests, because if they 
are not, the package binary will be effectively broken for most users, as they 
cannot install it without additional steps (and no stable state can be 
guaranteed, either). That's why I believe someone was suggesting a pre-flight 
check that alerts the user to such a situation and prints instructions to remedy 
it (e.g., to use setRepositories()), as the majority of users will have no idea 
what's going on.
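A minimal sketch of such a pre-flight check; the hook body and the dependency name "SomeBiocPkg" are placeholders, not anything from an actual package:

```r
# Hypothetical .onAttach hook alerting users to a missing Bioconductor
# dependency ("SomeBiocPkg" is a placeholder name).
.onAttach <- function(libname, pkgname) {
  if (!requireNamespace("SomeBiocPkg", quietly = TRUE)) {
    packageStartupMessage(
      "The Bioconductor package 'SomeBiocPkg' is not installed.\n",
      "Enable Bioconductor repositories with setRepositories(), or run\n",
      "  BiocManager::install(\"SomeBiocPkg\")"
    )
  }
}
```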

Cheers,
Simon



> Users face a different challenge -- many users will not have identified 
> (e.g., via `setRepositories()`) a Bioconductor repository, so when they try to 
> install your package it will fail in a way that you have no control over -- a 
> generic message saying that the Bioconductor dependencies were not found.
> 
> You could mitigate this by advertising that your CRAN package should be 
> installed via `BiocManager::install("")`, which defines 
> appropriate repositories for both CRAN and Bioconductor, but there is no way 
> to unambiguously communicate this to users.
> 
> Martin
> 
> From: R-package-devel  on behalf of 
> Ruff, Sergej 
> Date: Wednesday, May 3, 2023 at 11:13 AM
> To: Dirk Eddelbuettel 
> Cc: r-package-devel@r-project.org 
> Subject: Re: [R-pkg-devel] How to declare Bioconductor Dependencies in the 
> Description File of my R Package
> Thank you, Dirk.
> 
> 
> I see your dependencies are Suggested. I know that Suggested dependencies 
> should be conditional.
> 
> 
> Do you know if non-CRAN (Bioconductor) packages need to be conditional?  Do 
> you have any experience regarding non-CRAN dependencies
> 
> and how to handle them?
> 
> 
> I trust Duncan Murdoch's experience and opinion regarding that topic, but I 
> will take any second and third opinion to be sure.
> 
> 
> Thank you for your help.
> 
> 
> Sergej
> 
> 
> Von: Dirk Eddelbuettel 
> Gesendet: Mittwoch, 3. Mai 2023 16:22:09
> An: Ruff, Sergej
> Cc: Duncan Murdoch; Ivan Krylov; r-package-devel@r-project.org
> Betreff: Re: [R-pkg-devel] How to declare Bioconductor Dependencies in the 
> Description File of my R Package
> 
> 
> Sergej,
> 
> Please consider:
> 
>  - there are nearly 20k CRAN packages
> 
>  - all of them are mirrored at https://github.com/cran so you can browse
> 
>  - pick any one 'heavy' package you like, Seurat is a good example; there
>are other examples in geospatial or bioinformatics etc
> 
>  - you can browse _and search_ these to your heart's content
> 
> Here is an example of mine. In RcppArmadillo, years ago we (thanks to fine
> Google Summer of Code work by Binxiang Ni) added extended support for sparse
> matrices pass-through / conversion from R to C++ / Armadillo and back. That
> is clearly an optional feature as most uses of (Rcpp)Armadillo use dense
> matrices. So all code and test code is _conditional_.  File DESCRIPTION has
> 
>   Suggests: [...], Matrix (>= 1.3.0), [...], reticulate, slam
> 
> mostly for tests. I.e., we have very little R code: in one single file,
> R/SciPy2R.R, we switched to doing this via reticulate and open the function
> with
> 
>if (!requireNamespace("reticulate", quietly=TRUE)) {
>stop("You must install the 'reticulate' package (and have SciPy).", 
> call.=FALSE)
>}
> 
> after an actual deprecation warning (as there was a scipy converter once).
> 
> Similarly, the testsuites in inst/tinytests/* have several
> 
>if (!requireNamespace("Matrix", quietly=TRUE)) exit_file("No Matrix 
> package")
> 
> as well as
> 
>if (!requireNamespace("reticulate", quietly=TRUE)) exit_file("Package 
> reticulate missing")
> 
>if (!packageVersion("reticulate") >= package_version("1.14"))
>exit_file("SciPy not needed on newer reticulate")
> 
> and tests for slam (another sparse matrix package besides the functionality
> in Matrix).
> 
> Hopefully this brief snapshot gives you an idea.  There are (likely!!)
> thousands of examples you can browse, and I am sure you will find something.
> If you have further (concrete) questions please do not hesitate to use the
> resource of this list.
> 
> Cheers (or I should say "mit Braunschweiger Gruessen nach Hannover),
> 
> Dirk
> 
> --
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 

Re: [R-pkg-devel] 'Default' macos (x86) download URL now gone?

2023-04-24 Thread Simon Urbanek
Dirk,

thanks - the problem is that there has not been a single installer package (for 
several years now), so that URL is ambiguous. Whether the missing link is a good 
or a bad thing depends on how it is used. I would argue that any link to that URL 
is inherently bad, because there is no way of knowing that the link works for a 
particular system - that's why I originally removed it with the R 4.3.0 
release. I have restored it now, making it point to the R 4.3.0 arm64 release, 
since that is arguably the closest to a single "latest R". R releases have not 
been stored in /bin/macosx since 2015, so anyone using a link there is asking 
for trouble.

For any CI I would strongly recommend using the "last-success" links: 
https://mac.r-project.org/big-sur/last-success/ - in particular the .xz versions, 
as they are specifically designed to be used by CI (small download, fast and 
localized install).

Cheers,
Simon


> On 24/04/2023, at 3:25 AM, Dirk Eddelbuettel  wrote:
> 
> 
> The URL ${CRAN}/bin/macosx/R-latest.pkg is in fairly widespread use. A quick
> Google query [1] reveals about 1.1k hits. And it happens to be used too in a
> CI job a colleague noticed failing yesterday.
> 
> The bin/macosx/ page now prominently displays both leading flavours
>  R-4.3.0-arm64.pkg
>  R-4.3.0-x86_64.pkg
> which makes sense given the architecture choices. We can of course update the
> CI script, and likely will.
> 
> But given that this was apparently a somewhat widely-used URL to fetch R on
> macOS, may I suggest that the convenience link be reestablished as a courtesy?
> 
> Best,  Dirk
> 
> https://github.com/search?q=macosx%2FR-latest.pkg=code
> 
> -- 
> dirk.eddelbuettel.com | @eddelbuettel | e...@debian.org
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] Discovering M1mac cowpads

2023-03-24 Thread Simon Urbanek
John,

you provide no details to go on, but generally the main difference is that 
arm64 uses 64-bit precision for long double (which is permitted by the C 
standard), while Intel uses 80 bits of precision (on systems that enable it). 
That leads to differences in results, e.g. when computing long sums:

set.seed(1); x=rnorm(1e6)

## Intel with extended precision
> sprintf("%a", sum(x))
[1] "0x1.7743176e2372bp+5"

## arm64
> sprintf("%a", sum(x))
[1] "0x1.7743176e23a33p+5"


For R you can get the same results on all platforms by using 
--disable-long-double which prevents the use of extended precision doubles in R 
- this is Intel with --disable-long-double:

> sprintf("%a", sum(x))
[1] "0x1.7743176e23a33p+5"
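A related check, not from the original message: whether a given build uses an extended-precision long double at all can be queried at run time.

```r
# Does this build of R have a long double type wider than double?
# Returns FALSE for builds configured with --disable-long-double
# (and on platforms where long double == double).
capabilities("long.double")

# When TRUE, .Machine describes it; e.g. x86 extended precision reports
# 64 mantissa bits:
.Machine$longdouble.digits
```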

Cheers,
Simon



> On Mar 25, 2023, at 8:03 AM, J C Nash  wrote:
> 
> Recently I updated my package nlsr and it passed all the usual checks and was
> uploaded to CRAN. A few days later I got a message that I should "fix" my
> package as it had failed in "M1max" tests.
> 
> The "error" was actually a failure in a DIFFERENT package that was used as
> an example in a vignette. I fixed it in my vignette with try(). However, I
> am interested in just where the M1 causes trouble.
> 
> As far as I can determine so far, for numerical computations, differences will
> show up only when a package is able to take advantage of extended precision
> registers in the IEEE arithmetic. I think this means that in pure R, it won't
> be seen. Packages that call C or Fortran could do so. However, I've not yet
> got a good handle on this.
> 
> Does anyone have some small, reproducible examples? (For me, reproducing so
> far means making a small package and submitting to macbuilder, as I don't
> have an M1 Mac.)
> 
> Cheers,
> 
> John Nash
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [R-pkg-devel] How to declare Bioconductor Dependencies in the Description File of my R Package

2023-03-17 Thread Simon Urbanek
Packages can only be installed from the repositories listed, and only CRAN is 
the default, so only CRAN packages are guaranteed to work. I'd like to add that 
the issue below is exactly why, personally, I would not recommend using a 
Bioconductor package as a strong dependency (Imports/Depends), because that makes 
the package unusable for most users, as it cannot be installed (without extra 
steps they don't know about) since the dependency doesn't exist on CRAN.

If your users are already Bioconductor users by virtue of the package's 
application, then they already know about this, so it's fine; but then you are 
probably better off having your package on Bioconductor as part of the 
ecosystem, which is much more streamlined and coordinated.

If it is only suggested (weak dependency) for some optional functionality, then 
your package will work even if the dependency is not installed, so all is well. 
And if the optional Bioconductor functionality is used, you can direct the user 
to instructions explaining that Bioconductor is required for that - but the 
package has to do that; it is not anything automatic in R.
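The kind of function-level guard described above might look like this sketch; "SomeBiocPkg" and doSomething() are placeholder names, not real packages or functions:

```r
# Guarding optional functionality behind a suggested Bioconductor package;
# "SomeBiocPkg" and doSomething() are hypothetical names.
fancy_feature <- function(x) {
  if (!requireNamespace("SomeBiocPkg", quietly = TRUE)) {
    stop("This feature requires the Bioconductor package 'SomeBiocPkg'.\n",
         "Install it with BiocManager::install(\"SomeBiocPkg\").",
         call. = FALSE)
  }
  SomeBiocPkg::doSomething(x)  # only reached when the package is installed
}
```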

Cheers,
Simon


> On Mar 18, 2023, at 1:29 AM, Ruff, Sergej  
> wrote:
> 
> Really? What is the problem I have when the dependencies aren't 
> pre-installed? I thought the problem would be solved once my package is 
> available on CRAN.
> 
> 
> Here is a recent question I had regarding the same issue:
> 
> 
> I am currently working on a r-package. I would like to submit my r package to 
> CRAN, but I have a question regarding dependency installations on CRAN.
> 
> I have almost finished my package, added the required dependencies to the 
> NAMESPACE and DESCRIPTION files as Imports, and get no errors or warnings
> 
> when running the check in RStudio. The package runs on the PC where I've 
> built the package, but when I install the package on a PC where the 
> dependencies
> 
> are not preinstalled, I get the following error:
> 
> ERROR:
> 
> dependencies 'depth', 'geometry' are not available for package 'packagename'
> * removing 
> 'C:/Users/156932/AppData/Local/Programs/R/R-4.2.1/library/packagename'
> Warning in install.packages : installation of package ‘packagename’ had 
> non-zero exit status
> 
> 
> The problem is that a local installation of my package (via USB stick, for 
> example) can't install the dependencies from CRAN.
> 
> The package works perfectly fine, if the dependencies are preinstalled.
> 
> Now I don't want to submit my package to CRAN if the end user gets the same 
> error message when installing my package.
> 
> Question: After I submit my package to CRAN, will CRAN install dependencies 
> automatically (via "install.packages()"), resolving the issue I have right 
> now?
> 
> Or do I have to modify the R-package or the Description-file to make sure my 
> Package can install dependencies?
> 
> I provided the dependencies to the NAMESPACE-file as @ImportFrom via the 
> devtools::document()-function. I added the dependencies to the 
> DESCRIPTION-file via usethis::use_package("x",type="Imports"). The 
> Description looks like this:
> 
> License: GPL (>= 3)
> Encoding: UTF-8
> LazyData: true
> RoxygenNote: 7.2.3
> Imports:
>depth,
>geometry,
>graphics,
>grDevices,
>MASS,
>mvtnorm,
>nlme,
>rgl,
>stats
> 
> 
> 
> So I thought all dependencies would install automatically from CRAN? Is that 
> not the case?
> 

__
R-package-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-package-devel


Re: [Rd] nightly r-devel.pkg builds failing since Jan 15

2023-02-26 Thread Simon Urbanek
Gabe,

thanks, this was fallout from a power outage due to the cyclone. Although 
all systems were back up, svn lock files led to an early abort in the 
update step. It should be fixed now.

Cheers,
Simon


> On 27/02/2023, at 10:40 AM, Gabriel Becker  wrote:
> 
> Hi all,
> 
> It looks like for Intel Macs (i.e. High Sierra) the nightly build of R-devel
> has been failing continuously since Jan 16th:
> 
> https://mac.r-project.org/high-sierra/last-success/
> 
> Is this a known issue? I didn't see any way to get at the relevant logs (of
> the .pkg creation step), as the .tar.gz step succeeded.
> 
> Also, the framework (at least the non-pkg'ed one that's in the .tar.gz file)
> is unsigned, meaning the OS gives you grief about opening it.
> 
> Finally, it seems now that even the 4.2 branch is failing in the make stage.
> 
> Best,
> ~G
> 

__
R-devel@r-project.org mailing list
https://stat.ethz.ch/mailman/listinfo/r-devel

