Hello Anna,
The speed of parallel computing depends on many factors. To avoid any
potential confounders, please try to use this code for timing (assuming you
still have all the variables from your example):
```
parallel_param <- SnowParam(workers = ncores, type = "SOCK", tasks =
```
Hi Hadley,
On 8 August 2023 at 08:34, Hadley Wickham wrote:
| Do you think it's worth also/instead considering a fix to S4 to avoid
| this caching issue in future R versions?
That is somewhat orthogonal to my point about "'some uses' of the 20-year-old
S4 system (which, as we know, is fairly widely
Hi Gabriel,
Nice idea! I have encountered this problem several times; while better
management of libraries could probably have avoided the issue, this is an
elegant solution.
How would this proposal handle the arguments mask.ok, include.only and exclude?
For example, in the (edge) case where two packages
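For reference, these are the attach-time filters that library() has offered since R 3.6.0; a minimal sketch, where dplyr and MASS are stand-in packages of my own choosing:

```r
# Attach only selected exports, so nothing else from the package
# can mask functions already on the search path:
library(dplyr, include.only = c("filter", "select"))

# Attach everything except a conflicting name:
library(MASS, exclude = "select")

# Declare specific maskings as known and acceptable:
library(dplyr, mask.ok = c("filter", "lag"))
```

The interesting edge case is how a Renviron/profile-level proposal would compose with these per-call filters when both are in play.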
Hi Dirk,
Do you think it's worth also/instead considering a fix to S4 to avoid
this caching issue in future R versions?
(This is top of mind for me as we consider the design of S7, and I
recently made a note to ensure we avoid similar problems there:
My motivation for using distributed memory was that my package is also
accessible on Windows. Is it better to use shared memory as default but
check the user's system and then switch to socket only if necessary?
Regarding the real data: I have 68 samples (rows) of methylation EPIC array
data
Dear Anna,
According to the documentation of "BiocParallelParam", SnowParam() is a
subclass suitable for distributed memory (e.g. cluster) computing. If you're
running your code on a simpler machine with shared memory (e.g. your PC),
you're probably better off using MulticoreParam() instead.
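A minimal sketch of that platform-dependent fallback, using a hypothetical helper choose_param() (the worker count and the sqrt workload are my own illustration, not from the thread):

```r
library(BiocParallel)

# Hypothetical helper: use forked, shared-memory workers where
# available, and fall back to socket workers on Windows, which
# lacks fork().
choose_param <- function(workers = 2L) {
  if (.Platform$OS.type == "windows") {
    SnowParam(workers = workers, type = "SOCK")
  } else {
    MulticoreParam(workers = workers)
  }
}

res <- bplapply(1:8, sqrt, BPPARAM = choose_param(4L))
```

This keeps the package usable on Windows while giving Unix-alike users the cheaper shared-memory backend by default.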
Dear Martin,
thank you very much for the quick response.
I will apply your advice and add the package to "Suggests".
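For anyone following along, the corresponding DESCRIPTION change would look something like this (a sketch; given the check error comes from BiocGenerics:::testPackage() failing to load RUnit):

```
Suggests: RUnit, BiocGenerics
```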
Kind regards
Maren
--
NGS Integrative Genomics (NIG), Core Unit
Department of Human Genetics
University Medical Center
Simon,
This is still an issue for arm64. I uploaded tiledb and RQuantLib yesterday;
both already have built binaries for macOS (thank you!), but only the x86_64
ones are on the results page. Can you take another peek at this?
Thanks so much, Dirk
--
dirk.eddelbuettel.com | @eddelbuettel |
Hi,
On Tue, 8 Aug 2023 at 12:32, Sitte, Maren
wrote:
> Dear Bioconductor Developers,
>
>
> I received an email that my package "pwOmics" gets an error in the check
> under Linux.
> Install and build gets an OK, but check shows an error.
>
> I had a look and the problem seems to be:
>
> >
But why time methods that the author (me!) has been telling the community for
years have updated replacements? Especially as optimx::optimr() uses the same
syntax as optim() and gives access to a number of solvers, both production and
didactic. This set of solvers is being improved or added to regularly, with a
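As a concrete illustration of the drop-in syntax (the Rosenbrock test function and starting point are my own choices, and the extra method name depends on the installed optimx version):

```r
library(optimx)

# Classic Rosenbrock banana function.
rosen <- function(x) 100 * (x[2] - x[1]^2)^2 + (1 - x[1])^2

# optim() and optimr() take the same arguments...
fit_base <- optim(c(-1.2, 1), rosen, method = "BFGS")
fit_new  <- optimr(c(-1.2, 1), rosen, method = "BFGS")

# ...but optimr() also exposes solvers optim() lacks, e.g. the
# didactic Hooke-Jeeves implementation:
fit_extra <- optimr(c(-1.2, 1), rosen, method = "hjn")
```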
Dear all,
This is not a question.
I just put a blog post online with a lengthy overview of how I have
developed and maintained the R package GGIR as available on CRAN for
the past 10 years:
https://www.accelting.com/updates/10th-anniversary-of-ggir/
The package itself is probably not of interest
Hi all!
I'm switching from the base R *parallel* package to *BiocParallel* for my
Bioconductor submission and I have two questions. First, I wanted advice on
whether I've implemented load balancing correctly. Second, I've noticed
that the running time is about 15% longer with BiocParallel. Any
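On the load-balancing point, the usual BiocParallel knob is the tasks argument to the param constructor; a sketch under assumptions of my own (worker count and toy workload are illustrative):

```r
library(BiocParallel)

x <- 1:100

# tasks = 0 (the default) splits x evenly across workers up front;
# tasks = length(x) sends one element at a time, so a worker that
# finishes early immediately picks up the next job -- dynamic load
# balancing at the cost of extra communication overhead.
p <- SnowParam(workers = 4, tasks = length(x))
res <- bplapply(x, function(i) i^2, BPPARAM = p)
```

That communication overhead is also one plausible source of a modest slowdown relative to base parallel when the individual jobs are cheap.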
Dear Bioconductor Developers,
I received an email that my package "pwOmics" gets an error in the check under
Linux.
Install and build gets an OK, but check shows an error.
I had a look and the problem seems to be:
> BiocGenerics:::testPackage("pwOmics")
Error in library("RUnit", quietly =
Thank you all very much for the suggestions; after testing, each of them would
be a viable solution in certain contexts. Code for benchmarking:
```
# preliminaries
install.packages("microbenchmark")
library(microbenchmark)
data <- new.env()
data$ans2 <- 0
data$ans3 <- 0
data$i <- 0
data$fun.value
```