Hi Valerie,
Excellent. In addition to collecting log outputs, I have a few more
suggestions that may be worth considering:
- Collecting the results from parallel computing tasks directly in an R
object is a great convenience, which I like a lot. However, in the
context of slow computations
On Thu, Nov 20, 2014 at 12:17 PM, Thomas Girke thomas.gi...@ucr.edu wrote:
Hi Valerie, Michel and others,
Finally, I freed up some time to revisit this problem. As it turns out,
it is related to the use of a module system on our cluster. If I add in
the template file for Torque (torque.tmpl) an explicit module load line
for the specific R version I am using on the
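A sketch of what such a template addition might look like (illustrative only: the module name R/3.1.1 and the surrounding directives are assumptions; the actual torque.tmpl shipped with BatchJobs uses brew placeholders like those shown but differs in detail):

```sh
#!/bin/bash
## Torque/PBS directives, filled in by BatchJobs via brew placeholders:
#PBS -N <%= job.name %>
#PBS -o <%= log.file %>
#PBS -j oe

## Assumed module name: load the same R version the master session uses.
## Without this, workers on a module-managed cluster may start a different
## R (or none at all), and the job dies before any error can be sent back.
module load R/3.1.1

R CMD BATCH --no-save --no-restore "<%= rscript %>" /dev/stdout
```

The key point is only the `module load` line; the rest of the template stays as generated for the cluster.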
Hi Michel,
In BiocParallel 0.99.24 .convertToSimpleError() now checks for NULL and
converts to NA_character_.
I'm testing with BatchJobs 1.4, BiocParallel 0.99.24 and SLURM. I'm
still not getting an informative error message:
xx <- bplapply(1:2, FUN)
SubmitJobs
This was a bug in BatchJobs::waitForJobs(). We now throw an error if
jobs disappear due to a faulty template file. I'd appreciate it if you
could confirm that this is now correctly caught and handled on your
system. I furthermore suggest replacing NULL with NA_character_ in
.convertToSimpleError().
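A minimal sketch of the suggested guard (illustrative; the real BiocParallel helper is internal and differs in detail). A worker that never ran sends back NULL instead of a condition object, and coercing NULL into an error loses all information, so NA_character_ is substituted first:

```r
.convertToSimpleError <- function(x) {
    ## A job that never ran returns NULL rather than a condition;
    ## simpleError() expects a character message, so substitute
    ## NA_character_ to signal "no message available".
    if (is.null(x))
        x <- NA_character_
    if (!inherits(x, "error"))
        x <- simpleError(x)
    x
}

e <- .convertToSimpleError(NULL)
conditionMessage(e)  # NA_character_, instead of an opaque failure
```

The NA message still tells the user nothing ran, but it survives the round trip instead of breaking the coercion itself.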
Hi,
Martin and I looked into this a bit. It looks like a problem with
handling an 'undefined error' returned from a worker (i.e., job did not
run). When there is a problem executing the tmpl script, no error message
is sent back. The NULL is coerced to simpleError and becomes a problem
Hi Thomas,
Just wanted to let you know I saw this and am looking into it.
Valerie
On 09/20/2014 02:54 PM, Thomas Girke wrote:
Hi Martin, Michael and Vincent,
If I run the following code with the release version of BiocParallel, then it
works (took me some time to actually realize that), but