We initially had issues with Collections containing thousands of datasets
that were related to the limit on the number of jobs in the Slurm queue -
substantially increasing this limit fixed our issue.
Cheers,
Mo Heydarian
On Thu, Aug 30, 2018 at 11:18 AM Peter Cock
wrote:
> There is a sweet spot for splitting your BLAST query fasta file ...
Thanks this helps a lot!
Cheers Jochen
On 30.08.2018 17:14, Peter Cock wrote:
> There is a sweet spot for splitting your BLAST query fasta file
> by sequence - one big file with 25000 sequences is not great,
> but one sequence per file is the worst possible option.
>
> This is due to all the extra overheads ...
There is a sweet spot for splitting your BLAST query fasta file
by sequence - one big file with 25000 sequences is not great,
but one sequence per file is the worst possible option.
This is due to all the extra overheads: you would have 25000
jobs submitted to the cluster, each of which would load the
BLAST database separately.
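The "sweet spot" splitting described above can be sketched as a small script that breaks a multi-sequence FASTA file into chunks of a few hundred sequences each (the function name, chunk size, and plain-text parsing here are my own illustration, not a Galaxy tool):

```python
def split_fasta(path, out_prefix, chunk_size=250):
    """Split a multi-sequence FASTA file into chunks of at most
    chunk_size sequences each, writing out_prefix_0.fasta,
    out_prefix_1.fasta, ... Returns the list of chunk filenames."""
    records, record = [], []
    with open(path) as handle:
        for line in handle:
            if line.startswith(">"):
                # A new header starts the next record.
                if record:
                    records.append("".join(record))
                record = [line]
            else:
                record.append(line)
    if record:
        records.append("".join(record))

    names = []
    for i in range(0, len(records), chunk_size):
        name = f"{out_prefix}_{i // chunk_size}.fasta"
        with open(name, "w") as out:
            out.write("".join(records[i:i + chunk_size]))
        names.append(name)
    return names
```

With 25000 query sequences and a chunk size of 250 this gives 100 cluster jobs instead of 25000, so each job does enough work to amortise the scheduling and database-loading overhead.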
If there are any limits, they would be down to the Galaxy admin's job
settings - something generic with collections.
Personally I've not done this - I tend to concatenate FASTA files
to make large files with multiple sequences instead.
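A minimal sketch of that concatenation step (the function name and newline handling are my own; any FASTA-aware tool would do the same):

```python
def concatenate_fasta(paths, out_path):
    """Merge many single-sequence FASTA files into one
    multi-sequence FASTA file."""
    with open(out_path, "w") as out:
        for path in paths:
            with open(path) as handle:
                text = handle.read()
                # Ensure each file ends with a newline so records
                # do not run together in the merged output.
                out.write(text if text.endswith("\n") else text + "\n")
```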
(And then we have the optional task splitting enabled so that Galaxy ...
Hi,
is there any limit on running BLAST jobs from a collection of single FASTA
files? I started a job but it does not get executed... it has just been
stuck submitting for about an hour.
Cheers Jochen
--
ETH Zurich
*Jochen Bick*
Animal Physiology
Institute of Agricultural Sciences
Postal address: Universitätstrasse