Hi,
It seems like you got me! Yay! :D I was already working on my fork
for Singularity,
since we don't need any sudo rights for the galaxy user (set up manually
by the admin, with the paths hard-coded for each tool).
I will test it out and let you know!
Best regards from Greece,
Nikos
Has anyone ever encountered an error such as this when running Galaxy
with Slurm and submitting files for upload and unzip?
slurmstepd: get_exit_code task 0 died by signal
Dear list,
I have two questions for all DRMAA users. Here is the first one.
I was checking how our queuing system (univa GridEngine) and Galaxy
react if jobs are submitted that exceed run time or memory limits.
I found out that the python drmaa library cannot query the job status
after the
Dear list,
I was thinking about implementing the job resubmission feature for drmaa.
I hope that I can simplify the job configuration for our installation
(and probably others as well) by escalating through different queues (or
resource limits). Thereby I hope to reduce the number of special
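The escalation idea can be sketched in plain Python. This is a minimal sketch only: the queue names, memory limits, and the `submit` callable are all hypothetical stand-ins, not Galaxy's actual resubmission API (which is configured in the job configuration, not coded by hand):

```python
# Hypothetical escalation ladder: try small destinations first, resubmit
# to a bigger one when the job hits a run-time or memory limit.
ESCALATION = [
    ("short", "4G"),
    ("medium", "16G"),
    ("long", "64G"),
]

def run_with_escalation(submit, job):
    """Try each destination in order; resubmit on resource failure.

    `submit` is a caller-supplied callable that returns True on success
    and False when the job exceeded the destination's limits.
    """
    for queue, mem in ESCALATION:
        if submit(job, queue=queue, mem=mem):
            return queue  # succeeded on this destination
    raise RuntimeError("job failed on all destinations")
```

The point of the ladder is exactly what is described above: one generic job destination per rung instead of a special-cased destination per tool.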
Hi
Same for me.
But it seems to be related to tools installed via the ToolShed (i.e.
bedtools, EMBOSS, the BLAST+ suite...).
My own tools seem to be ok.
Just like you Edgar, nothing in the log...
Fred
From: galaxy-dev [mailto:galaxy-dev-boun...@lists.galaxyproject.org] On behalf
of
Hello Frederic,
I actually just fixed the problem not 5 minutes ago...
In your file: /etc/httpd/vhosts.d/galaxy_prod.conf
Add the following line: AllowEncodedSlashes NoDecode
Please let me know what you think...
Regards,
Edgar Fernandez
System Administrator (Linux)
Technologies de
Hi Edgar, Frederic,
if you are using Apache as proxy for your Galaxy server, then you should
probably add
AllowEncodedSlashes NoDecode
to your Apache config, see
https://galaxyproject.org/admin/config/apache-proxy/#allow-encoded-slashes-in-urls
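For reference, the directive sits in the Galaxy virtual host configuration, roughly like this (a sketch only; the ServerName and the backend address are placeholders, adjust to your own setup):

```
<VirtualHost *:80>
    ServerName galaxy.example.org

    # Do not decode %2F etc. in request URLs before proxying -
    # URLs for tools installed from the ToolShed contain encoded slashes.
    AllowEncodedSlashes NoDecode

    # Forward requests to the Galaxy server (placeholder address);
    # nocanon keeps mod_proxy from re-canonicalizing the URL.
    ProxyPass / http://127.0.0.1:8080/ nocanon
    ProxyPassReverse / http://127.0.0.1:8080/
</VirtualHost>
```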
Cheers,
Nicola
On 15/06/17 15:35, SAPET,
Matthias,
We have had this problem on our SGE-based installation for years. We referred
to it as the "green screen of death" - as it would allow a biologist to
continue analysis using output that was partial, at best, often resulting in
seemingly successful completion of the entire analysis,
I would guess something about the slurm job exceeded some allocated resource
(memory, cpu, time) and slurm killed the job.
David Hoover
HPC @ NIH
> On Jun 15, 2017, at 11:28 AM, Nate Coraor wrote:
>
> Hi Evan,
>
> Are there any other details logged for this job?
>
> --nate
If slurm accounting is configured, `sacct` might reveal more. You might
also check the slurmd logs on the execution host where the job ran.
--nate
On Thu, Jun 15, 2017 at 11:47 AM, Hoover , David (NIH/CIT) [E] <
hoove...@helix.nih.gov> wrote:
> I would guess something about the slurm job
Hello,
Nice !
Thank you for the help.
Regards,
Fred
From: Fernandez Edgar [mailto:edgar.fernan...@umontreal.ca]
Sent: Thursday, June 15, 2017 16:41
To: SAPET, Frederic ; galaxy-...@bx.psu.edu
Subject: RE: [galaxy-dev] new installation galaxy-17.05 - Uncaught error.
Hello,
Thank you as well,
Is that a new behavior with 17.05? I have been using the Apache proxy for a
while without any problem, with previous versions of Galaxy.
Regards,
Fred
From: Nicola Soranzo [mailto:nicola.sora...@gmail.com] On behalf of Nicola
Soranzo
Sent: Thursday, June 15, 2017 16:44
To:
According to the same documentation, you need it only if using the
mod_proxy Apache module with HTTP transport, but I'm using nginx, not
Apache, so I'm not the best person to ask.
Cheers,
Nicola
On 15/06/17 15:55, Fernandez Edgar wrote:
Thanks Nicola for the confirmation.
Would you mind
Hi Matthias,
I can't speak for GridEngine's specific behavior because I haven't used it
in a long time, but it's not surprising that jobs "disappear" as soon as
they've exited. Unfortunately, Galaxy uses periodic polling rather than
waiting on completion. We'd need to create a
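The polling model can be illustrated with a small generic sketch. Everything here is hypothetical, not Galaxy's actual runner code: `get_status` stands in for whatever the DRM's query API provides (e.g. a DRMAA job status call), and the `KeyError` stands in for the query failing once the DRM has already reaped the finished job, which is exactly the "disappearing job" problem described above:

```python
import time

def poll_until_done(get_status, interval=0.0, max_polls=100):
    """Poll `get_status` until it reports a terminal state (or fails).

    Periodic polling instead of blocking on completion: if the DRM
    forgets finished jobs between polls, the final state is lost.
    """
    for _ in range(max_polls):
        try:
            state = get_status()
        except KeyError:
            # The DRM no longer knows the job: it finished and was
            # reaped before we could observe its final state.
            return "unknown"
        if state in ("done", "failed"):
            return state
        time.sleep(interval)
    return "timeout"
```

The `"unknown"` branch is the awkward case: the poller cannot tell a cleanly finished job from one killed for exceeding its limits, which is why a wait-on-completion (or accounting-based) approach would be preferable.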
Hi Evan,
Are there any other details logged for this job?
--nate
On Thu, Jun 15, 2017 at 9:20 AM, evan clark wrote:
> Has anyone ever encountered an error such as this when running Galaxy with
> Slurm and submitting files for upload and unzip?
>
> slurmstepd: get_exit_code
Hi Fred,
It was already needed for data manager tools in 17.01; it seems it's
needed for all tools installed from the ToolShed since 17.05, but I use
nginx, not Apache.
Cheers,
Nicola
On 15/06/17 15:57, SAPET, Frederic wrote:
Hello,
Thank you as well,
Is that a new behavior with 17.05?