This is really odd. I see nothing in the job runner code that could cause
this behavior outside the context of a dataset being marked hidden as part
of a workflow, let alone anything DRM-specific. Are you rerunning an
existing job that was marked this way in a workflow? Does this happen if
you run new tools outside the context of workflows or past jobs?

Can you find the corresponding datasets via the history API or in the
database and check whether they indeed have visible set to False? That, I
believe, is what causes a dataset to appear "hidden".
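To make that check concrete, here is a minimal sketch of filtering the JSON that Galaxy's history contents API (GET /api/histories/<history_id>/contents) returns, keeping only entries whose "visible" flag is False. The sample payload below is illustrative, not real output from your instance:

```python
# Sketch: given the parsed JSON list from Galaxy's history contents API,
# return the datasets the UI treats as hidden (visible == False).

def hidden_datasets(contents):
    """Return (id, name) pairs for entries whose visible flag is False."""
    return [(d["id"], d["name"]) for d in contents if not d.get("visible", True)]

# Illustrative sample of the API response shape (not real data):
sample = [
    {"id": "abc123", "name": "FASTQ Groomer on data 1", "visible": False},
    {"id": "def456", "name": "Bowtie2 on data 2", "visible": True},
]

print(hidden_datasets(sample))
```

If the successful jobs show up here with visible set to False, that would confirm the runner (or something downstream of it) is flipping the flag rather than the UI misrendering the history.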
On Fri, Nov 8, 2013 at 11:40 AM, Andrew Warren <anwar...@vbi.vt.edu> wrote:
> Hello all,
> We are in the process of switching from SGE to SLURM for our Galaxy setup.
> We are currently experiencing a problem where jobs that are completely
> successful (no text in their stderr file and the proper exit code) are being
> hidden after the job completes. Any job that fails or has some text in the
> stderr file is not hidden (note: hidden not deleted; they can be viewed by
> selecting 'Unhide Hidden Datasets').
> Our drmaa.py is at changeset 10961:432999eabbaa
> Our drmaa egg is at drmaa = 0.6
> And our SLURM version is 2.3.5
> And we are currently passing no parameters for default_cluster_job_runner =
> We have the same code base on both clusters but only observe this behavior
> when using SLURM.
> Any pointers or advice would be greatly appreciated.
> Please keep all replies on the list by using "reply all"
> in your mail client. To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> To search Galaxy mailing lists use the unified search at: