Liisa Koski wrote:
> Hi,
> Yesterday I ran the cleanup_datasets.py script as follows:
> 
> Deleting Userless Histories
> python cleanup_datasets.py universe_wsgi.ini -d 10 -1
> 
> Purging Deleted Histories
> python cleanup_datasets.py universe_wsgi.ini -d 10 -2 -r
> 
> Purging Deleted Datasets
> python cleanup_datasets.py universe_wsgi.ini -d 10 -3 -r
> 
> Purging Library Folders
> python cleanup_datasets.py universe_wsgi.ini -d 10 -5 -r
> 
> Purging Libraries
> python cleanup_datasets.py universe_wsgi.ini -d 10 -4 -r
> 
> Deleting Datasets / Purging Dataset Instances
> python cleanup_datasets.py universe_wsgi.ini -d 10 -6 -r 
> 
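For reference, the same sequence is sometimes wrapped in a single maintenance
script, roughly as sketched below. This is only a sketch: it assumes the
commands are run from the Galaxy root directory and that universe_wsgi.ini is
the active config; the flags are exactly the ones shown above.

    #!/bin/sh
    # Run the Galaxy dataset cleanup steps in order.
    # Assumptions (not from the original mail): working directory is the
    # Galaxy root, and universe_wsgi.ini is the config file in use.
    CONFIG=universe_wsgi.ini
    DAYS=10

    # 1. Delete userless histories
    python cleanup_datasets.py $CONFIG -d $DAYS -1
    # 2. Purge deleted histories
    python cleanup_datasets.py $CONFIG -d $DAYS -2 -r
    # 3. Purge deleted datasets
    python cleanup_datasets.py $CONFIG -d $DAYS -3 -r
    # 4. Purge deleted library folders
    python cleanup_datasets.py $CONFIG -d $DAYS -5 -r
    # 5. Purge deleted libraries
    python cleanup_datasets.py $CONFIG -d $DAYS -4 -r
    # 6. Delete datasets / purge dataset instances
    python cleanup_datasets.py $CONFIG -d $DAYS -6 -r
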
> This morning I noticed that a number of workflows were either stuck at a 
> certain step (i.e., job running) or the step was grey (waiting in queue) even 
> though our cluster has free nodes. If I start a new workflow, it completes 
> fine; it's just the 19 histories that were running yesterday that are stuck. 
> Did I do something wrong with the cleanup? Is there a way to restart these 
> stuck histories without having to restart the entire workflow? 

Hi Liisa,

I don't think the cleanup would have been related.  If you have
enable_job_recovery = True in your configuration, then since you're using
a cluster, you can restart the Galaxy job runner process and it should
resume those jobs and finish them.
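
For example, a minimal sketch of checking that setting and restarting Galaxy,
assuming a single-process instance started as a daemon via run.sh (adjust this
to however you actually start your job runner process):

    # Confirm job recovery is enabled in the config.
    grep enable_job_recovery universe_wsgi.ini
    # should show: enable_job_recovery = True

    # Restart Galaxy; on startup it should pick up jobs that were left
    # in the running or queued state and resume them.
    sh run.sh --stop-daemon
    sh run.sh --daemon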

--nate

> 
> Thanks in advance,
> Liisa
> 


___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/
