Peter, 

Thanks for the quick reply. Saved me a lot of google/grep time. 

Here's what I did to work around it (added it to the Trello card as a comment). 
Anyone got a more thorough/elegant solution? 

Regards,
Curtis


Workaround:

1. Back up migrated_tools_conf.xml, shed_tool_conf.xml and tool_conf.xml
2. Edit each to remove all sections
3. Edit static/welcome.html to notify users with a big orange announcement

That gives you an empty tool pane (roughly the shell session sketched below), 
though it doesn't prevent anyone from hitting "re-run" on an existing dataset, 
or doing something through the API.
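
In case it helps anyone else, the shell side of the workaround looks roughly
like this. It's only a sketch: the paths assume a stock galaxy-dist checkout
with the config files in the server root, and the banner wording/styling is
just an illustration, so adjust to your install.

    cd /path/to/galaxy-dist    # hypothetical install path

    # Step 1: keep pristine copies to restore after the maintenance window
    for f in migrated_tools_conf.xml shed_tool_conf.xml tool_conf.xml; do
        cp -p "$f" "$f.pre-maintenance"
    done

    # Step 2: gut each file down to an empty <toolbox> root element by hand
    # (shed_tool_conf.xml should keep its tool_path attribute), so that
    # e.g. tool_conf.xml becomes just:
    #
    #   <?xml version="1.0"?>
    #   <toolbox></toolbox>

    # Step 3: back up the welcome page, then add an announcement near the
    # top of its <body>:
    cp -p static/welcome.html static/welcome.html.pre-maintenance
    #
    #   <div style="background:orange; padding:1em; font-weight:bold;">
    #     Cluster maintenance in progress: running new jobs is disabled.
    #     Existing histories and datasets can still be browsed and
    #     downloaded.
    #   </div>

Restart the Galaxy process afterwards so the empty toolbox takes effect;
restoring is just copying the .pre-maintenance files back over and
restarting again.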


-----Original Message-----
From: Peter Cock [mailto:p.j.a.c...@googlemail.com] 
Sent: Thursday, May 01, 2014 1:03 PM
To: Curtis Hendrickson (Campus)
Cc: galaxy-dev PSU list-serv (galaxy-dev@lists.bx.psu.edu)
Subject: Re: [galaxy-dev] Cleanest way to disable job-runs during a long 
cluster maintenance window?

Hi Curtis,

This used to be easy - there was a setting on the admin pages to lock new job 
submission, which I would use before a planned shutdown.

See https://trello.com/c/BTDaHy9m/1269-re-institute-job-lock-feature
(and vote on it to help prioritize the issue).

Right now I'm not sure if there is an easy alternative :(

Peter


On Thu, May 1, 2014 at 6:45 PM, Curtis Hendrickson (Campus) <curt...@uab.edu> 
wrote:
> Folks
>
> The cluster under our galaxy server will be down for a WEEK, so it can 
> be relocated.
>
> The fileserver that hosts the galaxy datasets, the galaxy server and 
> the galaxy database will stay up.
>
> What’s the most elegant way to disable people’s ability to run new 
> jobs, without blocking them from browsing existing histories and 
> downloading/viewing their data?
>
> Our configuration:
>
>     External Auth:
>         universe_wsgi.ini:use_remote_user = True
>     Job runner: SGE
>         universe_wsgi.ini:start_job_runners = drmaa
>
> and the new_file_path and job_working_directories are on a fileserver
> that will be down:
>
> grep -H /scratch/share/galaxy universe_wsgi.ini
> universe_wsgi.ini:file_path = /scratch/share/galaxy/staging  # available
> universe_wsgi.ini:new_file_path = /scratch/share/galaxy/temp  # down
> universe_wsgi.ini:job_working_directory = /scratch/share/galaxy/job_working_directory  # down
>
> We’d rather have something nicer than just letting the jobs go to the
> queue and never get run.
>
> Regards,
>
> Curtis

___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/
