Folks,
The cluster under our galaxy server will be down for a WEEK, so it can be
relocated.
The fileserver that hosts the Galaxy datasets, the Galaxy server, and the Galaxy
database will stay up.
What's the most elegant way to disable people's ability to run new jobs
without blocking them from browsing existing histories and downloading/viewing
their data?
Our configuration:
External Auth
universe_wsgi.ini:use_remote_user = True
Job runner: SGE
universe_wsgi.ini:start_job_runners = drmaa
The new_file_path and job_working_directory are on the
fileserver that will be down:
grep -H /scratch/share/galaxy universe_wsgi.ini
universe_wsgi.ini:file_path = /scratch/share/galaxy/staging # available
universe_wsgi.ini:new_file_path = /scratch/share/galaxy/temp # down
universe_wsgi.ini:job_working_directory = /scratch/share/galaxy/job_working_directory # down
We'd prefer something nicer than just letting jobs pile up in the queue and
never run.
Regards,
Curtis
___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client. To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
http://lists.bx.psu.edu/
To search Galaxy mailing lists use the unified search at:
http://galaxyproject.org/search/mailinglists/