Ravi,

Do your Galaxy jobs submit to the cluster as the Galaxy user or as the end user?

Thanks,

Ilya

Sent from my iPhone

On Jun 15, 2011, at 2:43 PM, "Ravi Madduri" <madd...@mcs.anl.gov> wrote:

Hi
We have been able to get the following setup working pretty reliably:

  *   Automated deployment of Galaxy clusters on EC2. These clusters have the 
following setup:
     *   An NFS server providing global directories for home directories, 
software, and scratch space. We are also experimenting with GlusterFS
     *   An NIS server providing an authentication domain within the cluster. 
The list of users must be specified when the cluster is created
     *   A GridFTP server, which is automatically registered as a Globus Online 
(www.globusonline.org) endpoint for high-speed, reliable data transfer
     *   The ability to script transfer tasks as part of the workflow using 
Globus Online Galaxy tools (to get large quantities of data in and out of the 
Galaxy EC2 cluster reliably)
     *   A Condor pool that runs jobs under users' own credentials (using user 
accounts provisioned in the EC2 cluster); a minimal submit-file sketch follows 
this list
     *   A Galaxy server with the modifications outlined above, which we plan 
to contribute back to the Galaxy community soon
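
For concreteness, a Condor submit description of the sort the pool runs might 
look roughly like the one below. This is only an illustrative sketch; the 
executable, file names, and paths are placeholders, not our actual tool 
wrappers.

    # Illustrative Condor submit description (placeholder paths)
    universe    = vanilla
    executable  = /nfs/software/bin/run_tool.sh
    arguments   = input.fa output.txt
    output      = job.out
    error       = job.err
    log         = job.log
    # Shared NFS directories mean Condor's own file transfer is not needed
    should_transfer_files = NO
    queue

Because condor_submit is run as the submitting user, whose account exists on 
every node via NIS, the job executes under that user's identity rather than as 
a shared Galaxy account.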

I gave a talk at the recent Galaxy users conference about this setup. You can 
find slides here: 
PowerPoint: http://wiki.g2.bx.psu.edu/GCC2011?action=AttachFile&do=get&target=GalaxyGridFTPCondorAndGlobusOnline.pptx

Please let me know if you are interested and would like more information.

Regards
On Jun 15, 2011, at 5:28 AM, Marina Gourtovaia wrote:

There are two slightly different problems here. One is with the file system, 
and the second is whether the machine Galaxy is running on can submit jobs to 
the cluster (ssh is mentioned in the original e-mail, suggesting that it's 
impossible to communicate with the cluster directly from the machine the job 
is running on). The Galaxy instance constantly communicates with the cluster 
job scheduling system (SGE or another scheduler) in order to get updates on 
the status of its jobs. It should be possible to do this over ssh, but, in my 
experience, it slows down the code.
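
For reference, when the Galaxy host can talk to the scheduler directly, the 
Config/Cluster wiki's DRMAA route is roughly the following. This is a sketch 
only; the library path is illustrative and the option names should be checked 
against your Galaxy version.

    # Point the DRMAA bindings at SGE's library before starting Galaxy
    export SGE_ROOT=/opt/sge                # illustrative path
    export DRMAA_LIBRARY_PATH=$SGE_ROOT/lib/lx24-amd64/libdrmaa.so
    # In universe_wsgi.ini, enable cluster dispatch along these lines:
    #   start_job_runners = drmaa
    #   default_cluster_job_runner = drmaa:///
    sh run.sh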

We run the Galaxy server on one of the cluster nodes. I imagine that other 
people using Galaxy with a cluster do the same. Do they?

Marina

On 15/06/2011 08:51, Peter Cock wrote:
On Wed, Jun 15, 2011 at 2:30 AM, Ka Ming Nip <km...@bcgsc.ca> wrote:
Hello,

I have been trying to set up Galaxy to run its jobs on my SGE
cluster using the Unified Method, based on the steps described
in the wiki:

https://bitbucket.org/galaxy/galaxy-central/wiki/Config/Cluster

My local instance of Galaxy is installed under my home directory,
but my cluster is on a different file system. I need to "ssh" to the
head node before I can submit jobs.

Is there any way to set up Galaxy with an SGE cluster on a different
file system?

I haven't looked into it in great detail, but we currently have a
similar network setup, and would also like to link Galaxy to SGE.

The main problem is the lack of a shared file system: in order
to run a job on the cluster, we (i.e. Galaxy) have to get the data onto
the cluster, run the job, and get the results back.
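
One could imagine a crude staging approach along these lines (host and path 
names are invented purely for illustration):

    # Copy inputs to the cluster, submit over ssh, then fetch results back
    scp dataset.fa user@headnode:/scratch/galaxy/job123/
    ssh user@headnode 'cd /scratch/galaxy/job123 && qsub -sync y run_tool.sh'
    scp user@headnode:/scratch/galaxy/job123/output.txt ./

Here qsub -sync y blocks until the SGE job finishes, which avoids separate 
status polling but ties up a process per job, so it is not obviously a good 
fit for Galaxy's job runners.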

Peter

--
Ravi K Madduri
The Globus Alliance | Argonne National Laboratory | University of Chicago
http://www.mcs.anl.gov/~madduri

___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/
