Here's a little cleanup of the confusion:

Condor-G is a job submission client.
Condor is a resource manager.
GRAM is an abstract interface to resource managers.

So using Condor-G to submit to a GRAM that runs jobs on Condor is not as silly as it might sound.
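For example, here is roughly what the submit side of that chain looks like; a minimal sketch of a gt4-universe Condor-G submit file, with a made-up GRAM hostname:

    # Condor-G submit description; gram.example.org is hypothetical
    universe      = grid
    grid_resource = gt4 https://gram.example.org:8443/wsrf/services/ManagedJobFactoryService Condor
    executable    = /bin/hostname
    output        = job.out
    error         = job.err
    log           = job.log
    queue

The trailing "Condor" picks the Condor scheduler adapter on the GRAM side, so the job ends up in the remote pool rather than just forking on the GRAM host.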

I think maybe you are trying to solve this all in your head before you start working with it; I'm not sure that's going to work. It sounds like what you want is:

A condor pool.
A machine running GRAM that can run condor_submit to your condor pool.
A client that runs Pegasus+Condor-G.

Then you can use your client to submit a job to your GRAM host and see if that's enough. If it's not, it sounds like what you additionally need is an installation of Globus shared by your condor pool nodes; they don't need to be running any services, they just need access to the client binaries.
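A quick sanity check of the GRAM host by itself, before Pegasus and Condor-G enter the picture (same made-up hostname):

    globusrun-ws -submit -s -Ft Condor \
        -F https://gram.example.org:8443/wsrf/services/ManagedJobFactoryService \
        -c /bin/hostname

If that prints an execute node's hostname, the GRAM-to-Condor hop works and you can layer the rest on top.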


Charles

On Oct 1, 2008, at 11:40 PM, Yoichi Takayama wrote:

Hi

I need your advice.

Neither the Condor manual nor the Globus manual is very clear about a deployment topology that works.

The figures below are the only references I can find, so I am guessing.

If I am configuring a local Condor pool managed by Globus that can run Pegasus (the DAG generator for Globus), do I need something like the following???

1. Condor submit machines with Pegasus, Condor-G and the Condor submit daemon installed. (There may be several of these, all part of the local Condor cluster, i.e. sharing the same pool name. They may create local or remote jobs.) Do these machines need to have Globus installed, or to be told where the Globus manager is? I have not encountered how to configure Condor-G to know that. If a job using the gt4 universe is sent to the local Condor submit daemon, does it automatically know to call Globus, provided setup-globus-gram-job-manager has been run by Globus on that node? Or is the job sent to the central manager(s) first, and do they work it out there? Do only the managers need to know the association???

For the local cluster:

2. A Globus manager machine with Globus services (GRAM, RFT, GridFTP) installed, configured to use Condor as the job manager (see my guess below this list). (Pegasus submits a job to Condor-G, which uses Globus to manage the job but Condor to run it??)

3. A Condor manager machine which Globus is configured to use. (This could be the Globus manager machine.)

4. A Condor cluster with execute nodes (which may mount NFS to share the executables and data needed for the jobs??) - no GridFTP.

(If a Globus job is to run on a remote site, the site also needs to have GridFTP installed.)
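For 2, my guess at how GRAM gets pointed at Condor, from what I can see of the GT4 installer docs (I am not sure this is complete, or even the right target name):

    # in the GT4 installer source tree, after the base install:
    make gt4-gram-condor install
    # I believe the adapter then calls condor_submit, so that must be on the container's PATH

Is that all that is needed?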

Thanks,
Yoichi

---------------------------------------------------------------------------------------------------------
This figure is from GT4 Primer:

<GRAM in GT4.png>



---------------------------------------------------------------------------------------------------------
This figure is from Condor 7.0.4 manual (for gt2 job):
<gt2 Universe job.png>




--------------------------------------------------------------------------
Yoichi Takayama, PhD
Senior Research Fellow
RAMP Project
MELCOE (Macquarie E-Learning Centre of Excellence)
MACQUARIE UNIVERSITY

Phone: +61 (0)2 9850 9073
Fax: +61 (0)2 9850 6527
www.mq.edu.au
www.melcoe.mq.edu.au/projects/RAMP/
--------------------------------------------------------------------------

On 02/10/2008, at 12:12 AM, Charles Bacon wrote:

Not really, no. The idea is that you run some kind of local scheduler (like, say, Condor) and just use a single Globus node to address those instances. If you need access to the client binaries, you can mount them on NFS. But there's not much reason for a single resource pool to need more than one GRAM, RFT, or Index service to talk to it.
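So on each node that needs the clients, it's just a matter of pointing at the shared install; the mount point below is made up:

    # assuming the shared Globus install is NFS-mounted at /nfs/globus
    export GLOBUS_LOCATION=/nfs/globus
    . $GLOBUS_LOCATION/etc/globus-user-env.sh

After that the command-line clients (globusrun-ws, globus-url-copy, etc.) are on your PATH without running any services locally.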


Charles

