Re: [galaxy-dev] Switching Torque>PBSpro: qsub error 111 (cannot connect to server)

2012-02-17 Thread Fields, Christopher J
Re: PBSPro RPMs, that's actually good to know; we will likely need to use that here as well.

chris

On Feb 17, 2012, at 10:30 AM, Louise-Amélie Schmitt wrote:

> Thanks a lot Chris, actually we found another place with RPMs for PBS Pro:
> http://apps.man.poznan.pl/trac/pbs-drmaa
> 
> And it works now! :D
> 
> Have a great weekend,
> L-A
Re: [galaxy-dev] Switching Torque>PBSpro: qsub error 111 (cannot connect to server)

2012-02-17 Thread Louise-Amélie Schmitt
Thanks a lot Chris, actually we found another place with RPMs for PBS Pro:

http://apps.man.poznan.pl/trac/pbs-drmaa

And it works now! :D

Have a great weekend,
L-A


Re: [galaxy-dev] Switching Torque>PBSpro: qsub error 111 (cannot connect to server)

2012-02-17 Thread Fields, Christopher J
If they used a package install, I think the DRMAA libs for torque are normally 
separate.  For instance:

http://pkgs.org/download/torque-drmaa

If going from source, you can enable DRMAA compilation with --enable-drmaa; I 
don't recall if that is on by default (I don't think it is).

chris
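
For reference, a rough sketch of what building TORQUE from source with DRMAA
support looks like; the install prefix is a placeholder and other configure
options will vary by site:

  ./configure --prefix=/usr/local/torque --enable-drmaa
  make
  make install

Note that this builds TORQUE's own DRMAA library, not the FedStage/pbs-drmaa
library needed for PBS Pro elsewhere in this thread.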

On Feb 17, 2012, at 3:49 AM, Louise-Amélie Schmitt wrote:

> 
>> You don't need a full PBS server on that machine, just the client libraries 
>> and configuration.  i.e. if you can run 'qstat' from machine A and have it 
>> return the queue on machine B, that should be all you need.
>> 
> We asked the IT guys and they did just that :) qsub now works fine from machine A!
> But I still have a problem:
> They have no libdrmaa.so anywhere! What should I do? If I understood
> correctly, it's part of the FedStage DRMAA service provider; will it be
> enough to install it on our Galaxy server?
> 
>> 
>> Torque's syntax allows you to specify the server name right in the runner 
>> URL, but pbs_python links against libtorque, which is part of the torque 
>> client, which must be installed somewhere on the local system.
>> 
> Ok, thanks a lot!
> 
> Best,
> L-A

Re: [galaxy-dev] Switching Torque>PBSpro: qsub error 111 (cannot connect to server)

2012-02-17 Thread Louise-Amélie Schmitt



> You don't need a full PBS server on that machine, just the client libraries and
> configuration.  i.e. if you can run 'qstat' from machine A and have it return
> the queue on machine B, that should be all you need.

We asked the IT guys and they did just that :) qsub now works fine from
machine A!

But I still have a problem:
They have no libdrmaa.so anywhere! What should I do? If I understood
correctly, it's part of the FedStage DRMAA service provider; will it be
enough to install it on our Galaxy server?




> Torque's syntax allows you to specify the server name right in the runner URL,
> but pbs_python links against libtorque, which is part of the torque client,
> which must be installed somewhere on the local system.


Ok, thanks a lot!

Best,
L-A




Re: [galaxy-dev] Switching Torque>PBSpro: qsub error 111 (cannot connect to server)

2012-02-01 Thread Nate Coraor
On Feb 1, 2012, at 10:09 AM, Louise-Amélie Schmitt wrote:

> Hi Nate,
> 
> Yeah, looks like I have no choice...
> 
> I'll try to negotiate with the IT guys to run the runner on machine B.

You don't need a full PBS server on that machine, just the client libraries and 
configuration.  i.e. if you can run 'qstat' from machine A and have it return 
the queue on machine B, that should be all you need.
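
A quick way to verify this from machine A, assuming standard PBS client syntax
(the server name is the one used in the runner URL earlier in the thread):

  # list the queues known to the client's default server
  qstat -q
  # or ask a specific server explicitly
  qstat -q @sub-master

If either command returns the queues defined on machine B, the client side is
set up as described above.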

> 
> Thanks for all the information!
> 
> BTW, out of curiosity, is submitting jobs across machines possible with 
> Torque?

Torque's syntax allows you to specify the server name right in the runner URL, 
but pbs_python links against libtorque, which is part of the torque client, 
which must be installed somewhere on the local system.


Re: [galaxy-dev] Switching Torque>PBSpro: qsub error 111 (cannot connect to server)

2012-02-01 Thread Louise-Amélie Schmitt

Hi Nate,

Yeah, looks like I have no choice...

I'll try to negotiate with the IT guys to run the runner on machine B.

Thanks for all the information!

BTW, out of curiosity, is submitting jobs across machines possible with 
Torque?


Best,
L-A




Re: [galaxy-dev] Switching Torque>PBSpro: qsub error 111 (cannot connect to server)

2012-01-30 Thread Nate Coraor
On Jan 30, 2012, at 12:07 PM, Louise-Amélie Schmitt wrote:

> Hi Nate,
> 
> Thanks for the leads!
> 
> But setting DRMAA_LIBRARY_PATH means I'm in trouble since the libraries are 
> on machine B which is maintained by our IT dept. I cannot access them from 
> machine A.
> 
> Is it a desperate situation? Will it work if I have a copy of those libs 
> somewhere? :/

Hi L-A,

The Galaxy server will need to be a submission host, so I believe it will have 
to have PBS Pro installed.  If it has this, then the FedStage DRMAA library 
should be installable on the same host.  It may be possible to copy the libraries,
although I don't know whether you'd be able to configure the server address 
without access to the directories in which the library will look for its 
configuration.

--nate
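
For what it's worth, the client-side configuration referred to here is
typically the PBS Pro client install plus /etc/pbs.conf pointing at the
server. A minimal sketch, with the server name taken from the runner URL
earlier in the thread and the paths as placeholders:

  # /etc/pbs.conf on the Galaxy (submission) host
  PBS_SERVER=sub-master
  PBS_EXEC=/opt/pbs
  PBS_HOME=/var/spool/pbs
  PBS_START_SERVER=0
  PBS_START_MOM=0

With something like this in place, qsub/qstat on the Galaxy host talk to the
server on machine B, and libdrmaa can pick up the same configuration.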



Re: [galaxy-dev] Switching Torque>PBSpro: qsub error 111 (cannot connect to server)

2012-01-30 Thread Louise-Amélie Schmitt

Hi Nate,

Thanks for the leads!

But setting DRMAA_LIBRARY_PATH means I'm in trouble since the libraries 
are on machine B which is maintained by our IT dept. I cannot access 
them from machine A.


Is it a desperate situation? Will it work if I have a copy of those libs 
somewhere? :/


Best,
L-A




Re: [galaxy-dev] Switching Torque>PBSpro: qsub error 111 (cannot connect to server)

2012-01-30 Thread Nate Coraor
On Jan 30, 2012, at 7:37 AM, Louise-Amélie Schmitt wrote:

> Hi Nate,
> 
> Thanks for the info!
> 
> I'm trying to understand how the URL for DRMAA works, but I can't see how
> to set it so it uses a different machine.
> Our Galaxy runner runs on machine A and the cluster is on machine B; where do
> I put B in the URL?
> 
> In the wiki there is this example:
> drmaa://[native_options]/
> I'm a bit confused; I would have expected something like:
> drmaa://[machine]/[native_options]/
> like for TORQUE. Did I miss something?

Hi L-A,

Hrm, I've only used it with SGE, which uses an environment variable to define 
the cell location, and LSF, which I don't remember, but I assume it used the 
default.  I think if you configure the PBS Pro client on the submission host 
and point DRMAA_LIBRARY_PATH at the correct libdrmaa, it will use your client 
configuration.  There are other PBS Pro users on the list who can hopefully 
chime in with more details.

--nate
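
Concretely, that means exporting the variable in the environment Galaxy is
started from; the path below is a placeholder for wherever the FedStage
(pbs-drmaa) package installs libdrmaa.so:

  export DRMAA_LIBRARY_PATH=/opt/pbs-drmaa/lib/libdrmaa.so
  sh run.sh   # or however Galaxy is normally started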



Re: [galaxy-dev] Switching Torque>PBSpro: qsub error 111 (cannot connect to server)

2012-01-30 Thread Louise-Amélie Schmitt

Hi Nate,

Thanks for the info!

I'm trying to understand how the URL for DRMAA works, but I can't see how
to set it so it uses a different machine.
Our Galaxy runner runs on machine A and the cluster is on machine B;
where do I put B in the URL?

In the wiki there is this example:
drmaa://[native_options]/
I'm a bit confused; I would have expected something like:
drmaa://[machine]/[native_options]/
like for TORQUE. Did I miss something?

Best,
L-A




Re: [galaxy-dev] Switching Torque>PBSpro: qsub error 111 (cannot connect to server)

2012-01-19 Thread Nate Coraor

On Jan 16, 2012, at 5:22 AM, Louise-Amélie Schmitt wrote:

> Hello,
> 
> We want to move Galaxy's jobs from our small TORQUE local install to a big 
> cluster running PBS Pro.
> 
> In the universe_wsgi.ini, I changed the cluster address as follows:
> default_cluster_job_runner = pbs:///
> to:
> default_cluster_job_runner = pbs://sub-master/clng_new/
> where sub-master is the name of the machine and clng_new is the queue.
> 
> However, I get an error when trying to run any job:
> 
> galaxy.jobs.runners.pbs ERROR 2012-01-16 11:10:00,894 Connection to PBS 
> server for submit failed: 111: Could not find a text for this error, uhhh
> 
> This corresponds to the qsub error 111 (Cannot connect to specified server 
> host) which is, for some reason, caught by pbs_python as an error of its own 
> (111 not corresponding to any pbs_python error code, hence the 
> face-plant-message).
> 
> Our guess is that we might need to re-scramble the pbs_python egg with PBS
> Pro's libraries; is that correct?
> If it's the case, what do we have to set as LIBTORQUE_DIR?

Hi L-A,

pbs_python is only designed for TORQUE; I don't think it is compatible with the
PBS Pro API.  For that, you need to use the drmaa runner, which uses the 
FedStage libdrmaa for PBS Pro.

--nate
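
A minimal sketch of what switching the runner looks like in universe_wsgi.ini,
assuming the drmaa://[native_options]/ URL format mentioned above in this
thread; the option names follow the universe_wsgi.ini of that era, the queue
name is the one from the pbs:// URL quoted here, and whether "-q <queue>" is
the right native option string for a given PBS Pro site is an assumption:

  # universe_wsgi.ini
  start_job_runners = drmaa
  default_cluster_job_runner = drmaa://-q clng_new/

DRMAA_LIBRARY_PATH must also point at the FedStage/pbs-drmaa libdrmaa.so in
Galaxy's environment, as discussed above in this archive.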



___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/