Re: [galaxy-dev] pbs_python can't be installed via scramble

2015-09-23 Thread Will Holtz
Hi Makis,

The thread I posted does have the exact same container.hpp error and
proposes a solution.

Did you try setting your PBS_PYTHON_INCLUDEDIR to point to your
torque/include directory, as mentioned in that thread?

from the thread:

export PBS_PYTHON_INCLUDEDIR=/usr/local/torque/include/
export PBSCONFIG=/usr/local/torque/bin/pbs-config
export LIBTORQUE_DIR=/usr/local/torque/lib/libtorque.so
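
For example, after exporting those three variables (adjusted to wherever Torque
is actually installed on your machine), re-run the scramble command from your
first message in the same shell:

# with PBS_PYTHON_INCLUDEDIR, PBSCONFIG and LIBTORQUE_DIR exported as above
python scripts/scramble.py -e pbs_python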


-Will


On Wed, Sep 23, 2015 at 5:49 AM, Makis Ladoukakis wrote:

> Hello Will,
>
> Thank you for your reply. I had already checked that thread while browsing
> the mailing list for "pbs_python" issues. It doesn't have the same errors
> (mine can't locate log.h, and when I provide it, it then wants
> container.hpp), and up to a point I've already followed the "rescrambling"
> of pbs_python.
>
> Anyone else had any experience with that?
>
> Thank you,
> Makis
>
> --
> Date: Tue, 22 Sep 2015 09:35:11 -0700
> Subject: Re: [galaxy-dev] pbs_python can't be installed via scramble
> From: who...@lygos.com
> To: makis4e...@hotmail.com
> CC: galaxy-dev@lists.galaxyproject.org
>
>
> Here is an old thread that looks rather similar to your problems:
>
> https://www.mail-archive.com/galaxy-dev@lists.galaxyproject.org/msg00078.html
>
> -Will
>
>
> On Tue, Sep 22, 2015 at 2:43 AM, Makis Ladoukakis wrote:
>
> Hello everyone,
>
> I'm trying to set up a Galaxy instance on a multi-core server at my
> university, following the instructions here:
>
> https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster#PBS
>
> I installed Torque, changed the eggs.ini file to add version 4.4.0 of
> pbs_python, and tried to set up pbs_python via scramble:
>
> LIBTORQUE_DIR=/usr/local/lib/ python scripts/scramble.py -e pbs_python
>
> When I did that I got the error:
>
> Failed to find log.h in inlcude dir /usr/include/torque. (Set incude dir
> via PBS_PYTHON_INCLUDEDIR variable)
> Traceback (most recent call last):
>   File "scripts/scramble.py", line 50, in 
> egg.scramble()
>   File "/home/galaxy/galaxy/lib/galaxy/eggs/scramble.py", line 57, in
> scramble
> self.run_scramble_script()
>   File "/home/galaxy/galaxy/lib/galaxy/eggs/scramble.py", line 210, in
> run_scramble_script
> raise ScrambleFailure( self, "%s(): Egg build failed for %s %s" % (
> sys._getframe().f_code.co_name, self.name, self.version ) )
> galaxy.eggs.scramble.ScrambleFailure: run_scramble_script(): Egg build
> failed for pbs_python 4.4.0
>
> so I did some digging around and found that the file I need is in
>
> /scripts/scramble/build/py2.7-linux-x86_64-ucs4/pbs_python/src/C++/
>
> (please correct me if I am wrong)
>
> So I tried again using:
>
>  
> PBS_PYTHON_INCLUDEDIR=/home/galaxy/galaxy/scripts/scramble/build/py2.7-linux-x86_64-ucs4/pbs_python/src/C++/
> LIBTORQUE_DIR=/usr/local/lib/ python scripts/scramble.py -e pbs_python
>
> but then I got the error:
>
> In file included from src/C++/pbs_ifl.h:90:0,
>  from src/C++/pbs_wrap.cxx:2978:
> /usr/local/include/u_hash_map_structs.h:82:25: fatal error: container.hpp:
> No such file or directory
>  #include "container.hpp"
>
>
> Can someone help me please?
>
> Kind regards,
> Makis
>
>
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
>   https://lists.galaxyproject.org/
>
> To search Galaxy mailing lists use the unified search at:
>   http://galaxyproject.org/search/mailinglists/
>
>
>
>




Re: [galaxy-dev] Galaxy on a Cluster -- Active Directory LDAP configuration

2015-09-23 Thread Carlos Lijeron
Good day Nicola,

Thank you for the advice on LDAP authentication. Did you need to run Galaxy
behind Apache to enable this, or did you simply change some configuration files
to enable it? I think we'll be able to write some scripts using SLURM commands
to pull data about resource utilization, but my main concern is whether we need
Apache to enable LDAP authentication of Galaxy users.

Thanks again !


Carlos

From: Nicola Soranzo
Date: Tuesday, September 22, 2015 at 6:59 AM
To: Carlos Lijeron, "galaxy-dev@lists.galaxyproject.org"
Subject: Re: [galaxy-dev] Galaxy on a Cluster -- Active Directory LDAP configuration

Hi Carlos,
we are using Active Directory LDAP for user authentication, which works pretty
well, but we use only the Galaxy user to submit jobs to the LSF cluster queue,
so I can't help with the resource tracking.
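
For what it's worth, the Active Directory connection details can be
sanity-checked outside Galaxy with a plain ldapsearch before wiring them into
any configuration (the server, bind user, search base and account below are
placeholders):

# simple bind against the AD domain controller and look up one account
ldapsearch -x -H ldap://dc.example.org \
    -D "binduser@example.org" -W \
    -b "dc=example,dc=org" "(sAMAccountName=jdoe)" cn mail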

Cheers,
Nicola

On 21/09/15 20:19, Carlos Lijeron wrote:
Everyone,

We are setting up Galaxy to work with our cluster and SLURM as the workload
manager.  The cluster itself authenticates to our local Active Directory, so
I’m wondering if the best way to track resource utilization of Galaxy users on 
the cluster is to also have Galaxy authenticate to the same Active Directory 
LDAP.

Is anyone on this list using the same configuration and tracking resource 
utilization from Galaxy users submitting jobs to the cluster nodes?   Please 
advise.

Thank you all !


Carlos.



___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  https://lists.galaxyproject.org/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Galaxy on a Cluster -- Active Directory LDAP configuration

2015-09-23 Thread Carlos Lijeron
David,

Thank you for the valuable feedback. I will look into the galaxy-pulsar server
app and determine how we can implement it to connect our Galaxy instance to the
worker nodes. Perhaps I'll direct a few relevant questions your way while we get
on board with the implementation, if you don't mind. If you have any notes or
lessons learned that you could share, that would be very helpful to us. In any
case, this idea already seems like a good solution for us. I'll keep all my
notes and write up step-by-step instructions after we are done.


Carlos.


From: David Trudgian
Date: Tuesday, September 22, 2015 at 11:53 AM
To: Nicola Soranzo, Carlos Lijeron, "galaxy-dev@lists.galaxyproject.org"
Subject: RE: [galaxy-dev] Galaxy on a Cluster -- Active Directory LDAP configuration

Hi Carlos,

We aren’t using Active Directory here – but we do use an OpenLDAP directory 
which our cluster authenticates against, as does Galaxy. In order to track 
cluster usage for users running Galaxy jobs we use galaxy-pulsar between Galaxy 
itself and our SLURM cluster. Pulsar is configured to submit all jobs as a real 
user on the cluster, via SLURM-DRMAA. This means that all of the Galaxy usage 
on the cluster appears in our SLURM accounting database just like any other job 
would.  The complexity here is file ownership. Pulsar has to copy all input 
files into a staging directory, and change ownership to the real user for the 
job to run on the cluster. It is/was(?) a little complex to set up, as there are
more parts involved than a typical Galaxy install, but it works great for us
here.
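
As a side note, once jobs go in under the real usernames like this, the per-user
numbers can be pulled straight from SLURM accounting with sacct; for example
(the username and date range below are placeholders):

# September usage for one cluster user, including jobs Pulsar submitted on their behalf
sacct -u jdoe --starttime=2015-09-01 --endtime=2015-09-30 \
      --format=JobID,JobName,Partition,Elapsed,CPUTime,State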

When I was getting this set up, John Chilton mentioned adding a PulsarEmbedded
runner to Galaxy at some point. I'm not sure whether this has happened or is
happening, but it'd make these situations easier:

https://trello.com/c/4YwVZBtq/1865-embedded-pulsar-job-runner


DT

--
David Trudgian Ph.D.
Computational Scientist, BioHPC
UT Southwestern Medical Center
Dallas, TX 75390-9039
Tel: (214) 648-4833

From: galaxy-dev [mailto:galaxy-dev-boun...@lists.galaxyproject.org] On Behalf 
Of Nicola Soranzo
Sent: Tuesday, September 22, 2015 5:59 AM
To: Carlos Lijeron; galaxy-dev@lists.galaxyproject.org
Subject: Re: [galaxy-dev] Galaxy on a Cluster -- Active Directory LDAP 
configuration

Hi Carlos,
we are using Active Directory LDAP for user authentication, which works pretty
well, but we use only the Galaxy user to submit jobs to the LSF cluster queue,
so I can't help with the resource tracking.

Cheers,
Nicola
On 21/09/15 20:19, Carlos Lijeron wrote:
Everyone,

We are setting up Galaxy to work with our cluster and SLURM as the workload
manager.  The cluster itself authenticates to our local Active Directory, so
I’m wondering if the best way to track resource utilization of Galaxy users on 
the cluster is to also have Galaxy authenticate to the same Active Directory 
LDAP.

Is anyone on this list using the same configuration and tracking resource 
utilization from Galaxy users submitting jobs to the cluster nodes?   Please 
advise.

Thank you all !


Carlos.






___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  https://lists.galaxyproject.org/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

[galaxy-dev] Galaxy Interactive Environment devs

2015-09-23 Thread Eric Rasche
Howdy y'all,

I'm building some changes to the IE infrastructure that will fix some
initial hacks we did, however they will be backwards incompatible.
Specifically these changes will fix some of the enterprise deployment
pains (running with an external upstream proxy like apache) and allow
for safely running IEs on a separate host.

I'm emailing the dev list to see if anyone else has started building
IEs, or if it's just Björn and myself. Feel free to make yourselves known
to us so we know who to contact when we're planning new features :)

If you're just building on top of our existing containers, this won't be
an issue for you; you'll just need to rebuild your containers once we
release the 15.10 versions.

Cheers,
Eric

-- 
Eric Rasche
Programmer II

Center for Phage Technology
Rm 312A, BioBio
Texas A&M University
College Station, TX 77843
404-692-2048
e...@tamu.edu
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  https://lists.galaxyproject.org/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Running Galaxy on a cluster with SLURM ?

2015-09-23 Thread Sven E. Templer
Hi Carlos,

next steps might include:

* install slurm drmaa

http://apps.man.poznan.pl/trac/slurm-drmaa

e.g.

curl -#o slurm-drmaa-1.0.7.tar.gz http://apps.man.poznan.pl/trac/slurm-drmaa/downloads/9
tar -xf slurm-drmaa-1.0.7.tar.gz
cd slurm-drmaa-1.0.7
p=$(which srun)
p=${p%/bin/srun}/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH="$p"
./configure #CFLAGS="-g -O0"
make
sudo make install
# rm -r slurm-drmaa*
# Make the config file writeable by group_galaxy:
sudo touch /etc/slurm_drmaa.conf
sudo cp /etc/slurm_drmaa.conf /etc/slurm_drmaa.conf.original
sudo chown :group_galaxy /etc/slurm_drmaa.conf
sudo chmod g+w /etc/slurm_drmaa.conf
# Test the drmaa-run binary by submitting a small job:
export DRMAA_LIBRARY_PATH=/usr/local/lib/libdrmaa.so
echo 'echo "Test executed on host $(hostname) by user $USER"' > test.drmaa
drmaa-run bash test.drmaa

* configure /etc/slurm_drmaa.conf [
https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster#DRMAA], e.g.

job_categories: {
  default: "-J galaxy -p galaxy_partition",
},

* configure config/job_conf.xml [
https://wiki.galaxyproject.org/Admin/Config/Jobs], e.g.

<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="slurm" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner">
            <param id="drmaa_library_path">/usr/local/lib/libdrmaa.so</param>
        </plugin>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="slurm_default">
        <destination id="slurm_default" runner="slurm"/>
        <destination id="slurm_multicore" runner="slurm">
            <param id="nativeSpecification">--ntasks=12</param>
        </destination>
    </destinations>
    <tools>
        <!-- map individual tools (hypothetical id below) to the multicore destination -->
        <tool id="my_multithreaded_tool" destination="slurm_multicore"/>
    </tools>
</job_conf>

* Have fun!
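
And two quick sanity checks before the fun (the partition name and paths are
the ones used in the examples above; adjust them to your site):

# confirm the partition referenced in job_categories exists and has nodes
sinfo -p galaxy_partition

# make sure the Galaxy user's shell knows where libdrmaa.so lives, then
# restart Galaxy from its root directory so job_conf.xml is re-read
export DRMAA_LIBRARY_PATH=/usr/local/lib/libdrmaa.so
cd /path/to/galaxy   # hypothetical install path
sh run.sh --daemon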

Sven

On 21 September 2015 at 21:14, Carlos Lijeron wrote:

> Hello everyone,
>
> I'm setting up Galaxy to run on our cluster which uses SLURM as the job
> manager.   I am wondering if any of you have any expertise in this area,
> ideas to share or lessons learned.   I carried out the following steps
> listed below, but I can't make the final connection just yet.  Any
> pointers will be greatly appreciated.
>
> 1. Setup Galaxy to run on the head node
> 2. Setup MySQL and it's running well with Galaxy
> 3. Setup the drmaa-python library inside our Galaxy folder
>
>
> Main questions:
>
> What would be the next step to take?
>
> Where do I set up Galaxy to use an exclusive pool of resources (called a
> partition in SLURM)?
>
>
>
> Thank you all !
>
>
> Carlos.
>
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
>   https://lists.galaxyproject.org/
>
> To search Galaxy mailing lists use the unified search at:
>   http://galaxyproject.org/search/mailinglists/



___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  https://lists.galaxyproject.org/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/