Re: [galaxy-dev] Tool Shed Workflow

2012-06-08 Thread John Chilton
On Fri, Jun 8, 2012 at 3:27 PM, Greg Von Kuster  wrote:
> Hi John,
>
> On Jun 8, 2012, at 1:22 PM, John Chilton wrote:
>
>> Hello Greg,
>>
>> Thanks for the prompt and detailed response (though it did make me
>> sad). I think deploying tested, static components and configurations
>> to production environments and having production environments not
>> depending on outside services (like the tool shed) should be
>> considered best practices.
>
> I'm not sure I understand this issue.  What processes are you using to 
> upgrade your test and production servers with new Galaxy distributions?  If 
> you are pulling
> new Galaxy distributions from our Galaxy dist repository in bitbucket, then 
> pulling tools from the Galaxy tool shed is not much different - both are 
> outside services.  Updating your test environment, determining it is 
> functionally correct, and then updating your production environment using the 
> same approach would generally follow a best practice approach.  This is the 
> approach we are currently using for our public test and main Galaxy instances 
> at Penn State.

We don't pull from bitbucket directly into our production
environment; we pull galaxy-dist changes into our testing repository,
merge (that can be quite complicated, sometimes a multi-hour process),
auto-deploy to a testing server, and then finally push the tested
changes into a bare production repo. Our sys admins then pull
changes from that bare production repo into our production environment.
We also prebuild eggs in our testing environment rather than live on our
production system. Given the complicated merges we need to do and the
configuration files that need to be updated with each dist update, it
would seem that making those changes on a live production system would be
problematic.
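
Roughly, that workflow looks like the following (the repository paths and
names here are placeholders, not our actual layout):

cd ~/galaxy-testing
hg pull https://bitbucket.org/galaxy/galaxy-dist    # bring in the new dist changesets
hg merge                                            # the sometimes multi-hour part
hg commit -m "merge galaxy-dist"
# auto-deploy to the testing server, prebuild eggs, run tests ...
hg push /repos/galaxy-production                    # bare production repo
# sys admins later run 'hg pull -u' from that repo on the production host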

Even if one were pulling changes directly from bitbucket into a
production codebase, I think the dependency on bitbucket would be very
different from a dependency on N tool sheds. If our sys admin is going to
update Galaxy and bitbucket is down, that is no problem; he or she can just
bring Galaxy back up and update later. Now let's imagine they shut down
our Galaxy instance, updated the code base, did a database migration,
and then a tool shed migration failed. In this case, instead of just
bringing Galaxy back up, they will now need to restore the database from
backup and back out the mercurial changes.

Anyway, all of that is a digression. I understand that we will
need to have the deploy-time dependencies on tool sheds and make these
tool migration script calls part of our workflow. My lingering hope is
for a way of programmatically importing and updating new tools that
were never part of Galaxy (Qiime, upload_local_file, etc.) using
tool sheds. My previous e-mail was proposing a mechanism
for doing that, but I think you read it as if I were describing
a way to script the migrations of the existing official Galaxy tools
(I definitely get that you have done that).

Thanks again for your time and detailed responses,
-John

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Proftpd Problem

2012-06-08 Thread CHEBBI Mohamed Amine
Hi Nate!
Thank you for your response. I have finally fixed the problem. In fact, I
had to add the following line to /etc/proftpd.modules.conf:
AuthOrder mod_sql.c mod_auth_unix.c

This enables checking the user login and password against the SQL tables.
So now I can upload data by FTP from my Galaxy session.
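
For reference, here is roughly how that directive fits alongside the mod_sql
setup from the wiki page (a sketch only; the connection details below are
placeholders, not my real configuration):

# /etc/proftpd.modules.conf
LoadModule mod_sql.c
LoadModule mod_sql_postgres.c
AuthOrder mod_sql.c mod_auth_unix.c

# /etc/proftpd.conf (user lookups against the Galaxy database)
SQLAuthenticate users
SQLConnectInfo  galaxydb@localhost galaxyftp some_password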
Thank you
Amine

2012/6/8 Nate Coraor 

> On Jun 4, 2012, at 8:58 AM, CHEBBI Mohamed Amine wrote:
>
> >
> >
> > -- Forwarded message --
> > From: CHEBBI Mohamed Amine 
> > Date: 2012/5/25
> > Subject: Proftpd Problem
> > To: galaxy-dev@lists.bx.psu.edu
> >
> >
> >  Hi,
> > I'm trying to set up ProFTPD on Debian to enable FTP upload on a locally
> > installed Galaxy. I followed the instructions on this link
> > (http://wiki.g2.bx.psu.edu/Admin/Config/Upload%20via%20FTP) but it doesn't
> > connect when using the user email.
> > The proftpd.log shows these messages:
> > juin 04 14:43:56 rosalind proftpd[31368] rosalind (193.55.24.4[193.55.24.4]): FTP session opened.
> > juin 04 14:43:56 rosalind proftpd[31368] rosalind (193.55.24.4[193.55.24.4]): notice: unable to use '~/' [resolved to '/usr/local/appli/Galaxy/$
> > juin 04 14:43:56 rosalind proftpd[31368] rosalind (193.55.24.4[193.55.24.4]): Preparing to chroot to directory '~/'
> > juin 04 14:43:56 rosalind proftpd[31368] rosalind (193.55.24.4[193.55.24.4]): chroot to '~/' failed for user 'chebbimam...@hotmail.com': Opérat$
> > juin 04 14:43:56 rosalind proftpd[31368] rosalind (193.55.24.4[193.55.24.4]): error: unable to set default root directory
> > juin 04 14:43:56 rosalind proftpd[31368] rosalind (193.55.24.4[193.55.24.4]): FTP session closed.
> >
> > Could someone help me?
>
> Hi Amine,
>
> Could you post your proftpd.conf, with any sensitive information (e.g.
> database server address/password) removed?
>
> Thanks,
> --nate
>
> > Thanks
> > Amine
> > ___
> > Please keep all replies on the list by using "reply all"
> > in your mail client.  To manage your subscriptions to this
> > and other Galaxy lists, please use the interface at:
> >
> >  http://lists.bx.psu.edu/
>
>
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] How does Galaxy access datasets?

2012-06-08 Thread Dorset, Daniel C
Thanks Nate! That's very helpful.

-Original Message-
From: Nate Coraor [mailto:n...@bx.psu.edu] 
Sent: Friday, June 08, 2012 3:31 PM
To: Dorset, Daniel C
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] How does Galaxy access datasets?

On Jun 5, 2012, at 6:08 PM, Dorset, Daniel C wrote:

> I'm trying to troubleshoot why I can't retrieve output from my Galaxy cluster 
> instance. I notice that when I click on any output, the URL is something like:
>  
> http://[root galaxy address]/datasets/[some 16-character 
> hash]/display/[file name]
>  
> I'm not able to find the "datasets" directory on the local machine, and I 
> couldn't figure anything out by searching the paster.log file and the apache 
> access and error logs. Everytime I try to access output, it downloads a 
> zero-byte file. The files that I want to download through Galaxy are in the 
> correct subdirectory of /database/files/... If someone could explain to be 
> what's going on "behind the scenes," it would help me quite a bit. I'm 
> guessing that the absolute path is stored in a database, but beyond that I 
> don't know any specifics. If that's the case, knowing the relevant database 
> tables would be a huge hint.

Hi Dan,

The hash is decoded and converted to an ID using code in 
lib/galaxy/web/security/__init__.py and the value of id_secret in 
universe_wsgi.ini.  The decoded id is then passed to code in 
lib/galaxy/objectstore/__init__.py to assemble the path underneath 
galaxy-dist/database/files/

You may find galaxy-dist/scripts/helper.py useful in converting history dataset IDs to 
filesystem paths (they are also available directly in the web interface if you 
are an administrator or set expose_dataset_path = True in universe_wsgi.ini).

--nate



___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] How does Galaxy access datasets?

2012-06-08 Thread Nate Coraor
On Jun 5, 2012, at 6:08 PM, Dorset, Daniel C wrote:

> I’m trying to troubleshoot why I can’t retrieve output from my Galaxy cluster 
> instance. I notice that when I click on any output, the URL is something like:
>  
> http://[root galaxy address]/datasets/[some 16-character hash]/display/[file 
> name]
>  
> I’m not able to find the “datasets” directory on the local machine, and I 
> couldn’t figure anything out by searching the paster.log file and the apache 
> access and error logs. Every time I try to access output, it downloads a 
> zero-byte file. The files that I want to download through Galaxy are in the 
> correct subdirectory of /database/files/… If someone could explain to me 
> what’s going on “behind the scenes,” it would help me quite a bit. I’m 
> guessing that the absolute path is stored in a database, but beyond that I 
> don’t know any specifics. If that’s the case, knowing the relevant database 
> tables would be a huge hint.

Hi Dan,

The hash is decoded and converted to an ID using code in 
lib/galaxy/web/security/__init__.py and the value of id_secret in 
universe_wsgi.ini.  The decoded id is then passed to code in 
lib/galaxy/objectstore/__init__.py to assemble the path underneath 
galaxy-dist/database/files/

You may find galaxy-dist/scripts/helper.py useful in converting history dataset IDs to 
filesystem paths (they are also available directly in the web interface if you 
are an administrator or set expose_dataset_path = True in universe_wsgi.ini).
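
If you want to decode a hash by hand, a minimal sketch (assuming galaxy-dist/lib
is on your PYTHONPATH, the id_secret below is replaced with the value from your
universe_wsgi.ini, and the hash is just an example value):

import sys
sys.path.insert( 0, "/path/to/galaxy-dist/lib" )
from galaxy.web.security import SecurityHelper

security = SecurityHelper( id_secret="USING THE DEFAULT IS NOT SECURE!" )
# prints the integer id used to build the path under database/files/
print security.decode_id( "f2db41e1fa331b3e" )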

--nate

>  
> Thanks!
>  
> Dan
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Tool Shed Workflow

2012-06-08 Thread Greg Von Kuster
Hi John,

On Jun 8, 2012, at 1:22 PM, John Chilton wrote:

> Hello Greg,
> 
> Thanks for the prompt and detailed response (though it did make me
> sad). I think deploying tested, static components and configurations
> to production environments and having production environments not
> depending on outside services (like the tool shed) should be
> considered best practices.

I'm not sure I understand this issue.  What processes are you using to upgrade 
your test and production servers with new Galaxy distributions?  If you are 
pulling
new Galaxy distributions from our Galaxy dist repository in bitbucket, then 
pulling tools from the Galaxy tool shed is not much different - both are 
outside services.  Updating your test environment, determining it is 
functionally correct, and then updating your production environment using the 
same approach would generally follow a best practice approach.  This is the 
approach we are currently using for our public test and main Galaxy instances 
at Penn State.

> 
> Oh well, I guess. Would there be a way to at least automate the
> pulling in of tools?

The process is completely automated - all you need to do is execute a script, 
something like this:

sh ./scripts/migrate_tools/0002_tools.sh

This is the same process used when the Galaxy database schema migrates as part 
of a new Galaxy release, except in that case you would run a script like this:

sh manage_db.sh upgrade


> For instance, would it make sense to tweak
> InstallManager to parse a new kind of migration file that is a lot
> like the "official" migration files, but with the sections defined in
> the file. For this new kind of migration,  the InstallManager would
> then import everything in the file and not just the tools that are
> also in a tool_conf? Does that make sense? If yes, I imagine it could
> be modified to handle updates the same way?

If I understand this correctly, this is how the InstallManager works.  The 
entire tool shed repository is installed into your local Galaxy environment, 
but only the tools that are currently defined in your tool_conf.xml file are 
loaded into your tool panel.  

> 
> Rephrased, I guess the idea would be to have the sequence of official
> galaxy migrations that check tool_conf, and then have a sequence of
> migrations defined by the deployer that could be used to install new
> tools or update existing ones.
> 
> My concern isn't just with the dev to production transition, it is
> also the ability to sort of programmatically define Galaxy
> installations the way I am doing with the galaxy-vm-launcher
> (https://bitbucket.org/jmchilton/galaxy-vm-launcher) or the way
> mi-deployment works.

You have complete control with these migrations.  You can choose not to install 
any tool shed repositories, and just start your Galaxy server.  If you choose 
to install the defined repositories, you have control over which specific tools 
included in the repository are loaded into your tool panel by having them 
defined in your tool_conf.xml prior to the installation.  This whole process is 
associated only with tools that have moved from the Galaxy distribution to the 
tool shed.


> 
> Thanks again for your time and patience in explaining these things to me,
> -John
> 
> On Fri, Jun 8, 2012 at 4:19 AM, Greg Von Kuster  wrote:
>> Hi John,
>> 
>> On Jun 7, 2012, at 11:55 PM, John Chilton wrote:
>> 
>>> I have read through the documentation a couple times, but I still have
>>> a few questions about the recent tool shed enhancements.
>>> 
>>> At MSI we have a testing environment and a production environment and
>>> I want to make sure the tool versions and configurations don't get out
>>> of sync, I would also like to test everything in our testing
>>> environment before it reaches production.
>>> 
>>> Is there a recommended way to accomplish this rather than just
>>> manually repeating the same set of UI interactions twice?
>>> 
>>> Can I just import tools through the testing UI and run the
>>> ./scripts/migrate_tools/ scripts on our testing repository and
>>> then move the resulting migrated_tools_conf.xml and
>>> integrated_tool_panel.xml files into production? I have follow up
>>> questions, but I will wait for a response on this point.
>> 
>> Tools that used to be in the Galaxy distribution but have been moved to the 
>> main Galaxy tool shed are automatically installed when you start up your 
>> Galaxy server and presented with the option of running the migration script 
>> to automatically install the tools that were migrated in the specific Galaxy 
>> distribution release.  If you choose to install the tools, they are 
>> installed only in that specific Galaxy instance.  Installation produces 
>> mercurial repositories that include the tools on disk in your Galaxy server 
>> environment.  Several other things are produced as well, including database 
>> records for the installation.  Each Galaxy instance consists of its own 
>> separate set of components, so this installation

Re: [galaxy-dev] Remote Job Submission from Local Install?

2012-06-08 Thread Gallant, Jason
Hi Nate,

Excellent to hear!  I will stay tuned for this most welcome update.  In the 
meantime, I welcome any other users' input on their solutions :)

Cheers,
Jason

-
Jason Gallant, Ph.D.
Postdoctoral Research Associate
Department of Biology
BRB 224, 5 Cummington Street
Boston University
Boston, MA 02215

Lab: 617-358-4590
Office: 617-358-3291
www.jgallant.org


On Jun 8, 2012, at 4:21 PM, Nate Coraor wrote:

> On Jun 4, 2012, at 4:26 PM, Gallant, Jason wrote:
> 
>> Greetings Galaxy Devs!
>> 
>> I've been doing a fair amount of poking around in the forums and on the wiki 
>> but haven't found a satisfying answer to my query...
>> 
>> Has anyone come up with a workable solution to run jobs from a local 
>> installation of galaxy on a campus cluster or on XSEDE/TeraGrid?  While it 
>> might be possible for us (though probably a headache) to install galaxy on 
>> one of the head nodes on campus, it seems very unlikely for me to be able to 
>> do something like this on a resource such as Blacklight.  
> 
> Hi Jason,
> 
> There's work under way for Galaxy to support resources at a higher level so 
> that it doesn't need direct access to a cluster or a shared filesystem.  This 
> would include support for XSEDE resources.  We hope to complete it this 
> summer.
> 
> In addition, some people on this list have made their own modifications to 
> Galaxy to support environments like this, so perhaps some of them will post 
> up with their solutions.
> 
> --nate
> 
>> It seems that over the years there have been a few mentions of different 
>> strategies taken to accomplish something like this, but the details on this 
>> seem scarce.  If this is something that someone on the list has worked out, 
>> does anyone have time to describe their setup in more detail?  
>> 
>> More generally, are there plans in the Galaxy roadmap to include this as a 
>> potential feature in future releases?  It seems that many small labs like 
>> ours would find this feature quite useful.
>> 
>> Appreciative of any help you might be able to provide-- keep up the great 
>> work Galaxy developers!
>> -
>> Jason Gallant, Ph.D.
>> Postdoctoral Research Associate
>> Department of Biology
>> BRB 224, 5 Cummington Street
>> Boston University
>> Boston, MA 02215
>> 
>> Lab: 617-358-4590
>> Office: 617-358-3291
>> www.jgallant.org
>> 
>> 
>> 
>> ___
>> Please keep all replies on the list by using "reply all"
>> in your mail client.  To manage your subscriptions to this
>> and other Galaxy lists, please use the interface at:
>> 
>> http://lists.bx.psu.edu/
> 


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Remote Job Submission from Local Install?

2012-06-08 Thread Nate Coraor
On Jun 4, 2012, at 4:26 PM, Gallant, Jason wrote:

> Greetings Galaxy Devs!
> 
> I've been doing a fair amount of poking around in the forums and on the wiki 
> but haven't found a satisfying answer to my query...
> 
> Has anyone come up with a workable solution to run jobs from a local 
> installation of galaxy on a campus cluster or on XSEDE/TeraGrid?  While it 
> might be possible for us (though probably a headache) to install galaxy on 
> one of the head nodes on campus, it seems very unlikely for me to be able to 
> do something like this on a resource such as Blacklight.  

Hi Jason,

There's work under way for Galaxy to support resources at a higher level so 
that it doesn't need direct access to a cluster or a shared filesystem.  This 
would include support for XSEDE resources.  We hope to complete it this summer.

In addition, some people on this list have made their own modifications to 
Galaxy to support environments like this, so perhaps some of them will post up 
with their solutions.

--nate

> It seems that over the years there have been a few mentions of different 
> strategies taken to accomplish something like this, but the details on this 
> seem scarce.  If this is something that someone on the list has worked out, 
> does anyone have time to describe their setup in more detail?  
> 
> More generally, are there plans in the Galaxy roadmap to include this as a 
> potential feature in future releases?  It seems that many small labs like 
> ours would find this feature quite useful.
> 
> Appreciative of any help you might be able to provide-- keep up the great 
> work Galaxy developers!
> -
> Jason Gallant, Ph.D.
> Postdoctoral Research Associate
> Department of Biology
> BRB 224, 5 Cummington Street
> Boston University
> Boston, MA 02215
> 
> Lab: 617-358-4590
> Office: 617-358-3291
> www.jgallant.org
> 
> 
> 
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Unnamed histories

2012-06-08 Thread Nate Coraor
Hi Jelle and Sarah,

I can't work on this issue right at this moment, but I have created an issue to 
track it so it's not forgotten:

https://bitbucket.org/galaxy/galaxy-central/issue/767/when-remote_user-true-a-lot-of-new-empty

--nate

On Jun 3, 2012, at 7:43 AM, Jelle Scholtalbers wrote:

> Hi Sarah,
> 
> although I do not have the answer to the issue, I had the same problem when I 
> was using remote_user with apache and ldap authentication. 
> I have too little knowledge of session ids/cookies, and it has been a while 
> (>1.5 yr), but somehow galaxy "thinks" you are in a new session and therefore 
> creates a new history.
> 
> Cheers,
> Jelle
> 
> On Fri, Jun 1, 2012 at 11:58 AM, Sarah Maman  
> wrote:
> Dear all,
> 
> Each time a user connects to our local instance of Galaxy, a new history is 
> automatically created. Therefore, the list of "Unnamed history" can become 
> very long.
> Is there a way to reuse the same empty "Unnamed history" to avoid creating a 
> new one every time?
> 
> Thank you in advance,
> Sarah
> 
> 
> 
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/
> 
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] human genome variation tool error

2012-06-08 Thread Nate Coraor
On Jun 4, 2012, at 11:34 AM, Shantanu Pavgi wrote:

> 
> I am getting the following errors in the galaxy log:
> 
> {{{
> Error reading tool from path: human_genome_variation/sift.xml
> Traceback (most recent call last):
>  File "/home/shantanu/galaxy/galaxy-uab/lib/galaxy/tools/__init__.py", line 
> 315, in load_tool_tag_set
>tool = self.load_tool( os.path.join( tool_path, path ), guid=guid )
>  File "/home/shantanu/galaxy/galaxy-uab/lib/galaxy/tools/__init__.py", line 
> 420, in load_tool
>tree = util.parse_xml( config_file )
>  File "/home/shantanu/galaxy/galaxy-uab/lib/galaxy/util/__init__.py", line 
> 105, in parse_xml
>tree = ElementTree.parse(fname)
>  File 
> "/home/shantanu/galaxy/galaxy-uab/eggs/elementtree-1.2.6_20050316-py2.6.egg/elementtree/ElementTree.py",
>  line 859, in parse
>tree.parse(source, parser)
>  File 
> "/home/shantanu/galaxy/galaxy-uab/eggs/elementtree-1.2.6_20050316-py2.6.egg/elementtree/ElementTree.py",
>  line 576, in parse
>source = open(source, "rb")
> IOError: [Errno 2] No such file or directory: 
> './tools/human_genome_variation/sift.xml'
> galaxy.tools ERROR 2012-06-04 09:57:18,028 Error reading tool from path: 
> human_genome_variation/linkToGProfile.xml
> Traceback (most recent call last):
>  File "/home/shantanu/galaxy/galaxy-uab/lib/galaxy/tools/__init__.py", line 
> 315, in load_tool_tag_set
>tool = self.load_tool( os.path.join( tool_path, path ), guid=guid )
>  File "/home/shantanu/galaxy/galaxy-uab/lib/galaxy/tools/__init__.py", line 
> 420, in load_tool
>tree = util.parse_xml( config_file )
>  File "/home/shantanu/galaxy/galaxy-uab/lib/galaxy/util/__init__.py", line 
> 105, in parse_xml
>tree = ElementTree.parse(fname)
>  File 
> "/home/shantanu/galaxy/galaxy-uab/eggs/elementtree-1.2.6_20050316-py2.6.egg/elementtree/ElementTree.py",
>  line 859, in parse
>tree.parse(source, parser)
>  File 
> "/home/shantanu/galaxy/galaxy-uab/eggs/elementtree-1.2.6_20050316-py2.6.egg/elementtree/ElementTree.py",
>  line 576, in parse
>source = open(source, "rb")
> 
> }}}
> 
> This tool was renamed or moved in the latest galaxy-dist update. So I think 
> the error should get resolved after removing the corresponding 'hgv' entries from 
> the tool_conf.xml file. Is that the correct way to resolve it, or do I need to perform 
> any other/additional steps?

That's correct.  These tools are now available through the Tool Shed.

--nate

> 
> 
> --
> Thanks,
> Shantanu
> 
> 
> 
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Proftpd Problem

2012-06-08 Thread Nate Coraor
On Jun 4, 2012, at 8:58 AM, CHEBBI Mohamed Amine wrote:

> 
> 
> -- Forwarded message --
> From: CHEBBI Mohamed Amine 
> Date: 2012/5/25
> Subject: Proftpd Problem
> To: galaxy-dev@lists.bx.psu.edu
> 
> 
>  Hi,
> I'm trying to set up ProFTPD on Debian to enable FTP upload on a locally 
> installed Galaxy. I followed the instructions on this link 
> (http://wiki.g2.bx.psu.edu/Admin/Config/Upload%20via%20FTP) but it doesn't 
> connect when using the user email.
> The proftpd.log shows these messages:
> juin 04 14:43:56 rosalind proftpd[31368] rosalind (193.55.24.4[193.55.24.4]): FTP session opened.
> juin 04 14:43:56 rosalind proftpd[31368] rosalind (193.55.24.4[193.55.24.4]): notice: unable to use '~/' [resolved to '/usr/local/appli/Galaxy/$
> juin 04 14:43:56 rosalind proftpd[31368] rosalind (193.55.24.4[193.55.24.4]): Preparing to chroot to directory '~/'
> juin 04 14:43:56 rosalind proftpd[31368] rosalind (193.55.24.4[193.55.24.4]): chroot to '~/' failed for user 'chebbimam...@hotmail.com': Opérat$
> juin 04 14:43:56 rosalind proftpd[31368] rosalind (193.55.24.4[193.55.24.4]): error: unable to set default root directory
> juin 04 14:43:56 rosalind proftpd[31368] rosalind (193.55.24.4[193.55.24.4]): FTP session closed.
> 
> Could someone help me?

Hi Amine,

Could you post your proftpd.conf, with any sensitive information (e.g. database 
server address/password) removed?

Thanks,
--nate

> Thanks
> Amine
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Galaxy cluster jobs - automatic priority setting

2012-06-08 Thread Nate Coraor
On Jun 4, 2012, at 7:24 AM, Peter Cock wrote:

> Dear all,
> 
> Does Galaxy have any mechanisms to set the priority of jobs
> submitted to the cluster? Specifically I am interested in SGE
> via DRMAA, but this is a general issue. If there is some existing
> code, I might be able to use it for the following situation:
> 
> The specific motivation is for balancing competing BLAST
> searches from multiple users: I want to be able to prioritize
> short jobs (e.g. under 100 queries) over large jobs (tens of
> thousands of queries).
> 
> In the (experimental) task splitting code, I would like to be
> able to give short jobs higher priority (e.g. normal) than large
> jobs (e.g. low priority) based on the size of the split file.
> 
> One idea would be a scaling based on the (split) input file's
> size (in bytes), or perhaps a per-file format size threshold,
> e.g. when splitting FASTA files, 1000 sequences might trigger
> lower priority.
> 
> The advantage of this kind of assessment is that it would also work
> with both of the current split mechanisms, "number_of_parts" and
> "to_size", as well as any hybrid of the two:
> 
>  />
> 
> http://lists.bx.psu.edu/pipermail/galaxy-dev/2012-May/009647.html
> 
> So, is there any existing code in Galaxy that would be helpful
> here - or existing plans in this area?

Hi Peter,

Nothing exists at the moment, short of separate copies of the same tool, one to be used for 
short jobs and one for longer jobs.  It should be possible with John Chilton's 
dynamic job runners[1], which I will accept and merge any day now.

--nate

[1] 
https://bitbucket.org/galaxy/galaxy-central/pull-request/12/dynamic-job-runners
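
The kind of rule that work enables would look something like this (a rough
sketch only; the function signature and model attribute names are assumptions,
not the final merged API):

# rule evaluated by the dynamic job runner to pick a runner URL per job
def blastn_rule( job ):
    input_path = job.input_datasets[0].dataset.file_name
    num_queries = sum( 1 for line in open( input_path ) if line.startswith( ">" ) )
    if num_queries <= 100:
        return "drmaa://-q all.q/"           # short job, default priority
    return "drmaa://-q all.q -p -512/"       # large job, lower SGE priority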

> 
> Thanks,
> 
> Peter
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Implement scaling and load balancing galaxy on a 8-core host with cluster backend

2012-06-08 Thread Nate Coraor
On Jun 4, 2012, at 1:06 AM, Derrick Lin wrote:

> Hi guys,
> 
> Today my institute ran a galaxy training with about 20 people. We didn't 
> implement multiple instances for the galaxy python process. When everyone started 
> submitting jobs, the thread pool filled up very quickly; soon after, 
> galaxy threw a 'worker queue full' error and then became unresponsive. It had to be 
> restarted.
> 
> So we decided to implement the multiple instances overnight in preparation for the 2nd 
> day of the training. Our setup is an 8-core host that runs the galaxy server 
> itself, and a 500-core cluster that handles all the jobs.
> 
> So I am wondering how I should distribute 8 cores for the different roles (web, 
> manager, job handler). In the wiki, the author said he runs six web server 
> processes on an 8-CPU server; I assume he only runs one manager and one job 
> handler. Is one job handler more than enough, even for the public galaxy?

Hi Derrick,

We run 6 job handlers for usegalaxy.org: 3 for typical jobs, 2 for NGS tools, 
and 1 for jobs originating from Trackster.  Also, the processes won't be on-CPU 
constantly, so you can safely run more processes than you have cores.

> My second question: it makes sense to have all web roles listen on a public 
> IP (with different ports). For the manager and job handlers, can I just set them to 
> listen on 127.0.0.1? Or do they have to listen on the same IP as the web roles?

They can listen to any available IP, but I leave them as localhost to prevent 
anyone from bypassing the proxy and accessing them directly.
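
In universe_wsgi.ini that split looks something like the following sketch (the
ports and number of processes are just examples, and the job_manager /
job_handlers options in [app:main] name the corresponding sections):

[server:web0]
use = egg:Paste#http
port = 8080
host = 0.0.0.0
use_threadpool = True

[server:manager]
use = egg:Paste#http
port = 8079
host = 127.0.0.1

[server:handler0]
use = egg:Paste#http
port = 8090
host = 127.0.0.1

# in [app:main]:
# job_manager = manager
# job_handlers = handler0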

--nate

> 
> Regards,
> Derrick
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Galaxy update / -- reload option

2012-06-08 Thread Nate Coraor
On Jun 1, 2012, at 5:31 AM, Sarah Maman wrote:

> Hello Nate,
> 
> As agreed at the conference in Lyon, here's a screen print of my instance of 
> Galaxy (please look at the end of this mail), but the layout is not exactly 
> the same as that seen in the instance of Galaxy on the public server.

Hi Sarah,

This error occurs when the cluster job's standard output and standard error 
files aren't written where Galaxy expects them.  The cluster job runners will 
attempt to put them in database/pbs/ by default (configurable via 
cluster_files_dir in universe_wsgi.ini).  If you can successfully execute this 
on the command line:

$ qsub -o /path/to/galaxy/database/pbs/test.o -e /path/to/galaxy/database/pbs/test.e test.sh

And have those test.o and test.e files exist once the job is complete, then it 
should work in Galaxy.

> To further explain my approach, here are the steps of my local installation:
> 1. In January, I installed a local instance of Galaxy (connected to our 
> cluster) from a source repository; tarball available here: 
> http://dist.g2.bx.psu.edu/galaxy-dist.b258de1e6cea.tar.gz
> 2. In April / May, I installed Mercurial, recovered the sources with hg clone 
> (Date: Wed March 07 2012, changeset: 6799:40f1816d6857), then updated the 
> galaxy sources with all changes and settings made between January and March. 
> Therefore, the universe_wsgi.ini configuration file from March was more complete 
> than the one from January. Can I just make a tkdiff of these two 
> configuration files (January vs. March) and copy / paste the added code?

Yes, you can diff and add the new options from the new version of the config 
file.

> Moreover, on my instance, if I use the --reload option with the run.sh script, 
> port 80 is no longer available. So I must kill the running python process 
> before running the run.sh script. Therefore, the service is interrupted for 
> users. Do you have an idea how to avoid this interruption?

Local jobs will always be lost when you restart the Galaxy server process.  The 
only solution to this problem at the moment is to run all tools on a cluster.  
One trick you can use if you'd like to use your Galaxy server to run most jobs 
is to configure your Galaxy server as a cluster node and submit tools to it by 
default.

> Concerning bug reports, is it possible to configure my local instance of 
> Galaxy so that the 'report bug' mail is sent to the administrator (defined in 
> the galaxy configuration file universe_wsgi.ini) instead of being sent to the Galaxy team?

Yes, in universe_wsgi.ini, see smtp_server, as well as smtp_username and 
smtp_password if necessary, and error_email_to.
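
For example (the values here are placeholders):

smtp_server = smtp.example.org
smtp_username = galaxy-noreply
smtp_password = some_password
error_email_to = galaxy-admin@example.org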

--nate

> 
> Thank you in advance,
> Sarah Maman
> 
> 
> 
> 


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] tool_dependency_dir not being picked up?

2012-06-08 Thread Nate Coraor
On May 31, 2012, at 11:19 PM, Ann Black-Ziegelbein wrote:

> Hi - 
> 
> I have been trying to run a quick test using the tool_dependency_dir 
> configuration option to prove out how it works ... but unfortunately I can't 
> get it to function right.  I was hoping someone could point out my error.  I 
> have written a simple tool in galaxy, version_test, that will just echo out a 
> version number to a text file so I can test out tool dependencies and install 
> locations.  But my required package is not getting resolved by galaxy 
> 
> Snippet from my configuration file: 
> # Directory which contains dependent tool binaries or a env.sh to set env 
> vars in order to find specific versions. 
> tool_dependency_dir = /opt 
> 
> My simple tool (the XML tags were stripped by the mail archive; approximately): 
> 
> <tool id="version_test" name="version_test">
>   <requirements>
>     <requirement type="package">galaxy_test</requirement>
>   </requirements>
>   <description>provides simple stats on BAM files</description>
>   <command>test.sh "$output1"</command>
>   <inputs/>
>   <outputs>
>     <data format="txt" name="output1"/>
>   </outputs>
> </tool>
> 
> My filesystem: 
> 
> [galaxy@galaxy-0-4:galaxy-dist]$ ls -lat /opt/galaxy_test/ 
> total 20 
> drwxr-xr-x  3 root root 4096 May 16 14:10 2.0 
> drwxr-xr-x  3 root root 4096 May 16 14:09 1.0 
> drwxr-xr-x  4 root root 4096 May 16 14:05 . 
> -rw-r--r--  1 root root   50 May 16 14:05 env.sh 
> drwxr-xr-x 72 root root 4096 May 16 14:04 .. 
> 
> 
> My env file: 
> #!/bin/bash 
> PATH=/opt/galaxy_test/1.0/bin:$PATH 
> export PATH 
> 
> 
> Galaxy log messages when invoking my test tool: 
> 
> galaxy.tools DEBUG 2012-05-31 21:59:17,349 Dependency galaxy_test 
> galaxy.tools WARNING 2012-05-31 21:59:17,349 Failed to resolve dependency on 
> 'galaxy_test', ignoring 
> 
> 
> If I manually source the env.sh file, my test.sh is found and executes 
> appropriately. 
> 
> Where am I going wrong? 

Hi Ann,

env.sh should live inside the version directory, and you'll also need a 
'default' symlink since your requirement tag doesn't have a 'version' 
attribute.  e.g.:

% ln -s 1.0 default
% mv env.sh 1.0
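
The layout under tool_dependency_dir = /opt should then look roughly like:

/opt/galaxy_test/
    1.0/
        bin/
        env.sh          <- sets PATH=/opt/galaxy_test/1.0/bin:$PATH
    2.0/
    default -> 1.0      <- used because the requirement tag has no version attribute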

--nate

> 
> Thanks much! 
> 
> Ann
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Error starting Galaxy after upgrade.

2012-06-08 Thread Nate Coraor
On May 30, 2012, at 7:18 PM, Luobin Yang wrote:

> Hi,
> 
> I upgraded Galaxy to the latest version using the command "hg pull -u"; then, 
> when I ran "sh run.sh", I got the following error messages:
> 
> Initializing openid_conf.xml from openid_conf.xml.sample
> Initializing tool-data/bowtie2_indices.loc from bowtie2_indices.loc.sample
> Some eggs are out of date, attempting to fetch...
> Traceback (most recent call last):
>   File "./scripts/fetch_eggs.py", line 30, in 
> c.resolve() # Only fetch eggs required by the config
>   File "/home/galaxy/galaxy-dist/lib/galaxy/eggs/__init__.py", line 345, in 
> resolve
> egg.resolve()
>   File "/home/galaxy/galaxy-dist/lib/galaxy/eggs/__init__.py", line 195, in 
> resolve
> return self.version_conflict( e.args[0], e.args[1] )
>   File "/home/galaxy/galaxy-dist/lib/galaxy/eggs/__init__.py", line 226, in 
> version_conflict
> r = pkg_resources.working_set.resolve( ( dist.as_requirement(), ), env, 
> egg.fetch )
>   File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 546, in 
> resolve
> raise DistributionNotFound(req)
> pkg_resources.DistributionNotFound: mercurial==2.1.2
> Fetch failed.
> 
> 
> What happened here?

Hi Luobin,

This was most likely due to our recent downtime from an environmental failure 
in our data center.  Please let us know if you're still having problems 
fetching the mercurial egg.

--nate

> 
> Thanks,
> Luobin
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Tool Shed Workflow

2012-06-08 Thread John Chilton
Hello Greg,

Thanks for the prompt and detailed response (though it did make me
sad). I think deploying tested, static components and configurations
to production environments and having production environments not
depending on outside services (like the tool shed) should be
considered best practices.

Oh well, I guess. Would there be a way to at least automate the
pulling in of tools? For instance, would it make sense to tweak
InstallManager to parse a new kind of migration file that is a lot
like the "official" migration files, but with the sections defined in
the file. For this new kind of migration,  the InstallManager would
then import everything in the file and not just the tools that are
also in a tool_conf? Does that make sense? If yes, I imagine it could
be modified to handle updates the same way?

Rephrased, I guess the idea would be to have the sequence of official
galaxy migrations that check tool_conf, and then have a sequence of
migrations defined by the deployer that could be used to install new
tools or update existing ones.
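
Concretely, I am imagining a deployer-defined file modeled on the
scripts/migrate_tools/000N_tools.xml format, something like the sketch below
(the repository name, owner and changeset_revision are made-up placeholders,
and the attributes are only my approximation of the official format):

<?xml version="1.0"?>
<toolshed name="toolshed.g2.bx.psu.edu">
    <repository name="qiime_wrappers" owner="some_owner" changeset_revision="0123456789ab">
        <tool id="pick_otus" version="1.0.0" file="pick_otus.xml" />
    </repository>
</toolshed>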

My concern isn't just with the dev to production transition, it is
also the ability to sort of programmatically define Galaxy
installations the way I am doing with the galaxy-vm-launcher
(https://bitbucket.org/jmchilton/galaxy-vm-launcher) or the way
mi-deployment works.

Thanks again for your time and patience in explaining these things to me,
-John

On Fri, Jun 8, 2012 at 4:19 AM, Greg Von Kuster  wrote:
> Hi John,
>
> On Jun 7, 2012, at 11:55 PM, John Chilton wrote:
>
>> I have read through the documentation a couple times, but I still have
>> a few questions about the recent tool shed enhancements.
>>
>> At MSI we have a testing environment and a production environment and
>> I want to make sure the tool versions and configurations don't get out
>> of sync, I would also like to test everything in our testing
>> environment before it reaches production.
>>
>> Is there a recommended way to accomplish this rather than just
>> manually repeating the same set of UI interactions twice?
>>
>> Can I just import tools through the testing UI and run the
>> ./scripts/migrate_tools/ scripts on our testing repository and
>> then move the resulting migrated_tools_conf.xml and
>> integrated_tool_panel.xml files into production? I have follow up
>> questions, but I will wait for a response on this point.
>
> Tools that used to be in the Galaxy distribution but have been moved to the 
> main Galaxy tool shed are automatically installed when you start up your 
> Galaxy server and presented with the option of running the migration script 
> to automatically install the tools that were migrated in the specific Galaxy 
> distribution release.  If you choose to install the tools, they are installed 
> only in that specific Galaxy instance.  Installation produces mercurial 
> repositories that include the tools on disk in your Galaxy server 
> environment.  Several other things are produced as well, including database 
> records for the installation.  Each Galaxy instance consists of its own 
> separate set of components, so this installation process must be done for each 
> instance.  The installation is fully automatic, requiring little interaction 
> on the part of the Galaxy admin, and doesn't require much time, so performing 
> the process for each Galaxy instance should not be too intensive.  Also, the 
> tools that are installed into each Galaxy instance's tool panel are only 
> those tools that were originally defined in the tool panel configuration file 
> (tool_conf.xml).  This approach ensures that a Galaxy 
> instance having different tools defined will not be altered by the migration 
> process.
>
>
>>
>> Also as you are removing tools from Galaxy and placing them into our
>> tool shed, what is the recommended course of actions for deployers
>> that have made local minor tweaks to those tool configs and scripts
>> and adapt them to our local environments? Along the same lines, what
>> is the recommended course of action if we need to make minor tweaks to
>> tools pulled into through the UI to adapt them to our institution.
>
>
> In both cases you should upload your proprietary tools to either a local 
> Galaxy tool shed that you administer, or the main Galaxy tool shed if you 
> want.  You can choose to not execute any of the tool migration scripts, so 
> the Galaxy tools that were migrated from the distribution will not be 
> installed into your Galaxy environment.  You can use the Galaxy admin UI to 
> install your proprietary versions of the migrated tools from the tool shed in 
> which you chose to upload and store them.  New versions of the tools can be 
> uploaded to respective tool shed repositories over time.
>
>
>>
>> Thanks for your time,
>> -John
>>
>> 
>> John Chilton
>> Senior Software Developer
>> University of Minnesota Supercomputing Institute
>> Office: 612-625-0917
>> Cell: 612-226-9223
>> _

Re: [galaxy-dev] How to upload local files in Galaxy

2012-06-08 Thread Mehmet Belgin
Hi Alban,

Yes, we would be very much interested in this tool, since our current workaround 
is not a desirable one (a python mini http server!). 

Thank you for sharing it :)

-Mehmet




On Jun 8, 2012, at 4:36 AM, Alban Lermine wrote:

> Hi,
> 
> There is also another solution if you don't want to let users be able to 
> create libraries.
> We have implemented this solution in our local production server here at 
> Institut Curie.
> We added a tool called "local upload file" that takes as input parameters the 
> name of the dataset, the type of file, and the path to the file you want to 
> upload.
> At execution time, the bash script behind it removes the output dataset 
> created by Galaxy and replaces it with a symbolic link to the file (so you 
> don't duplicate files).
> Just to warn you, there could be a security failure with this tool, because 
> if it is executed locally, it will run as the Galaxy application user, which can 
> potentially have more rights on files than the current user.
> To work around this, we execute this tool on our local cluster (with 
> pbs/torque) as the current user (so if the current user tries to upload a file 
> that he doesn't own, the tool will not be able to create the symbolic link 
> and the user will receive an error in the Galaxy interface).
> 
> Tell me if you're interested in such a tool, and I'll send you the xml file and 
> the associated bash script.
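
In essence the script behind such a tool boils down to something like this (a
hedged sketch, not the actual xml/bash pair offered above; the argument order
is an assumption):

#!/bin/bash
# upload_local_file.sh <path_chosen_by_user> <output_dataset_path_created_by_galaxy>
set -e
src="$1"
out="$2"
# replace the placeholder dataset Galaxy created with a symlink, so the file is not duplicated
rm -f "$out"
ln -s "$src" "$out"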
> 
> Bests,
> 
> Alban
> 
> 
> 
>> On 07/06/2012 20:11, Mehmet Belgin wrote:
>> 
>> Brad, 
>> 
>> Thank you for your fast reply! Looks like "library_import_dir" is for admins 
>> and there is another library option for users. I will try with that one and 
>> see if the files appear in the GUI.
>> 
>> Thanks!
>> 
>> 
>> =
>> Mehmet Belgin, Ph.D. (mehmet.bel...@oit.gatech.edu)
>> Scientific Computing Consultant | OIT - Academic and Research Technologies
>> Georgia Institute of Technology
>> 258 Fourth Street, Rich Building, Room 326 
>> Atlanta, GA  30332-0700
>> Office: (404) 385-0665
>> 
>> 
>> 
>> 
>> On Jun 7, 2012, at 2:04 PM, Langhorst, Brad wrote:
>> 
>>> Mehmet:
>>> 
>>> It's not important how the files get there, they could be moved via ftp, 
>>> scp, cp, smb - whatever.
>>> Galaxy will use that directory to import from no matter how the files 
>>> arrive.
>>> 
>>> I found that confusing at first too.
>>> 
>>> Brad
>>> On Jun 7, 2012, at 1:55 PM, Mehmet Belgin wrote:
>>> 
 Hi Everyone,
 
 I am helping a research group to use Galaxy on our clusters. Unfortunately 
 I have no previous experience with Galaxy, but I am learning along the way. We 
 are almost there, but cannot figure out one particular issue. This is 
 about the configuration of Galaxy, so I thought the developers list is a better 
 place to ask than the user list.
 
 The galaxy web interface allows for either copy/paste of text, or a URL. 
 Unfortunately we cannot set up an FTP server as instructed due to 
 restrictions on the cluster. The files we are trying to upload are large; 
 around 2GB in size. It does not make sense to upload these files to a 
 remote location (which we can provide a URL for) and download them back, 
 since the data and galaxy are on the same system. However, I could not 
 find a way to open these files locally. 
 
 I did some reading, and hoped that "library_import_dir" in 
 "universe_wsgi.ini" would do the trick, but it didn't. Therefore, I
  will really appreciate any suggestions.
 
 Thanks a lot in advance!
 
 -Mehmet
 
 
 
 
 
 ___
 Please keep all replies on the list by using "reply all"
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:
 
  http://lists.bx.psu.edu/
>>> 
>>> --
>>> Brad Langhorst
>>> langho...@neb.com
>>> 978-380-7564
>>> 
>>> 
>>> 
>>> 
>> 
>> 
>> ___
>> Please keep all replies on the list by using "reply all"
>> in your mail client.  To manage your subscriptions to this
>> and other Galaxy lists, please use the interface at:
>> 
>>   http://lists.bx.psu.edu/
> 
> 
> -- 
> Alban Lermine 
> Unité 900 : Inserm - Mines ParisTech - Institut Curie
> « Bioinformatics and Computational Systems Biology of Cancer »
> 11-13 rue Pierre et Marie Curie (1er étage) - 75005 Paris - France
> Tel : +33 (0) 1 56 24 69 84
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Adding mm10 to available genomes

2012-06-08 Thread Larry Helseth
I'm trying to add mm10 to a Galaxy instance built using Mercurial on May
31st (using Postgres).  I've added the .fa file, prepared bwa, bowtie,
samtools indexes, etc., edited all of the .loc files for individual tools
and restarted the server daemon.  mm10 appears in the list of genomes
available for these tools but doesn't appear in the list of all genomes
when I try uploading a file (Get Data/Upload File from your computer) OR
when I try adding the file through the admin library interface
(Administration/Data/Manage libraries/Add datasets).  I tried editing
~/galaxy-dist/scripts/loc_files/create_all_fasta_loc.py to add a line for
"mm10". restarting Galaxy but that genome still doesn't appear in the list.

Do I need to edit an entry in Postgres?  When I restart Galaxy without the
daemon mode (or tail the paster.log) I see that clicking on Get Data/Upload
File is calling GET /tool_runner?tool_id=upload1 for the /root/tool_menu.

Am I supposed to upload new genomes through the Library?
screencast.g2.bx.psu.edu is still down so I can't watch the video tutorials
about configuring custom genomes.  I'd appreciate any suggestions.  Thanks
in advance!

Adios,
Larry


Larry Helseth, Ph.D.

Center for Molecular Medicine

NorthShore University HealthSystem

Evanston, IL
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] How to upload local files in Galaxy

2012-06-08 Thread Fields, Christopher J
On Jun 8, 2012, at 8:42 AM, Peter Cock wrote:

> On Fri, Jun 8, 2012 at 2:38 PM, Raj Ayyampalayam  wrote:
>> Hello,
>> 
>> I am interested in this "local upload file" tool. Can you please send the
>> files to me as well.
>> 
>> Thanks.
>> -Raj
> 
> This seems a potentially very useful tool, so putting it on the Galaxy Tool
> Shed seems like a better idea (with suitable warnings in the documentation).
> 
> Peter

I agree with making the tool more widely available.  Just to note, though, Nate 
and I had a discussion on list about this a while back.  As Brad mentioned, if 
you follow the guidelines for FTP import, any method used (not just FTP, but 
scp, sftp, grid-ftp, etc.) to get data into the 'FTP' import folder works as 
long as permissions on the data are set so the galaxy user on the cluster end 
can read the data.  We had our local cluster admins set up a link to the user's 
galaxy import folder in their home directory, so users can basically do this:

scp mydata.fastq.gz usern...@biocluster.igb.illinois.edu:galaxy-upload

At the moment this is importing directly to the cluster that galaxy resides on, 
but we could set this up to import server side.  Nate had also indicated the 
'FTP-ness' in the documentation and web page would be genericized, but this 
obviously hasn't happened yet…

Aside: are symlinks working with FTP imports?  We have a few users who would 
like the ability to work on the command line and within Galaxy w/o having to 
copy large files, so having a single data source would be nice.

chris
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Toolshed error with proprietary datatypes

2012-06-08 Thread Greg Von Kuster
Hi Bjoern,

What revision are you seeing this in?  I don't think this issue exists in 
Galaxy central's tip, but let me know.

Thanks!

Greg

On Jun 8, 2012, at 10:44 AM, Björn Grüning wrote:

> Hi Greg,
> 
> thanks! I can confirm that it is working now. I can upload new
> datatypes.
> What is not working is the update of such tools from the toolshed.
> I get the following error.
> 
> URL:
> http://localhost/toolshed/repository/check_for_updates?galaxy_url=http://localhost/&name=molecule_datatypes&owner=admin&changeset_revision=2a638740087d&webapp=galaxy&no_reset=true
> Module paste.exceptions.errormiddleware:143 in __call__
>>> app_iter = self.application(environ, start_response)
> Module paste.debug.prints:98 in __call__
>>> environ, self.app)
> Module paste.wsgilib:539 in intercept_output
>>> app_iter = application(environ, replacement_start_response)
> Module paste.recursive:80 in __call__
>>> return self.application(environ, start_response)
> Module paste.httpexceptions:632 in __call__
>>> return self.application(environ, start_response)
> Module galaxy.web.framework.base:160 in __call__
>>> body = method( trans, **kwargs )
> Module galaxy.webapps.community.controllers.repository:684 in
> check_for_updates
>>> url += '&latest_ctx_rev=%s' % str( latest_ctx.rev() )
> UnboundLocalError: local variable 'latest_ctx' referenced before
> assignment
> 
> Removing and installing works fine.
> If you need more information to track that bug down, let me know.
> 
> Cheers,
> Bjoern
> 
> 
>> Hello Bjoern,
>> 
>> This should be resolved in change set 7211:16a93eb6eaf6, which is available 
>> from our central repository.  Thanks for reporting this, and please let me 
>> know if you encounter additional issues.
>> 
>> Greg Von Kuster
>> 
>> 
>> On May 26, 2012, at 10:51 AM, Björn Grüning wrote:
>> 
>>> Hi,
>>> 
>>> has anyone encountered that problem or has an idea how to solve it?
>>> 
>>> Traceback (most recent call last):
>>> File "/home/ctb/galaxy-dist/lib/galaxy/web/buildapp.py", line 82, in
>>> app_factory
>>>   app = UniverseApplication( global_conf = global_conf, **kwargs )
>>> File "/home/ctb/galaxy-dist/lib/galaxy/app.py", line 64, in __init__
>>>   self.installed_repository_manager.load_proprietary_datatypes()
>>> File "/home/ctb/galaxy-dist/lib/galaxy/tool_shed/__init__.py", line
>>> 26, in load_proprietary_datatypes
>>>   load_datatype_items( self.app, tool_shed_repository,
>>> relative_install_dir )
>>> File "/home/ctb/galaxy-dist/lib/galaxy/util/shed_util.py", line 1036,
>>> in load_datatype_items
>>>   app.datatypes_registry.load_datatype_converters( app.toolbox,
>>> installed_repository_dict=repository_dict, deactivate=deactivate )
>>> AttributeError: 'UniverseApplication' object has no attribute 'toolbox'
>>> 
>>> All files are in one repository ... xml, the python class and the
>>> converters. I was able to successfully upload it to our test-toolshed
>>> but after installing it (also successfully) galaxy is not able to start.
>>> Also, the converters are recognised as tools in the toolshed; is that
>>> intended?
>>> 
>>> Thanks!
>>> Bjoern
>>> 
>>> ___
>>> Please keep all replies on the list by using "reply all"
>>> in your mail client.  To manage your subscriptions to this
>>> and other Galaxy lists, please use the interface at:
>>> 
>>> http://lists.bx.psu.edu/
>> 
> 
> -- 
> Björn Grüning
> Albert-Ludwigs-Universität Freiburg
> Institute of Pharmaceutical Sciences
> Pharmaceutical Bioinformatics
> Hermann-Herder-Strasse 9
> D-79104 Freiburg i. Br.
> 
> Tel.:  +49 761 203-4872
> Fax.:  +49 761 203-97769
> E-Mail: bjoern.gruen...@pharmazie.uni-freiburg.de
> Web: http://www.pharmaceutical-bioinformatics.org/
> 


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Toolshed error with proprietary datatypes

2012-06-08 Thread Björn Grüning
Hi Greg,

thanks! I can confirm that it is working now. I can upload new
datatypes.
What is not working is the update of such tools from the toolshed.
I get the following error.

URL:
http://localhost/toolshed/repository/check_for_updates?galaxy_url=http://localhost/&name=molecule_datatypes&owner=admin&changeset_revision=2a638740087d&webapp=galaxy&no_reset=true
Module paste.exceptions.errormiddleware:143 in __call__
>>  app_iter = self.application(environ, start_response)
Module paste.debug.prints:98 in __call__
>>  environ, self.app)
Module paste.wsgilib:539 in intercept_output
>>  app_iter = application(environ, replacement_start_response)
Module paste.recursive:80 in __call__
>>  return self.application(environ, start_response)
Module paste.httpexceptions:632 in __call__
>>  return self.application(environ, start_response)
Module galaxy.web.framework.base:160 in __call__
>>  body = method( trans, **kwargs )
Module galaxy.webapps.community.controllers.repository:684 in
check_for_updates
>>  url += '&latest_ctx_rev=%s' % str( latest_ctx.rev() )
UnboundLocalError: local variable 'latest_ctx' referenced before
assignment

Removing and installing works fine.
If you need more information to track that bug down, let me know.

Cheers,
Bjoern


> Hello Bjoern,
> 
> This should be resolved in change set 7211:16a93eb6eaf6, which is available 
> from our central repository.  Thanks for reporting this, and please let me 
> know if you encounter additional issues.
> 
> Greg Von Kuster
> 
> 
> On May 26, 2012, at 10:51 AM, Björn Grüning wrote:
> 
> > Hi,
> > 
> > has anyone encountered that problem or has an idea how to solve it?
> > 
> > Traceback (most recent call last):
> >  File "/home/ctb/galaxy-dist/lib/galaxy/web/buildapp.py", line 82, in
> > app_factory
> >app = UniverseApplication( global_conf = global_conf, **kwargs )
> >  File "/home/ctb/galaxy-dist/lib/galaxy/app.py", line 64, in __init__
> >self.installed_repository_manager.load_proprietary_datatypes()
> >  File "/home/ctb/galaxy-dist/lib/galaxy/tool_shed/__init__.py", line
> > 26, in load_proprietary_datatypes
> >load_datatype_items( self.app, tool_shed_repository,
> > relative_install_dir )
> >  File "/home/ctb/galaxy-dist/lib/galaxy/util/shed_util.py", line 1036,
> > in load_datatype_items
> >app.datatypes_registry.load_datatype_converters( app.toolbox,
> > installed_repository_dict=repository_dict, deactivate=deactivate )
> > AttributeError: 'UniverseApplication' object has no attribute 'toolbox'
> > 
> > All files are in one repository ... xml, the python-class and the
> > converters. I was able to successfully upload it to our test-toolshed
> > but after installing it (also successfully) galaxy is not able to start.
> > Also the converters are recognised as tools in the toolshed is that
> > intended?
> > 
> > Thanks!
> > Bjoern
> > 
> > ___
> > Please keep all replies on the list by using "reply all"
> > in your mail client.  To manage your subscriptions to this
> > and other Galaxy lists, please use the interface at:
> > 
> >  http://lists.bx.psu.edu/
> 

-- 
Björn Grüning
Albert-Ludwigs-Universität Freiburg
Institute of Pharmaceutical Sciences
Pharmaceutical Bioinformatics
Hermann-Herder-Strasse 9
D-79104 Freiburg i. Br.

Tel.:  +49 761 203-4872
Fax.:  +49 761 203-97769
E-Mail: bjoern.gruen...@pharmazie.uni-freiburg.de
Web: http://www.pharmaceutical-bioinformatics.org/

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] How to upload local files in Galaxy

2012-06-08 Thread Alban Lermine
Re,

My bad: upload_local_tool -> upload_local_file tool

Bests,

Alban

> Hi all,
>
> Despite the possible security failure, I push the upload_local_tool in
> main toolshed under the "data source" repository.
>
> I notice about the possible security failure and the way to fix it.
>
> I also have a simple 5 minutes method to modify existing tools to
> allow users to choose the directory where to write the outputs (to
> keep working with your own tree whitout breaking Galaxy database
> links). Tell me if you are interested in this method.. Bests,
>
> Alban
>
>
>
>> Hello,
>>
>> I am interested in this "local upload file" tool. Can you please send
>> the files to me as well.
>>
>> Thanks.
>> -Raj
>>
>> On 6/8/12 4:36 AM, Alban Lermine wrote:
>>> Hi,
>>>
>>> There is also another solution if you don't want let users being
>>> able to create libraries.
>>> We have implemented this solution in our local production server
>>> here at Institut Curie.
>>> We add a tool call "local upload file", that takes as entry
>>> parameters the name of the dataset, the type of file and the path to
>>> the file you want to upload.
>>> At the execution, the bash script behind will remove the output
>>> dataset created by Galaxy and replaced it by a symbolic link to the
>>> file (so you don't duplicate files).
>>> Just to warn you, there could be a security failure with this tool,
>>> because if it is locally executed, it will be by the Galaxy
>>> applicative user that can potentially have more rights on file than
>>> the current user.
>>> To go through this failure, we execute this tool on our local
>>> cluster (with pbs/torque) as the current user (so if the current
>>> user try to upload a file that he doesn't owned, the tool will not
>>> be able to create the symbolic link and the user will receive an
>>> error on the Galaxy interface).
>>>
>>> Tell me if you're interested in such a tool, and I send you the xml
>>> file and bash script associated.
>>>
>>> Bests,
>>>
>>> Alban
>>>
>>>
>>>
>>> Le 07/06/2012 20:11, Mehmet Belgin a écrit :
 Brad, 

 Thank you for your fast reply! Looks like "library_import_dir" is
 for admins and there is another library option for users. I will
 try with that one and see if the files appear in the GUI.

 Thanks!


 =
 Mehmet Belgin, Ph.D. (mehmet.bel...@oit.gatech.edu
 )
 Scientific Computing Consultant | OIT - Academic and Research
 Technologies
 Georgia Institute of Technology
 258 Fourth Street, Rich Building, Room 326 
 Atlanta, GA  30332-0700
 Office: (404) 385-0665




 On Jun 7, 2012, at 2:04 PM, Langhorst, Brad wrote:

> Mehmet:
>
> It's not important how the files get there, they could be moved
> via ftp, scp, cp, smb - whatever.
> Galaxy will use that directory to import from no matter how the
> files arrive.
>
> I found that confusing at first too.
>
> Brad
> On Jun 7, 2012, at 1:55 PM, Mehmet Belgin wrote:
>
>> Hi Everyone,
>>
>> I am helping a research group to use Galaxy on our clusters.
>> Unfortunately I have no previous experience with Galaxy, but
>> learning along the way. We are almost there, but cannot figure
>> out one particular issue. This is about configuration of Galaxy,
>> so I thought developers list is a better place to submit than the
>> user list.
>>
>> The galaxy web interface allows for either copy/paste of text, or
>> a URL. Unfortunately we cannot setup a FTP server as instructed
>> due to restrictions on the cluster. The files we are trying to
>> upload are large; around 2GB in size. It does not make sense to
>> upload these files to a remote location (which we can provide an
>> URL for) and download them back, since the data and galaxy are on
>> the same system. However, I could not find a way to open these
>> files locally. 
>>
>> I did some reading, and hoped that "library_import_dir" in
>> "universe_wsgi.ini" would do the trick, but it didn't. Therefore,
>> I will really appreciate any suggestions.
>>
>> Thanks a lot in advance!
>>
>> -Mehmet
>>
>>
>>
>>
>>
>> ___
>> Please keep all replies on the list by using "reply all"
>> in your mail client.  To manage your subscriptions to this
>> and other Galaxy lists, please use the interface at:
>>
>>  http://lists.bx.psu.edu/
>
> --
> Brad Langhorst
> langho...@neb.com 
> 978-380-7564
>
>
>
>


 ___
 Please keep all replies on the list by using "reply all"
 in your mail client.  To manage your subscriptions to this
 and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] How to upload local files in Galaxy

2012-06-08 Thread Alban Lermine
Hi all,

Despite the possible security failure, I have pushed the upload_local_tool to the
main toolshed under the "data source" repository.

I noted the possible security failure and the way to fix it.

I also have a simple five-minute method to modify existing tools to allow
users to choose the directory where the outputs are written (so you can keep
working with your own tree without breaking Galaxy database links). Tell
me if you are interested in this method.

Bests,

Alban



> Hello,
>
> I am interested in this "local upload file" tool. Can you please send
> the files to me as well.
>
> Thanks.
> -Raj
>
> On 6/8/12 4:36 AM, Alban Lermine wrote:
>> Hi,
>>
>> There is also another solution if you don't want let users being able
>> to create libraries.
>> We have implemented this solution in our local production server here
>> at Institut Curie.
>> We add a tool call "local upload file", that takes as entry
>> parameters the name of the dataset, the type of file and the path to
>> the file you want to upload.
>> At the execution, the bash script behind will remove the output
>> dataset created by Galaxy and replaced it by a symbolic link to the
>> file (so you don't duplicate files).
>> Just to warn you, there could be a security failure with this tool,
>> because if it is locally executed, it will be by the Galaxy
>> applicative user that can potentially have more rights on file than
>> the current user.
>> To go through this failure, we execute this tool on our local cluster
>> (with pbs/torque) as the current user (so if the current user try to
>> upload a file that he doesn't owned, the tool will not be able to
>> create the symbolic link and the user will receive an error on the
>> Galaxy interface).
>>
>> Tell me if you're interested in such a tool, and I send you the xml
>> file and bash script associated.
>>
>> Bests,
>>
>> Alban
>>
>>
>>
>> Le 07/06/2012 20:11, Mehmet Belgin a écrit :
>>> Brad, 
>>>
>>> Thank you for your fast reply! Looks like "library_import_dir" is
>>> for admins and there is another library option for users. I will try
>>> with that one and see if the files appear in the GUI.
>>>
>>> Thanks!
>>>
>>>
>>> =
>>> Mehmet Belgin, Ph.D. (mehmet.bel...@oit.gatech.edu
>>> )
>>> Scientific Computing Consultant | OIT - Academic and Research
>>> Technologies
>>> Georgia Institute of Technology
>>> 258 Fourth Street, Rich Building, Room 326 
>>> Atlanta, GA  30332-0700
>>> Office: (404) 385-0665
>>>
>>>
>>>
>>>
>>> On Jun 7, 2012, at 2:04 PM, Langhorst, Brad wrote:
>>>
 Mehmet:

 It's not important how the files get there, they could be moved via
 ftp, scp, cp, smb - whatever.
 Galaxy will use that directory to import from no matter how the
 files arrive.

 I found that confusing at first too.

 Brad
 On Jun 7, 2012, at 1:55 PM, Mehmet Belgin wrote:

> Hi Everyone,
>
> I am helping a research group to use Galaxy on our clusters.
> Unfortunately I have no previous experience with Galaxy, but
> learning along the way. We are almost there, but cannot figure out
> one particular issue. This is about configuration of Galaxy, so I
> thought developers list is a better place to submit than the user
> list.
>
> The galaxy web interface allows for either copy/paste of text, or
> a URL. Unfortunately we cannot setup a FTP server as instructed
> due to restrictions on the cluster. The files we are trying to
> upload are large; around 2GB in size. It does not make sense to
> upload these files to a remote location (which we can provide an
> URL for) and download them back, since the data and galaxy are on
> the same system. However, I could not find a way to open these
> files locally. 
>
> I did some reading, and hoped that "library_import_dir" in
> "universe_wsgi.ini" would do the trick, but it didn't. Therefore,
> I will really appreciate any suggestions.
>
> Thanks a lot in advance!
>
> -Mehmet
>
>
>
>
>
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
>
>  http://lists.bx.psu.edu/

 --
 Brad Langhorst
 langho...@neb.com 
 978-380-7564




>>>
>>>
>>> ___
>>> Please keep all replies on the list by using "reply all"
>>> in your mail client.  To manage your subscriptions to this
>>> and other Galaxy lists, please use the interface at:
>>>
>>>   http://lists.bx.psu.edu/
>>
>>
>> -- 
>> Alban Lermine 
>> Unité 900 : Inserm - Mines ParisTech - Institut Curie
>> « Bioinformatics and Computational Systems Biology of Cancer »
>> 11-13 rue Pierre et Marie Curie (1er étage) - 75005 Paris - France

Re: [galaxy-dev] How to upload local files in Galaxy

2012-06-08 Thread Peter Cock
On Fri, Jun 8, 2012 at 2:38 PM, Raj Ayyampalayam  wrote:
> Hello,
>
> I am interested in this "local upload file" tool. Can you please send the
> files to me as well.
>
> Thanks.
> -Raj

This seems a potentially very useful tool, so putting it on the Galaxy Tool
Shed seems like a better idea (with suitable warnings in the documentation).

Peter
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] How to upload local files in Galaxy

2012-06-08 Thread Raj Ayyampalayam

Hello,

I am interested in this "local upload file" tool. Can you please send 
the files to me as well?


Thanks.
-Raj

On 6/8/12 4:36 AM, Alban Lermine wrote:

Hi,

There is also another solution if you don't want let users being able 
to create libraries.
We have implemented this solution in our local production server here 
at Institut Curie.
We add a tool call "local upload file", that takes as entry parameters 
the name of the dataset, the type of file and the path to the file you 
want to upload.
At the execution, the bash script behind will remove the output 
dataset created by Galaxy and replaced it by a symbolic link to the 
file (so you don't duplicate files).
Just to warn you, there could be a security failure with this tool, 
because if it is locally executed, it will be by the Galaxy 
applicative user that can potentially have more rights on file than 
the current user.
To go through this failure, we execute this tool on our local cluster 
(with pbs/torque) as the current user (so if the current user try to 
upload a file that he doesn't owned, the tool will not be able to 
create the symbolic link and the user will receive an error on the 
Galaxy interface).


Tell me if you're interested in such a tool, and I send you the xml 
file and bash script associated.


Bests,

Alban



On 07/06/2012 20:11, Mehmet Belgin wrote:

Brad,

Thank you for your fast reply! Looks like "library_import_dir" is for 
admins and there is another library option for users. I will try with 
that one and see if the files appear in the GUI.
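
(If I'm reading the sample config right, the settings in question in 
universe_wsgi.ini are roughly the following; the paths below are only 
placeholders, not our real ones:)

# universe_wsgi.ini, [app:main] section (placeholder paths)
library_import_dir = /path/to/admin_import_area        # admin-side directory for importing files into libraries
user_library_import_dir = /path/to/user_import_areas   # per-user import area, one subdirectory per user email
allow_library_path_paste = True                        # lets admins link files by filesystem path instead of copying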


Thanks!


=
Mehmet Belgin, Ph.D. (mehmet.bel...@oit.gatech.edu 
)
Scientific Computing Consultant | OIT - Academic and Research 
Technologies

Georgia Institute of Technology
258 Fourth Street, Rich Building, Room 326
Atlanta, GA  30332-0700
Office: (404) 385-0665




On Jun 7, 2012, at 2:04 PM, Langhorst, Brad wrote:


Mehmet:

It's not important how the files get there, they could be moved via 
ftp, scp, cp, smb - whatever.
Galaxy will use that directory to import from no matter how the 
files arrive.


I found that confusing at first too.

Brad
On Jun 7, 2012, at 1:55 PM, Mehmet Belgin wrote:


Hi Everyone,

I am helping a research group to use Galaxy on our clusters. 
Unfortunately I have no previous experience with Galaxy, but 
learning along the way. We are almost there, but cannot figure out 
one particular issue. This is about configuration of Galaxy, so I 
thought developers list is a better place to submit than the user list.


The galaxy web interface allows for either copy/paste of text, or a 
URL. Unfortunately we cannot setup a FTP server as instructed due 
to restrictions on the cluster. The files we are trying to upload 
are large; around 2GB in size. It does not make sense to upload 
these files to a remote location (which we can provide an URL for) 
and download them back, since the data and galaxy are on the same 
system. However, I could not find a way to open these files locally.


I did some reading, and hoped that "library_import_dir" in 
"universe_wsgi.ini" would do the trick, but it didn't. Therefore, I 
will really appreciate any suggestions.


Thanks a lot in advance!

-Mehmet





___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

http://lists.bx.psu.edu/


--
Brad Langhorst
langho...@neb.com 
978-380-7564







___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

   http://lists.bx.psu.edu/



--
Alban Lermine
Unité 900 : Inserm - Mines ParisTech - Institut Curie
« Bioinformatics and Computational Systems Biology of Cancer »
11-13 rue Pierre et Marie Curie (1er étage) - 75005 Paris - France
Tel : +33 (0) 1 56 24 69 84


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

   http://lists.bx.psu.edu/


--
Bio-informatics consultant
GGF (http://dna.uga.edu) and QBCG (http://qbcg.uga.edu)
706-542-6092 (8-12 Tuesday, Thursday and Friday)
706-542-6877 (8-12 Monday, Wednesday and Friday)
706-583-0442 (12-5 All week)



___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] input and output extensions

2012-06-08 Thread Peter Cock
On Fri, Jun 8, 2012 at 9:15 AM, Nicholas Robinson
 wrote:
> Hi,
>
> 1. INPUT
> I am trying to make a simple tool that sends a command to run crimap without
> a wrapper ie.
> .
> 
> crimap $param_file  $option  >$output1
> 
> ..
>
> However, crimap expects a .par file as input (ie. $param_file should be a
> .par file). Because of this I get the error:
> ERROR: Parameter file
> '/home/galaxy/galaxy-dist/database/job_working_directory/001/1434/tmpe5hErw'
> does not have '.par' suffix
>
> ...
>
> Is there a simple way around this (using more lines in the command)?

You could try something like this (untested):


<command>
ln -s $param_file $param_file.par;
crimap $param_file  $option  >$output1
</command>

The idea here is to tell Galaxy to issue two commands, the first a shell
call to create a symlink with the desired extension pointing at the input
filename from Galaxy. i.e. If $param_file was /path/to/file/dataset_123.dat
you'd make a symlink /path/to/file/dataset_123.dat.par - but check if you
need to remove the symlink afterwards.

Anything more than that and I would use a wrapper script in whatever
you are most comfortable with (e.g. shell, Perl, or Python), e.g.

<command>
my_script $param_file $option $output1
</command>

where the my_script file is marked executable with a hashbang
(or set the interpreter in the command tag), and the script could
be just:

#!/bin/bash
ln -s $param_file $param_file.par
crimap $param_file  $option > $output1
rm $param_file.par
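
Or, untested, you can skip the hashbang and name the interpreter in the
command tag instead (my_script is the same placeholder name as above):

<command interpreter="bash">
my_script $param_file $option $output1
</command>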

Peter

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] Composite output with self-declarated datatype

2012-06-08 Thread Marine Rohmer
Hi everyone,

I'm trying to add a tool which generates two files, which I will call ".xxx" (a 
text file) and ".yyy" (a binary file). Both files are needed to use the 
result of my tool with another tool I've added.
So I wanted to create a composite datatype, which I will call ".composite", 
whose components are ".xxx" and ".yyy".

I've declared the datatypes ".xxx", ".yyy" and ".composite" in the 
datatypes_conf.xml file, and written the required Python files. Now ".xxx", 
".yyy" and ".composite" appear in Get Data's "file format" list.


These are my files :

In datatypes_conf.xml:

    [the <datatype .../> entries for "xxx", "yyy" and "composite" were stripped in the archived message]

xxx.py (summarized) :

import logging
from metadata import MetadataElement
from data import Text

log = logging.getLogger(__name__)

class xxx(Text):  
    file_ext = "xxx"

    def __init__( self, **kwd ):
        Text.__init__( self, **kwd )
    


yyy.py (summarized) :  

import logging
from metadata import MetadataElement
from data import Text

log = logging.getLogger(__name__)

# yyy is a binary file; I don't know what to put instead of "Text".
# "Binary" and "Bin" don't work.
class yyy(Text):     
    file_ext = "yyy"

    def __init__( self, **kwd ):
        Text.__init__( self, **kwd )
        


composite.py (summarized) :

import logging
from metadata import MetadataElement
from data import Text

log = logging.getLogger(__name__)

class Composite(Text):
    composite_type = 'auto_primary_file'
    MetadataElement( name="base_name",
                     desc="base name for all transformed versions of this index dataset",
                     default="your_index", readonly=True, set_in_upload=True )
    file_ext = 'composite'

    def __init__( self, **kwd ):
        Text.__init__( self, **kwd )
        self.add_composite_file( '%s.xxx', description="XXX file",
                                 substitute_name_with_metadata='base_name' )
        self.add_composite_file( '%s.yyy', description="YYY file",
                                 substitute_name_with_metadata='base_name', is_binary=True )
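
From what I can see, the built-in datatypes that use composite_type = 
'auto_primary_file' (the Rgenetics classes, for example) also define a 
generate_primary_file() method, which I have not written yet. A rough, 
untested sketch of what I think it should look like:

    def generate_primary_file( self, dataset=None ):
        # untested sketch: build the primary (HTML) dataset that lists the
        # composite files, as the built-in auto_primary_file datatypes do
        rval = [ '<html><head><title>Composite dataset</title></head><body><ul>' ]
        for composite_name in self.get_composite_files( dataset=dataset ).keys():
            rval.append( '<li><a href="%s">%s</a></li>' % ( composite_name, composite_name ) )
        rval.append( '</ul></body></html>' )
        return '\n'.join( rval )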



After having read the Composite Datatypes page in the wiki, my myTool.xml looks like:


<command>
    path/to/crac-index-wrapper.sh ${os.path.join( $output_name_yyy.extra_files_path, '%s.yyy' )} ${os.path.join( $output_name_xxx.extra_files_path, '%s.xxx' )} $input_file
</command>
[the surrounding <tool>, <inputs> and <outputs> markup was stripped in the archived message]



I have two main problems:

When I upload an xxx file via "Get Data", there's no problem. However, when I 
upload a yyy file (the binary one), the history item stays blue forever 
("uploading dataset"), even for a small file.


The second problem is that I want my tool to generate only the .composite file 
in the history, and not each of .xxx and .yyy. But when I run my tool I still 
have two outputs displayed in the history: one for xxx and one for yyy. 
Furthermore, neither of them works, and I get the following message:

path/to/myTool-wrapper.sh: 6: path/to/myTool-wrapper.sh.sh: cannot create 
/home/myName/work/galaxy-dist/database/files/000/dataset_302_files/%s.yyy.xxx: 
Directory nonexistent
path/to/myTool-wrapper.sh: 6: path/to/myTool-wrapper.sh: cannot create 
/home/myName/work/galaxy-dist/database/files/000/dataset_302_files/%s.yyy.yyy: 
Directory nonexistent
path/to/myTool-wrapper.sh: 11: path/to/myTool-wrapper.sh: Syntax error: 
redirection unexpected


So I've checked manually in /home/myName/work/galaxy-dist/database/files/000/ 
and there's only "dataset_302.dat", an empty file.
(And what's more, I don't understand why the message contains "%s.yyy.xxx" and 
"%s.yyy.yyy" instead of "%s.yyy" and "%s.xxx" ...)


Then I looked at the example in rgenetics.xml and tried to change the command 
line and the output:
 

<command>
    path/to/myTool-wrapper.sh '$output_name.extra_files_path/$output_name.metadata.base_name' $input_file
</command>
[the surrounding <tool>, <inputs> and <outputs> markup was stripped in the archived message]

This gave me :

Traceback (most recent call last):
  File "/home/myName/work/galaxy-dist/lib/galaxy/jobs/runners/local.py", line 
59, in run_job
    job_wrapper.prepare()
  File "/home/myName/work/galaxy-dist/lib/galaxy/jobs/__init__.py", line 429, 
in prepare
    self.command_line = self.tool.build_command_line( param_dict )
  File "/home/myName/work/galaxy-dist/lib/galaxy/tools/__init__.py", line 1971, 
in build_command_line
    command_line = fill_template( self.command, context=param_dict )
  File "/home/myName/work/galaxy-dist/lib/galaxy/util/template.py", line 9, in 
fill_template
    return str( Template( source=template_text, searchList=[context] ) )
  File 
"/home/myName/work/galaxy-dist/eggs/Cheetah-2.2.2-py2.7-linux-x86_64-ucs4.egg/Cheetah/Template.py",
 line 1004, in __str__
    return getattr(self, mainMethName)()
  File "cheetah_DynamicallyCompiledCheetahTemplate_1339157051_58_87978.py", 
line 83, in respond
NotFound: cannot find 'extra_files_path' while searching for 
'output_name.extra_files_path'


So now I don't know which way to follow: the first one, inspired by the 
example in the wiki, or the second one, inspired by rgenetics.xml. And 
what's wrong with it...
I will really appreciate any help.

[galaxy-dev] input and output extensions

2012-06-08 Thread Nicholas Robinson
Hi,

1. INPUT 
I am trying to make a simple tool that sends a command to run crimap 
without a wrapper ie.
.
<command>
crimap $param_file  $option  >$output1
</command>
..

However, crimap expects a .par file as input (ie. $param_file should be a 
.par file). Because of this I get the error:
ERROR: Parameter file 
'/home/galaxy/galaxy-dist/database/job_working_directory/001/1434/tmpe5hErw' 
does not have '.par' suffix

Is there a simple way around this (using more lines in the command)? 
If I need to create a wrapper script, does somebody have an example that 
could help me to understand how it should be created? 
I am not familiar with python or perl and have written my other tools with 
R scripts embedded in the xml file ie.
r_wrapper.sh $script_file


etc etc



2. OUTPUT 
This is a similar issue.
This time I am trying to run another program called "crigen" without a 
wrapper as follows:

 
<command>
crigen -g $myData_gen -o $crimap_number -size $num_individ -gen $num_gen  >$output1
</command>
[the <inputs> and <outputs> sections were stripped in the archived message]
In this case you need to use a number to identify the output in crigen. 
Running the tool is successful, but I get an empty output.

Appreciate any wrapper examples that deal with this sort of requirement 
too.
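
The sort of thing I imagine (only a guess, since I don't actually know which 
file name crigen writes for a given number, so "gen${CRIMAP_NUMBER}.out" below 
is just a placeholder) is a small script like:

#!/bin/bash
# guessed wrapper: run crigen, then move whatever file it writes for this
# crimap number onto the path Galaxy expects as the output dataset
GEN_FILE="$1"; CRIMAP_NUMBER="$2"; NUM_INDIVID="$3"; NUM_GEN="$4"; OUTPUT="$5"
crigen -g "$GEN_FILE" -o "$CRIMAP_NUMBER" -size "$NUM_INDIVID" -gen "$NUM_GEN"
# placeholder file name; adjust to whatever crigen really produces
mv "gen${CRIMAP_NUMBER}.out" "$OUTPUT"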

Thanks and cheers, Nick
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Tool Shed Workflow

2012-06-08 Thread Greg Von Kuster
Hi John,

On Jun 7, 2012, at 11:55 PM, John Chilton wrote:

> I have read through the documentation a couple times, but I still have
> a few questions about the recent tool shed enhancements.
> 
> At MSI we have a testing environment and a production environment and
> I want to make sure the tool versions and configurations don't get out
> of sync, I would also like to test everything in our testing
> environment before it reaches production.
> 
> Is there a recommended way to accomplish this rather than just
> manually repeating the same set of UI interactions twice?
> 
> Can I just import tools through the testing UI and run the
> ./scripts/migrate_tools/ scripts on our testing repository and
> then move the resulting migrated_tools_conf.xml and
> integrated_tool_panel.xml files into production? I have follow up
> questions, but I will wait for a response on this point.

Tools that used to be in the Galaxy distribution but have been moved to the 
main Galaxy tool shed are handled as follows: when you start up your Galaxy 
server, you are presented with the option of running the migration script that 
automatically installs the tools that were migrated in that specific Galaxy 
distribution release.  If you choose to install the tools, they are installed 
only in that specific Galaxy instance.  Installation produces mercurial 
repositories that include the tools on disk in your Galaxy server environment.  
Several other things are produced as well, including database records for the 
installation.  Since each Galaxy instance consists of its own separate set of 
components, this installation process must be done for each instance.  The 
installation is fully automatic, requiring little interaction on the part of 
the Galaxy admin, and doesn't require much time, so performing the process for 
each Galaxy instance should not be too intensive.  Also, the only tools 
installed into each Galaxy instance's tool panel are those that were originally 
defined in its tool panel configuration file (tool_conf.xml), so a Galaxy 
instance that has different tools defined will not be altered by the migration 
process.
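
For example, when a pulled distribution includes migrated tools, the startup 
message points you at a numbered script under ./scripts/migrate_tools/, and the 
same invocation is simply repeated on each of your instances (0002 below is 
only an example; use the number named in your own startup message):

cd /path/to/galaxy-dist
sh ./scripts/migrate_tools/0002_tools.sh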


> 
> Also as you are removing tools from Galaxy and placing them into our
> tool shed, what is the recommended course of actions for deployers
> that have made local minor tweaks to those tool configs and scripts
> and adapt them to our local environments? Along the same lines, what
> is the recommended course of action if we need to make minor tweaks to
> tools pulled into through the UI to adapt them to our institution.


In both cases you should upload your proprietary tools to either a local Galaxy 
tool shed that you administer, or the main Galaxy tool shed if you want.  You 
can choose to not execute any of the tool migration scripts, so the Galaxy 
tools that were migrated from the distribution will not be installed into your 
Galaxy environment.  You can use the Galaxy admin UI to install your 
proprietary versions of the migrated tools from the tool shed in which you 
chose to upload and store them.  New versions of the tools can be uploaded to 
respective tool shed repositories over time.


> 
> Thanks for your time,
> -John
> 
> 
> John Chilton
> Senior Software Developer
> University of Minnesota Supercomputing Institute
> Office: 612-625-0917
> Cell: 612-226-9223
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] How to upload local files in Galaxy

2012-06-08 Thread Alban Lermine
Hi,

There is also another solution if you don't want to let users create
libraries.
We have implemented this solution in our local production server here at
Institut Curie.
We added a tool called "local upload file" that takes as input parameters
the name of the dataset, the type of file and the path to the file you
want to upload.
At execution time, the bash script behind it removes the output dataset
created by Galaxy and replaces it with a symbolic link to the file (so you
don't duplicate files).
Just to warn you, there could be a security failure with this tool,
because if it is executed locally, it runs as the Galaxy application
user, which can potentially have more rights on files than the current user.
To work around this, we execute the tool on our local cluster
(with pbs/torque) as the current user (so if the current user tries to
upload a file that they do not own, the tool will not be able to create
the symbolic link and the user will receive an error in the Galaxy
interface).
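
As a rough sketch of the idea only (simplified, not the exact script we run):

#!/bin/bash
# sketch: replace the dataset file Galaxy created with a symbolic link to the
# user-supplied source path, so the data is not duplicated on disk
# usage: local_upload.sh /absolute/path/to/source_file /path/to/galaxy/dataset_NNN.dat
src="$1"
out="$2"
rm -f "$out"          # remove the placeholder output dataset written by Galaxy
ln -s "$src" "$out"   # link the original file in its place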

Tell me if you're interested in such a tool, and I will send you the
associated xml file and bash script.

Bests,

Alban



On 07/06/2012 20:11, Mehmet Belgin wrote:
> Brad, 
>
> Thank you for your fast reply! Looks like "library_import_dir" is for
> admins and there is another library option for users. I will try with
> that one and see if the files appear in the GUI.
>
> Thanks!
>
>
> =
> Mehmet Belgin, Ph.D. (mehmet.bel...@oit.gatech.edu
> )
> Scientific Computing Consultant | OIT - Academic and Research Technologies
> Georgia Institute of Technology
> 258 Fourth Street, Rich Building, Room 326 
> Atlanta, GA  30332-0700
> Office: (404) 385-0665
>
>
>
>
> On Jun 7, 2012, at 2:04 PM, Langhorst, Brad wrote:
>
>> Mehmet:
>>
>> It's not important how the files get there, they could be moved via
>> ftp, scp, cp, smb - whatever.
>> Galaxy will use that directory to import from no matter how the files
>> arrive.
>>
>> I found that confusing at first too.
>>
>> Brad
>> On Jun 7, 2012, at 1:55 PM, Mehmet Belgin wrote:
>>
>>> Hi Everyone,
>>>
>>> I am helping a research group to use Galaxy on our clusters.
>>> Unfortunately I have no previous experience with Galaxy, but
>>> learning along the way. We are almost there, but cannot figure out
>>> one particular issue. This is about configuration of Galaxy, so I
>>> thought developers list is a better place to submit than the user list.
>>>
>>> The galaxy web interface allows for either copy/paste of text, or a
>>> URL. Unfortunately we cannot setup a FTP server as instructed due to
>>> restrictions on the cluster. The files we are trying to upload are
>>> large; around 2GB in size. It does not make sense to upload these
>>> files to a remote location (which we can provide an URL for) and
>>> download them back, since the data and galaxy are on the same
>>> system. However, I could not find a way to open these files locally. 
>>>
>>> I did some reading, and hoped that "library_import_dir" in
>>> "universe_wsgi.ini" would do the trick, but it didn't. Therefore, I
>>> will really appreciate any suggestions.
>>>
>>> Thanks a lot in advance!
>>>
>>> -Mehmet
>>>
>>>
>>>
>>>
>>>
>>> ___
>>> Please keep all replies on the list by using "reply all"
>>> in your mail client.  To manage your subscriptions to this
>>> and other Galaxy lists, please use the interface at:
>>>
>>>  http://lists.bx.psu.edu/
>>
>> --
>> Brad Langhorst
>> langho...@neb.com 
>> 978-380-7564
>>
>>
>>
>>
>
>
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
>
>   http://lists.bx.psu.edu/


-- 
Alban Lermine 
Unité 900 : Inserm - Mines ParisTech - Institut Curie
« Bioinformatics and Computational Systems Biology of Cancer »
11-13 rue Pierre et Marie Curie (1er étage) - 75005 Paris - France
Tel : +33 (0) 1 56 24 69 84

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] PBS error 15023: Bad user - no password entry

2012-06-08 Thread Marc Bras
Hi,


When I try to use pbs job runners, I get this message in the red box:
Unable to run this job due to a cluster error, please retry it later

And I've got this message in my nohup:
galaxy.jobs.runners.pbs DEBUG 2012-06-08 10:12:12,030 (453) pbs_submit failed, 
PBS error 15023: Bad user - no password entry

I use a galaxy user to launch my Galaxy server.
I have already done all the steps described in the "Unified Method", such as this command:

sudo usermod -s /bin/bash galaxy

Please, I need help!

Thanks a lot,



Marc BRAS
Fondation Imagine
156, rue Vaugirard
75015 PARIS





___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Task Manager: This Galaxy instance is not the job manager.

2012-06-08 Thread Rémy Dernat
Hi,

Same issue here with multiple servers. When I connect to the job manager
instance, I get "This Galaxy instance is not the job manager. If using
multiple servers, please directly access the job manager instance to manage
jobs."
On the web server instance:

Server Error
URL: http://162.38.181.30:8000/galaxy/admin/jobs
Module paste.exceptions.errormiddleware:143 in __call__
>>  app_iter = self.application(environ, start_response)
Module paste.debug.prints:98 in __call__
>>  environ, self.app)
Module paste.wsgilib:539 in intercept_output
>>  app_iter = application(environ, replacement_start_response)
Module paste.recursive:80 in __call__
>>  return self.application(environ, start_response)
Module paste.httpexceptions:632 in __call__
>>  return self.application(environ, start_response)
Module galaxy.web.framework.base:160 in __call__
>>  body = method( trans, **kwargs )
Module galaxy.web.framework:184 in decorator
>>  return func( self, trans, *args, **kwargs )
Module galaxy.web.base.controller:2428 in jobs
>>  job_lock = trans.app.job_manager.job_queue.job_lock )
AttributeError: 'NoopQueue' object has no attribute 'job_lock'
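
For reference, our universe_wsgi.ini follows the scaling wiki page, roughly 
like this (abridged; the server names and ports are just examples, not our 
real values):

[server:web0]
use = egg:Paste#http
port = 8080

[server:manager]
use = egg:Paste#http
port = 8079

[server:handler0]
use = egg:Paste#http
port = 8090

[app:main]
job_manager = manager
job_handlers = handler0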

Regards,
Rem

2012/6/7 Edward Kirton 

> yes, i've had the same error ever since the last galaxy-dist release.  i
> previously had multiple servers and switched to the one manager, two
> handlers.  rewrite rules didn't need to be changed.
>
>
> On Thu, May 24, 2012 at 8:14 AM, Sarah Diehl wrote:
>
>> **
>> Hi all,
>>
>> I have a similar, maybe related problem. I'm running a configuration as
>> described at
>> http://wiki.g2.bx.psu.edu/Admin/Config/Performance/Web%20Application%20Scaling.
>> I have three webservers, one manager and two handlers. Everything is behind
>> an Apache and the rewrite rules are set accordingly.
>>
>> When I try to access "Manage Jobs", I also get the error "This Galaxy
>> instance is not the job manager. If using multiple servers, please directly
>> access the job manager instance to manage jobs.". I have set the rewrite
>> rule for admin/jobs to point to the manager server. When I access the
>> manager directly from localhost I get the same error, while all other
>> servers (web and handler) throw a server error:
>>
>> 127.0.0.1 - - [24/May/2012:15:37:50 +0200] "GET /admin/jobs HTTP/1.1" 500
>> - "-" "Mozilla/5.0 (X11; Linux x86_64; rv:10.0.4) Gecko/20120424
>> Firefox/10.0.4"
>> Error - : 'NoopQueue' object has no
>> attribute 'job_lock'
>> URL: http://localhost:8080/admin/jobs
>> File
>> '/galaxy/galaxy_server/eggs/Paste-1.6-py2.7.egg/paste/exceptions/errormiddleware.py',
>> line 143 in __call__
>>   app_iter = self.application(environ, start_response)
>> File '/galaxy/galaxy_server/eggs/Paste-1.6-py2.7.egg/paste/recursive.py',
>> line 80 in __call__
>>   return self.application(environ, start_response)
>> File
>> '/galaxy/galaxy_server/eggs/Paste-1.6-py2.7.egg/paste/httpexceptions.py',
>> line 632 in __call__
>>   return self.application(environ, start_response)
>> File '/galaxy/galaxy_server/lib/galaxy/web/fr