Re: [galaxy-dev] [galaxy-iuc] writing datatypes

2014-07-22 Thread Greg Von Kuster
Before we go too much further down this path with datatypes, I'm wondering if 
some of us should put together a spec of some kind that allows us all to agree 
on the direction.  For example, I'm wondering if datatypes should be versioned 
and have a name-spaced identifier much like the Tool Shed's guid identifier for 
tools.  I haven't thought too much about whether this would pose backward 
compatibility issues or not.  Discussion is welcome on this.

Greg Von Kuster


On Jul 22, 2014, at 7:19 PM, Greg Von Kuster  wrote:

> Hi Björn,
> 
> 
> On Jul 22, 2014, at 6:01 PM, Björn Grüning  wrote:
> 
>> Hi Greg,
>> 
>> thanks for the clarification. Please see my comments below.
>> 
>>> On Jul 20, 2014, at 3:22 PM, Peter Cock  wrote:
>>> 
 On Sun, Jul 20, 2014 at 6:23 PM, Björn Grüning  wrote:
> Hi,
> 
> single datatype definitions only work if you haven’t defined any converters.
> Let's assume I have a datatype X and want to ship an X -> Y converter (a Y -> X
> converter is also possible); we will end up with a dependency loop, won't we? The X
> repository will depend on the Y repository, but Y depends on X, because
> we want to include a Y -> X converter.
> 
> Any idea how to solve that?
>>> 
>>> I don't see a problem here, so I'm hoping I'm correctly understanding the 
>>> issue.
>>> 
>>> If we have:
>>> 
>>> repo_x contains the single datatype X
>>> repo_y contains the single datatype Y
>>> repo_x_to_y_converter contains a tool that converts datatype X to datatype 
>>> Y (this repository also defines 2 dependency relationships, one to repo_x 
>>> and another to repo_y)
>>> repo_y_to_x_converter contains a tool that converts datatype Y to datatype 
>>> X (this repository also defines 2 dependency relationships, one to repo_x 
>>> and another to repo_y)
>>> 
>>> Now if we want to install both the repo_x_to_y_converter and the 
>>> repo_y_to_x_converter automatically whenever either one is installed, we 
>>> have 2 options:
>>> 
>>> 1) define a 3rd dependency relationship for repo_x_to_y_converter to 
>>> depend on repo_y_to_x_converter and, similarly, a 3rd dependency 
>>> relationship for repo_y_to_x_converter on repo_x_to_y_converter.  This 
>>> does indeed
>>> create a circular repository dependency relationship, but the Tool Shed 
>>> installation process will handle it correctly, installing all 4 
>>> repositories with proper dependency relationships created between them
>> 
>> Does that mean, circular dependencies will be no problem at all?
> 
> 
> Yes, the Tool Shed handles circular dependency definitions of any variety, so 
> circular dependency definitions pose no problem.
> 
> 
>> Do you consider including the converters in the datatypes as 
>> best practice? (These converters are implicit Galaxy converters.)
>> I would have only two repositories with circular dependencies.
> 
> 
> Yes, however, there are some current limitations in the framework detailed on 
> this Trello card:
> https://trello.com/c/Ho3ra4b9/206-add-support-for-datatype-converters-and-display-applications
> 
> Tag sets like the following that are defined in a datatypes_conf.xml file 
> contained in a repository should be correctly loaded into the in-memory 
> datatypes registry when the repository is installed into Galaxy.  However, it 
> has been quite a while since I've worked in this area, so let me know if you 
> encounter any issues.  The current best practice is probably that the 
> converters themselves would each individually be in separate repositories 
> (just like all Galaxy tools), but this can certainly be discussed if 
> appropriate.  Community thoughts are welcome here!
> 
> [datatype tag set stripped by the archive rendering; only the attributes 
> mimetype="application/octet-stream" display_in_upload="true" survive]
> 
>> 
>>> 2) Instead of creating a circular dependency relationship between 
>>> repo_x_to_y_converter and repo_y_to_x_converter, create an additional 
>>> suite_definition_x_y repository (of type "repository_suite_definition") that 
>>> defines relationships to repo_x_to_y_converter and repo_y_to_x_converter, 
>>> ultimately installing all 4 repositories, but without defining any circular 
>>> dependency relationships.
>> 
>> repo_x_to_y_converter and repo_y_to_x_converter would have dependencies on 
>> datatype X and Y, so I do not see the need for a suite_definition ... or it 
>> is some collection like the emboss_datatypes …
> 
> I agree.
> 
> 
>> 
>> My scenario is more that the converters are not tools, they are implicit 
>> converters and should _not_ be displayed in the tool panel.
>> As far as I know they need to be defined inside the datatypes_conf.xml file.
> 
> 
> Yes, they must be defined inside the datatypes_conf.xml file.  However, 
> converters are just special Galaxy Tools (they are "special" in the same way 
> that Data Manager tools are special).  They are loaded into the in-memory 
> Galaxy tools registry, but not displayed in the tool panel.
> 
> 
>> 
>> I think 

Re: [galaxy-dev] [galaxy-iuc] writing datatypes

2014-07-22 Thread Greg Von Kuster
Hi Björn,


On Jul 22, 2014, at 6:01 PM, Björn Grüning  wrote:

> Hi Greg,
> 
> thanks for the clarification. Please see my comments below.
> 
>> On Jul 20, 2014, at 3:22 PM, Peter Cock  wrote:
>> 
>>> On Sun, Jul 20, 2014 at 6:23 PM, Björn Grüning  wrote:
 Hi,
 
 single datatype definitions only work if you haven’t defined any converters.
 Let's assume I have a datatype X and want to ship an X -> Y converter (a Y -> X
 converter is also possible); we will end up with a dependency loop, won't we? The X
 repository will depend on the Y repository, but Y depends on X, because
 we want to include a Y -> X converter.
 
 Any idea how to solve that?
>> 
>> I don't see a problem here, so I'm hoping I'm correctly understanding the 
>> issue.
>> 
>> If we have:
>> 
>> repo_x contains the single datatype X
>> repo_y contains the single datatype Y
>> repo_x_to_y_converter contains a tool that converts datatype X to datatype Y 
>> (this repository also defines 2 dependency relationships, one to repo_x and 
>> another to repo_y)
>> repo_y_to_x_converter contains a tool that converts datatype Y to datatype X 
>> (this repository also defines 2 dependency relationships, one to repo_x and 
>> another to repo_y)
>> 
>> Now if we want to install both the repo_x_to_y_converter and the 
>> repo_y_to_x_converter automatically whenever either one is installed, we 
>> have 2 options:
>> 
>> 1) define a 3rd dependency relationship for repo_x_to_y_converter to depend 
>> on repo_y_to_x_converter and, similarly, a 3rd dependency relationship for 
>> repo_y_to_x_converter on repo_x_to_y_converter.  This does indeed
>> create a circular repository dependency relationship, but the Tool Shed 
>> installation process will handle it correctly, installing all 4 repositories 
>> with proper dependency relationships created between them
> 
> Does that mean, circular dependencies will be no problem at all?


Yes, the Tool Shed handles circular dependency definitions of any variety, so 
circular dependency definitions pose no problem.


> Do you consider including the converters in the datatypes as best practice? 
> (These converters are implicit Galaxy converters.)
> I would have only two repositories with circular dependencies.


Yes, however, there are some current limitations in the framework detailed on 
this Trello card:
https://trello.com/c/Ho3ra4b9/206-add-support-for-datatype-converters-and-display-applications

Tag sets like the following that are defined in a datatypes_conf.xml file 
contained in a repository should be correctly loaded into the in-memory 
datatypes registry when the repository is installed into Galaxy.  However, it 
has been quite a while since I've worked in this area, so let me know if you 
encounter any issues.  The current best practice is probably that the converters 
themselves would each individually be in separate repositories (just like all 
Galaxy tools), but this can certainly be discussed if appropriate.  Community 
thoughts are welcome here!

[datatype tag set stripped by the archive rendering]
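(For reference, a minimal sketch, not Greg's original snippet, of what such a tag 
set in a repository's datatypes_conf.xml could look like; the extension names, 
class path and converter file below are placeholders:)

  <datatypes>
      <registration>
          <!-- hypothetical datatype X, declared with the attributes mentioned above -->
          <datatype extension="x" type="galaxy.datatypes.data:Text"
                    mimetype="application/octet-stream" display_in_upload="true">
              <!-- implicit converter shipped alongside the datatype;
                   target_datatype names the extension of datatype Y -->
              <converter file="x_to_y.xml" target_datatype="y"/>
          </datatype>
      </registration>
  </datatypes>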


> 
>> 2) Instead of creating a circular dependency relationship between 
>> repo_x_to_y_converter and repo_y_to_x_converter, create an additional 
>> suite_definition_x_y repository (of type "repository_suite_definition") that 
>> defines relationships to repo_x_to_y_converter and repo_y_to_x_converter, 
>> ultimately installing all 4 repositories, but without defining any circular 
>> dependency relationships.
> 
> repo_x_to_y_converter and repo_y_to_x_converter would have dependencies on 
> datatype X and Y, so I do not see the need for a suite_definition ... or it 
> is some collection like the emboss_datatypes …

I agree.


> 
> My scenario is more that the converters are not tools, they are implicit 
> converters and should _not_ be displayed in the tool panel.
> As far as I know they need to be defined inside the datatypes_conf.xml file.


Yes, they must be defined inside the datatypes_conf.xml file.  However, 
converters are just special Galaxy Tools (they are "special" in the same way 
that Data Manager tools are special).  They are loaded into the in-memory 
Galaxy tools registry, but not displayed in the tool panel.


> 
> I think if circular dependencies are not a problem I will try to implement a 
> proof of concept. EMBOSS is now split:

Sounds good - circular dependencies should pose no problems.


> 
> https://github.com/bgruening/galaxytools/tree/master/datatypes/emboss_datatypes
> 
> Thanks Greg!
> Bjoern
> 
>> Either of the above 2 scenarios will correctly install the 4 repositories.
>> 
>> Let me know if I'm missing something here.
>> 
>> Thanks!
>> 
>> Greg
>> 
>>> 
>>> Excellent example!
>>> 
 How to handle versions of datatypes? Extra repositories for stockholm 1.0
 and 1.1? If so ... the associated python file (sniffing, splitting ...)
 should also be versioned, right? What happens if I have two stockholm.py files
 in my system?
>>> 
>>> P

Re: [galaxy-dev] [galaxy-iuc] writing datatypes

2014-07-22 Thread Björn Grüning

Hi Greg,

thanks for the clarification. Please see my comments below.


On Jul 20, 2014, at 3:22 PM, Peter Cock  wrote:


On Sun, Jul 20, 2014 at 6:23 PM, Björn Grüning  wrote:

Hi,

single datatype definitions only work if you haven’t defined any converters.
Let's assume I have a datatype X and want to ship an X -> Y converter (a Y -> X
converter is also possible); we will end up with a dependency loop, won't we? The X
repository will depend on the Y repository, but Y depends on X, because
we want to include a Y -> X converter.

Any idea how to solve that?


I don't see a problem here, so I'm hoping I'm correctly understanding the issue.

If we have:

repo_x contains the single datatype X
repo_y contains the single datatype Y
repo_x_to_y_converter contains a tool that converts datatype X to datatype Y 
(this repository also defines 2 dependency relationships, one to repo_x and 
another to repo_y)
repo_y_to_x_converter contains a tool that converts datatype Y to datatype X 
(this repository also defines 2 dependency relationships, one to repo_x and 
another to repo_y)

Now if we want to install both the repo_x_to_y_converter and the 
repo_y_to_x_converter automatically whenever either one is installed, we have 2 
options:

1) define a 3rd dependency relationship for repo_x_to_y_converter to depend on 
repo_y_to_x_converter and, similarly, a 3rd dependency relationship for 
repo_y_to_x_converter on repo_x_to_y_converter.  This does indeed
create a circular repository dependency relationship, but the Tool Shed 
installation process will handle it correctly, installing all 4 repositories 
with proper dependency relationships created between them
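(For concreteness, a minimal sketch, not from the original thread, of the 
repository_dependencies.xml file that a converter repository such as 
repo_x_to_y_converter might ship to declare these relationships; the owner and 
changeset_revision values are placeholders:)

  <?xml version="1.0"?>
  <repository_dependencies description="Datatype repositories required by the X to Y converter">
      <!-- one <repository> tag per dependency relationship described above -->
      <repository toolshed="http://toolshed.g2.bx.psu.edu" name="repo_x" owner="some_owner" changeset_revision="abcdef012345"/>
      <repository toolshed="http://toolshed.g2.bx.psu.edu" name="repo_y" owner="some_owner" changeset_revision="abcdef012345"/>
  </repository_dependencies>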


Does that mean circular dependencies will be no problem at all?
Do you consider including the converters in the datatypes as 
best practice? (These converters are implicit Galaxy converters.)

I would have only two repositories with circular dependencies.


2) Instead of creating a circular dependency relationship between repo_x_to_y_converter 
and repo_y_to_x_converter, create an additional suite_definition_x_y repository (of type 
"repository_suite_definition") that defines relationships to 
repo_x_to_y_converter and repo_y_to_x_converter, ultimately installing all 4 
repositories, but without defining any circular dependency relationships.


repo_x_to_y_converter and repo_y_to_x_converter would have dependencies 
on datatype X and Y, so I do not see the need for a suite_definition ... 
or it is some collection like the emboss_datatypes ...


My scenario is more that the converters are not tools, they are implicit 
converters and should _not_ be displayed in the tool panel.
As far as I know they need to be defined inside the datatypes_conf.xml 
file.


I think if circular dependencies are not a problem I will try to 
implement a proof of concept. EMBOSS is now split:


https://github.com/bgruening/galaxytools/tree/master/datatypes/emboss_datatypes

Thanks Greg!
Bjoern


Either of the above 2 scenarios will correctly install the 4 repositories.

Let me know if I'm missing something here.

Thanks!

Greg



Excellent example!


How to handle versions of datatypes? Extra repositories for stockholm 1.0
and 1.1? If so ... the associated python file (sniffing, splitting ...)
should also be versioned, right? What happens if I have two stockholm.py files
in my system?


Potentially you might need/want to define those as two different
Galaxy datatypes?


@Peter, can we create a stripped-down, Python-only Biopython egg? All parsers
should be included; Bio.SeqIO should be sufficient, I think.


Right now, yes in principle (and this is fine from the licence point of view),
but in practice this is a fair chunk of work. However, we are looking at
this - see https://github.com/biopython/biopython/issues/349

Peter

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
http://galaxyproject.org/search/mailinglists/




___
galaxy-iuc mailing list
galaxy-...@lists.bx.psu.edu
http://lists.bx.psu.edu/listinfo/galaxy-iuc




[galaxy-dev] Video Datatypes

2014-07-22 Thread Eric Rasche
Howdy all,

This is probably only of interest to a tiny subset of you who work with 
microscopy and microscope video, but I've added video datatypes and an 
associated viz plugin for viewing .mp4 files.

Please feel free to submit bugs/feature requests/PRs on these

https://github.com/erasche/galaxy-video-viz-plugin
https://github.com/erasche/galaxy-video-datatypes

I plan to add more video datatypes as time permits, as well as tools for 
working with those datasets (e.g., ffmpeg wrapper).

Cheers,
Eric

-- 
Eric Rasche
Programmer II
Center for Phage Technology
Texas A&M University
College Station, TX 77843
Ph: 4046922048



Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread Peter Cock
On Tue, Jul 22, 2014 at 7:41 PM, Eric Rasche  wrote:
> John,
>
> How are those generated? Would you be amenable to scripting that
> portion and running it once a month? (...say in a cron job, with a
> passwordless ssh key so you never have to touch it again)
>
> Cheers,
> Eric

How to generate it was going to be my next question too ;)

I'm impressed with Eric's zeal to automate things. Having a script
for making the SQLite template would be good - under git in the
same repository?

Peter

P.S. The schema version 120 template works great, thanks!:

https://travis-ci.org/peterjc/pico_galaxy/builds/30592828
https://travis-ci.org/peterjc/galaxy_blast/builds/30592097


Re: [galaxy-dev] Galaxy error

2014-07-22 Thread John Chilton
Hello,

Thanks for the bug report. Is this on the main public server (usegalaxy.org)
or a local instance at the University of Iowa?

-John


On Wed, Jul 16, 2014 at 10:39 AM, Beck, Emily A 
wrote:

>
> Hello,
>
> I have repeatedly gotten the following error message when attempting to
> use both the mapping and pileup functions in Galaxy.  I am using files that
> I have previously used successfully with these functions, and cannot fix it.
>
> Thank you.
>
> ~Emily Beck
>
>
> user   username emilybeck  quota_percent 19  total_disk_usage 52086193970
> nice_total_disk_usage 48.5 GB  email emily-b...@uiowa.edu  is_admin false
> tags_usedmodel_class User  id 03f0554e2314759esource
> HDACollection(f313d1d65c4ee57e,451)  xhr   readyState 4  responseText 
> {"err_msg":
> "Uncaught exception in exposed API method:", "err_code": 0}  responseJSON
> err_msg Uncaught exception in exposed API method:  err_code 0status
> 500  statusText Internal Server Error  responseHeaders   Server nginx/1.4.7
>  Date Wed, 16 Jul 2014 15:19:52 GMT  Content-Type application/json
> Transfer-Encoding chunked  Connection keep-alive  Cache-Control 
> max-age=0,no-cache,no-store
>  options   dataparse true  emulateHTTP false  emulateJSON false
>
>
>
> *Emily Beck *PhD Candidate
> Llopart Lab
> 469 Biology Building
> Iowa City, IA 52242
> Lab: (319)335-3430
>
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
>   http://lists.bx.psu.edu/
>
> To search Galaxy mailing lists use the unified search at:
>   http://galaxyproject.org/search/mailinglists/
>

Re: [galaxy-dev] getting API key from user/pass

2014-07-22 Thread John Chilton
On Tue, Jul 22, 2014 at 3:13 PM, Eric Rasche  wrote:
> Hi John,
>
> Okay, not a problem, no need to push new artifacts on my behalf.
> Jetty (basically all of your dependencies) aren't currently ported to android 
> which puts that particular project on hold for me, for the time being.

Hmm... that is too bad. They should do that - seems like Android would
be a key platform for a web service client library. Oh well.

-John

>
> Thanks for being aware of this issue.
>
> Cheers,
> Eric
>
> 22.07.2014, 21:10, "John Chilton" :
>>  Hey Eric,
>>
>>  This is not possible (easily anyway) in the latest release of blend4j.
>>  It was on the TODO list though so I have added the functionality in
>>  the following commit:
>>
>>  
>> https://github.com/jmchilton/blend4j/commit/f92909fbda3616da09614b65810ebd86ce496b19
>>
>>  So instead of using GalaxyInstanceFactory.get(url, key) to create
>>  'GalaxyInstance' objects, you will need to use
>>  GalaxyInstanceFactory.getFromCredentials(url, email, password).
>>
>>  Mirrors work I had previously done in bioblend
>>  
>> (https://github.com/afgane/bioblend/commit/07a07b99495a867a9d17bc5c1e22a3739da052ba)
>>  based on the API endpoint Martin had mentioned.
>>
>>  Any chance you have a setup allowing you to use this without me
>>  needing to do a new release of blend4j or do you need the artifacts to
>>  be in maven central?
>>
>>  -John
>>
>>  On Mon, Jul 21, 2014 at 9:47 AM, Martin Čech  wrote:
>>>   Done.
>>>
>>>   On Mon, Jul 21, 2014 at 10:30 AM, Eric Rasche  
>>> wrote:
   Martin,

   Ah, good to have the API call needed. Could this possibly be added to the
   wiki, perhaps on Learn/API?

   That particular URL doesn't seem to be in blend4j so this answers that
   question as well, thanks!

   Cheers,
   Eric

   21.07.2014, 15:13, "Martin Čech" :

file  api/authenticate.py

class AuthenticationController( BaseAPIController, CreatesApiKeysMixin 
 ):

@web.expose_api_anonymous
def get_api_key( self, trans, **kwd ):

"""
def get_api_key( self, trans, **kwd )
* GET /api/authenticate/baseauth
  returns an API key for authenticated user based on BaseAuth
   headers

:returns: api_key in json format
:rtype:   dict

:raises: ObjectNotFound, HTTPBadRequest
"""

On Mon, Jul 21, 2014 at 10:06 AM, Eric Rasche 
   wrote:

John,

I'm looking to use blend4j, and I was wondering if there was a way to
   obtain the user's API key from a username/password pair?

Cheers,
Eric

--
Eric Rasche
Programmer II
Center for Phage Technology
Texas A&M University

College Station, TX 77843
Ph: 4046922048

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

   --
   Eric Rasche
   Programmer II
   Center for Phage Technology
   Texas A&M University
   College Station, TX 77843
   Ph: 4046922048
>
> --
> Eric Rasche
> Programmer II
> Center for Phage Technology
> Texas A&M University
> College Station, TX 77843
> Ph: 4046922048


Re: [galaxy-dev] getting API key from user/pass

2014-07-22 Thread Eric Rasche
Hi John,

Okay, not a problem, no need to push new artifacts on my behalf. 
Jetty (basically all of your dependencies) aren't currently ported to android 
which puts that particular project on hold for me, for the time being.

Thanks for being aware of this issue.

Cheers,
Eric

22.07.2014, 21:10, "John Chilton" :
>  Hey Eric,
>
>  This is not possible (easily anyway) in the latest release of blend4j.
>  It was on the TODO list though so I have added the functionality in
>  the following commit:
>
>  
> https://github.com/jmchilton/blend4j/commit/f92909fbda3616da09614b65810ebd86ce496b19
>
>  So instead of using GalaxyInstanceFactory.get(url, key) to create
>  'GalaxyInstance' objects, you will need to use
>  GalaxyInstanceFactory.getFromCredentials(url, email, password).
>
>  Mirrors work I had previously done in bioblend
>  
> (https://github.com/afgane/bioblend/commit/07a07b99495a867a9d17bc5c1e22a3739da052ba)
>  based on the API endpoint Martin had mentioned.
>
>  Any chance you have a setup allowing you to use this without me
>  needing to do a new release of blend4j or do you need the artifacts to
>  be in maven central?
>
>  -John
>
>  On Mon, Jul 21, 2014 at 9:47 AM, Martin Čech  wrote:
>>   Done.
>>
>>   On Mon, Jul 21, 2014 at 10:30 AM, Eric Rasche  
>> wrote:
>>>   Martin,
>>>
>>>   Ah, good to have the API call needed. Could this possibly be added to the
>>>   wiki, perhaps on Learn/API?
>>>
>>>   That particular URL doesn't seem to be in blend4j so this answers that
>>>   question as well, thanks!
>>>
>>>   Cheers,
>>>   Eric
>>>
>>>   21.07.2014, 15:13, "Martin Čech" :
>>>
>>>    file  api/authenticate.py
>>>
>>>    class AuthenticationController( BaseAPIController, CreatesApiKeysMixin ):
>>>
>>>    @web.expose_api_anonymous
>>>    def get_api_key( self, trans, **kwd ):
>>>
>>>    """
>>>    def get_api_key( self, trans, **kwd )
>>>    * GET /api/authenticate/baseauth
>>>  returns an API key for authenticated user based on BaseAuth
>>>   headers
>>>
>>>    :returns: api_key in json format
>>>    :rtype:   dict
>>>
>>>    :raises: ObjectNotFound, HTTPBadRequest
>>>    """
>>>
>>>    On Mon, Jul 21, 2014 at 10:06 AM, Eric Rasche 
>>>   wrote:
>>>
>>>    John,
>>>
>>>    I'm looking to use blend4j, and I was wondering if there was a way to
>>>   obtain the user's API key from a username/password pair?
>>>
>>>    Cheers,
>>>    Eric
>>>
>>>    --
>>>    Eric Rasche
>>>    Programmer II
>>>    Center for Phage Technology
>>>    Texas A&M University
>>>
>>>    College Station, TX 77843
>>>    Ph: 4046922048
>>>
>>>    ___
>>>    Please keep all replies on the list by using "reply all"
>>>    in your mail client.  To manage your subscriptions to this
>>>    and other Galaxy lists, please use the interface at:
>>>  http://lists.bx.psu.edu/
>>>
>>>    To search Galaxy mailing lists use the unified search at:
>>>  http://galaxyproject.org/search/mailinglists/
>>>
>>>   --
>>>   Eric Rasche
>>>   Programmer II
>>>   Center for Phage Technology
>>>   Texas A&M University
>>>   College Station, TX 77843
>>>   Ph: 4046922048

-- 
Eric Rasche
Programmer II
Center for Phage Technology
Texas A&M University
College Station, TX 77843
Ph: 4046922048

Re: [galaxy-dev] getting API key from user/pass

2014-07-22 Thread John Chilton
Hey Eric,

This is not possible (easily anyway) in the latest release of blend4j.
It was on the TODO list though so I have added the functionality in
the following commit:

https://github.com/jmchilton/blend4j/commit/f92909fbda3616da09614b65810ebd86ce496b19

So instead of using GalaxyInstanceFactory.get(url, key) to create
'GalaxyInstance' objects, you will need to use
GalaxyInstanceFactory.getFromCredentials(url, email, password).

Mirrors work I had previously done in bioblend
(https://github.com/afgane/bioblend/commit/07a07b99495a867a9d17bc5c1e22a3739da052ba)
based on the API endpoint Martin had mentioned.
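(A minimal sketch, not part of blend4j, of what that endpoint looks like from 
plain Python; the URL, e-mail and password below are placeholders, and the 
assumption is that the JSON response carries the key under "api_key":)

  import requests

  def get_api_key(galaxy_url, email, password):
      # GET /api/authenticate/baseauth returns an API key for the user
      # identified by the HTTP Basic Auth headers (see Martin's snippet below)
      response = requests.get(
          galaxy_url.rstrip("/") + "/api/authenticate/baseauth",
          auth=(email, password),
      )
      response.raise_for_status()
      return response.json()["api_key"]

  print(get_api_key("http://localhost:8080", "user@example.org", "secret"))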

Any chance you have a setup allowing you to use this without me
needing to do a new release of blend4j or do you need the artifacts to
be in maven central?

-John

On Mon, Jul 21, 2014 at 9:47 AM, Martin Čech  wrote:
> Done.
>
>
> On Mon, Jul 21, 2014 at 10:30 AM, Eric Rasche  wrote:
>>
>> Martin,
>>
>> Ah, good to have the API call needed. Could this possibly be added to the
>> wiki, perhaps on Learn/API?
>>
>> That particular URL doesn't seem to be in blend4j so this answers that
>> question as well, thanks!
>>
>> Cheers,
>> Eric
>>
>> 21.07.2014, 15:13, "Martin Čech" :
>>
>>  file  api/authenticate.py
>>
>>  class AuthenticationController( BaseAPIController, CreatesApiKeysMixin ):
>>
>>  @web.expose_api_anonymous
>>  def get_api_key( self, trans, **kwd ):
>>
>>  """
>>  def get_api_key( self, trans, **kwd )
>>  * GET /api/authenticate/baseauth
>>returns an API key for authenticated user based on BaseAuth
>> headers
>>
>>  :returns: api_key in json format
>>  :rtype:   dict
>>
>>  :raises: ObjectNotFound, HTTPBadRequest
>>  """
>>
>>  On Mon, Jul 21, 2014 at 10:06 AM, Eric Rasche 
>> wrote:
>>
>>  John,
>>
>>  I'm looking to use blend4j, and I was wondering if there was a way to
>> obtain the user's API key from a username/password pair?
>>
>>  Cheers,
>>  Eric
>>
>>  --
>>  Eric Rasche
>>  Programmer II
>>  Center for Phage Technology
>>  Texas A&M University
>>
>>  College Station, TX 77843
>>  Ph: 4046922048
>>
>>  ___
>>  Please keep all replies on the list by using "reply all"
>>  in your mail client.  To manage your subscriptions to this
>>  and other Galaxy lists, please use the interface at:
>>http://lists.bx.psu.edu/
>>
>>  To search Galaxy mailing lists use the unified search at:
>>http://galaxyproject.org/search/mailinglists/
>>
>> --
>> Eric Rasche
>> Programmer II
>> Center for Phage Technology
>> Texas A&M University
>> College Station, TX 77843
>> Ph: 4046922048
>>
>


Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread Eric Rasche
John,

How are those generated? Would you be amenable to scripting that portion and 
running it once a month? (...say in a cron job, with a passwordless ssh key so 
you never have to touch it again)

Cheers,
Eric

22.07.2014, 19:08, "John Chilton" :
> On Tue, Jul 22, 2014 at 7:51 AM, Peter Cock  wrote:
>>  On Tue, Jul 22, 2014 at 1:15 PM, Eric Rasche  wrote:
>>>  Hi Peter,
>>>
>>>  On July 22, 2014 3:15:41 AM CDT, Peter Cock  
>>> wrote:
 Given how close you can get now for minimal effort,
 this seems unnecessary.

 http://blastedbio.blogspot.co.uk/2013/09/using-travis-ci-for-testing-galaxy-tools.html

 My TravisCI setup fetches the latest Galaxy as
 a tar ball (from a GitHub mirror as it was faster than
 a git clone which was faster than getting the tar ball
 from BitBucket, which in turn was faster than using
 hg clone),
>>>  Yes, that post was at least part of the thinking behind this.
>>  :)
  .., and a pre-migrated SQLite database
 (using a bit of Galaxy functionality originally with
 $GALAXY_TEST_DB_TEMPLATE added to speed
 up running the functional tests).
>>  Apologies for grammatical error - I pasted in the environment
>>  variable at the wrong point in the sentence.
>>>  I know I've seen that used but was never able to get that
>>>  working in practice (then again I didn't try that hard). If
>>>  that's a working/usable feature, then that is already the
>>>  majority of setup time.
>>  Yes, the creation of the test-database and all the migrations
>>  was an obvious low-hanging fruit when we were looking at
>>  making running the tool functional tests faster - although
>>  originally in the context of running the tests on a local
>>  development Galaxy instance.
>>
>>  As to using this in practice, currently my TravisCI setup has:
>>
>>  export 
>> GALAXY_TEST_DB_TEMPLATE=https://github.com/jmchilton/galaxy-downloads/raw/master/db_gx_rev_0117.sqlite
>>
>>  I also added that line at the start of my local copy of script
>>  run_functional_tests.sh to benefit from this while doing
>>  development. That should be all there is to it (but from
>>  memory, this is only for use with the SQLite backend).
>>
>>  John - could you add a current schema snapshot to
>>  https://github.com/jmchilton/galaxy-downloads/ please?
>
> Hey All,
>
> Love this thread and effort! Keep up the good work - would love to
> replace, say, blend4j's automatic TravisCI testing with one backed by
> dockerized -stable and -central instances.
>
> At any rate, I have uploaded an updated SQLite template:
> https://github.com/jmchilton/galaxy-downloads/raw/master/db_gx_rev_0120.sqlite.
> The old template still exists at the same URL so hopefully this
> doesn't break anything.
>
> -John
 Note this does not cache the eggs and all the other
 side effects of the first run like creating config files,
 so there is room for some speed up.
>>>  Eggs would be nice but not the biggest thing in the world.
>>  Right. I do like your idea of automatically generated
>>  cutting-edge or each stable release Docker images
>>  though (even if I have no personal need for them at
>>  the moment).
>>
>>  Regards,
>>
>>  Peter

-- 
Eric Rasche
Programmer II
Center for Phage Technology
Texas A&M University
College Station, TX 77843
Ph: 4046922048


Re: [galaxy-dev] Run Jobs as Real User - How to Configure it for TORQUE

2014-07-22 Thread John Chilton
Running jobs as the "real" user is not available with the PBS job
runner - one has to use the DRMAA interface to submit jobs as the real
user.

I have created a Trello card to add this functionality:
https://trello.com/c/OddS8bMP

Would be happy to field pull requests to add this - because I doubt
anyone on the core team will be able to get to this anytime soon.
Sorry I don't have better news.

-John

On Sun, Jul 20, 2014 at 11:26 PM, Ping Luo  wrote:
> I have installed pbs_python module on our cluster to interface Galaxy with
> TORQUE. I am able to submit and run jobs on our cluster as the Galaxy user.
> I need to run Galaxy jobs as the real user. The instruction in the user
> guide is for DRMAA interface. How can I configure running jobs as real user
> for TORQUE?
>
> thanks,
>
> Ping
>
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
>   http://lists.bx.psu.edu/
>
> To search Galaxy mailing lists use the unified search at:
>   http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread John Chilton
On Tue, Jul 22, 2014 at 7:51 AM, Peter Cock  wrote:
> On Tue, Jul 22, 2014 at 1:15 PM, Eric Rasche  wrote:
>> Hi Peter,
>>
>> On July 22, 2014 3:15:41 AM CDT, Peter Cock  
>> wrote:
>>>
>>>Given how close you can get now for minimal effort,
>>>this seems unnecessary.
>>>
>>>http://blastedbio.blogspot.co.uk/2013/09/using-travis-ci-for-testing-galaxy-tools.html
>>>
>>>My TravisCI setup fetches the latest Galaxy as
>>>a tar ball (from a GitHub mirror as it was faster than
>>>a git clone which was faster than getting the tar ball
>>>from BitBucket, which in turn was faster than using
>>>hg clone),
>>
>> Yes, that post was at least part of the thinking behind this.
>
> :)
>
>>> .., and a pre-migrated SQLite database
>>>(using a bit of Galaxy functionality originally with
>>>$GALAXY_TEST_DB_TEMPLATE added to speed
>>>up running the functional tests).
>
> Apologies for grammatical error - I pasted in the environment
> variable at the wrong point in the sentence.
>
>> I know I've seen that used but was never able to get that
>> working in practice (then again I didn't try that hard). If
>> that's a working/usable feature, then that is already the
>> majority of setup time.
>
> Yes, the creation of the test-database and all the migrations
> was an obvious low-hanging fruit when we were looking at
> making running the tool functional tests faster - although
> originally in the context of running the tests on a local
> development Galaxy instance.
>
> As to using this in practice, currently my TravisCI setup has:
>
> export 
> GALAXY_TEST_DB_TEMPLATE=https://github.com/jmchilton/galaxy-downloads/raw/master/db_gx_rev_0117.sqlite
>
> I also added that line at the start of my local copy of script
> run_functional_tests.sh to benefit from this while doing
> development. That should be all there is to it (but from
> memory, this is only for use with the SQLite backend).
>
> John - could you add a current schema snapshot to
> https://github.com/jmchilton/galaxy-downloads/ please?
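(A minimal sketch of how the template variable described above can be wired into 
a local test run; this assumes the stock run_functional_tests.sh entry point, the 
SQLite backend, and a placeholder tool id:)

  export GALAXY_TEST_DB_TEMPLATE=https://github.com/jmchilton/galaxy-downloads/raw/master/db_gx_rev_0117.sqlite
  sh run_functional_tests.sh -id my_tool_id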

Hey All,

Love this thread and effort! Keep up the good work - would love to
replace, say, blend4j's automatic TravisCI testing with one backed by
dockerized -stable and -central instances.

At any rate, I have uploaded an updated SQLite template:
https://github.com/jmchilton/galaxy-downloads/raw/master/db_gx_rev_0120.sqlite.
The old template still exists at the same URL so hopefully this
doesn't break anything.

-John

>
>>>Note this does not cache the eggs and all the other
>>>side effects of the first run like creating config files,
>>>so there is room for some speed up.
>>
>> Eggs would be nice but not the biggest thing in the world.
>
> Right. I do like your idea of automatically generated
> cutting-edge or each stable release Docker images
> though (even if I have no personal need for them at
> the moment).
>
> Regards,
>
> Peter


Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread Eric Rasche
For my part:

Script/infra to generate branches:
https://cpt.tamu.edu/gitlab/rasche.eric/docker-branch-generator/tree/master

Generated branches:
https://cpt.tamu.edu/gitlab/rasche.eric/generated-docker-branches/branches/recent

Need to patch up a couple issues, but I'm pretty much done on my end. Whenever 
you're ready, we can move this to github and someone can just run a cron job 
once a month. :)

22.07.2014, 15:35, "Björn Grüning":
> [the rest of the message quotes the earlier "Once-run galaxy archives" 
> exchange between Björn, Eric, Aaron and Peter, which appears in full 
> elsewhere in this digest]

Re: [galaxy-dev] Schema downgraded but galaxy won't launch

2014-07-22 Thread Kandalaft, Iyad
Thank you very much!  I had a feeling about those 119_.pyc and 120_.pyc files, 
but I thought they were recompiled on every launch and would be ignored.  I don’t 
know much Python.

Regards,

Iyad Kandalaft
Microbial Biodiversity Bioinformatics
Agriculture and Agri-Food Canada | Agriculture et Agroalimentaire Canada
960 Carling Ave.| 960 Ave. Carling
Ottawa, ON| Ottawa (ON) K1A 0C6
E-mail Address / Adresse courriel  iyad.kandal...@agr.gc.ca
Telephone | Téléphone 613-759-1228
Facsimile | Télécopieur 613-759-1701
Teletypewriter | Téléimprimeur 613-773-2600
Government of Canada | Gouvernement du Canada



From: Dannon Baker [mailto:dannon.ba...@gmail.com]
Sent: Tuesday, July 22, 2014 11:24 AM
To: Kandalaft, Iyad
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] Schema downgraded but galaxy won't launch

Hi Iyad,

`find  -name '*.pyc' -delete`

Should clean up compiled python files allowing you to run again.

-Dannon

On Tue, Jul 22, 2014 at 11:16 AM, Kandalaft, Iyad 
mailto:iyad.kandal...@agr.gc.ca>> wrote:
Hello everyone

I tried a galaxy update on my development instance of galaxy. It failed due to 
a schema change coupled with a mysql version change.  Don’t worry about why it 
failed – that has been resolved.
I have a backup copy of the database that I restored, which was based on schema 
118.  I issued a “hg update *revision*” to revert back to a code base using the 
118 schema.  I verified that the model folder now only contains up to 
118_py.  When I try to start galaxy, it tells me that the code requires 
schema 120 and that I should upgrade using db_manage.sh.  I tried running 
“db_manage.sh downgrade 118” but since the database is already v118, it just 
exits without any output. What am I doing wrong?

Regards,
Iyad Kandalaft

Bioinformatics Application Developer
Agriculture and Agri-Food Canada | Agriculture et Agroalimentaire Canada
KW Neatby Bldg | éd. KW Neatby
960 Carling Ave| 960, avenue Carling
Ottawa, ON | Ottawa (ON) K1A 0C6
E-mail Address / Adresse courriel: 
iyad.kandal...@agr.gc.ca
Telephone | Téléphone 613- 759-1228
Facsimile | Télécopieur 613-759-1701
Government of Canada | Gouvernement du Canada



Re: [galaxy-dev] handlers

2014-07-22 Thread Kandalaft, Iyad
Your overall thought process seems correct.  I suspect you still have a web 
process for galaxy that is the only process being proxied by apache?
Make sure your universe_wsgi.ini has the option set to manage jobs in the 
database (required for multiple handlers).
I would start with 8 handlers and work my way up (despite the Python GIL 
issue).  I suspect that each handler with 4 threads would easily saturate your 
24-core server.

I believe you need to set the default attribute and the tags attribute (I could 
be mistaken).
Here's what I have and it seems to work as expected (please correct it if it's 
wrong):

[job_conf.xml snippet stripped by the archive rendering; only the native 
specification "-q all.q -S /bin/bash" survives]
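(A minimal reconstruction sketch, not Iyad's actual file, of a job_conf.xml with 
a default handlers tag and a DRMAA destination carrying that native 
specification; plugin ids, handler names and the queue are placeholders:)

  <?xml version="1.0"?>
  <job_conf>
      <plugins>
          <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
          <plugin id="drmaa" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
      </plugins>
      <handlers default="handlers">
          <!-- every handler carrying the "handlers" tag is eligible for the default -->
          <handler id="handler1" tags="handlers"/>
          <handler id="handler2" tags="handlers"/>
      </handlers>
      <destinations default="local">
          <destination id="local" runner="local"/>
          <destination id="sge_default" runner="drmaa">
              <param id="nativeSpecification">-q all.q -S /bin/bash</param>
          </destination>
      </destinations>
  </job_conf>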
Iyad Kandalaft
Microbial Biodiversity Bioinformatics
Agriculture and Agri-Food Canada | Agriculture et Agroalimentaire Canada
960 Carling Ave.| 960 Ave. Carling
Ottawa, ON| Ottawa (ON) K1A 0C6
E-mail Address / Adresse courriel  iyad.kandal...@agr.gc.ca
Telephone | Téléphone 613-759-1228
Facsimile | Télécopieur 613-759-1701
Teletypewriter | Téléimprimeur 613-773-2600
Government of Canada | Gouvernement du Canada 



-Original Message-
From: galaxy-dev-boun...@lists.bx.psu.edu 
[mailto:galaxy-dev-boun...@lists.bx.psu.edu] On Behalf Of Shrum, Donald C
Sent: Tuesday, July 22, 2014 10:24 AM
To: galaxy-dev@lists.bx.psu.edu
Subject: [galaxy-dev] handlers

I could use a little bit of help in making some changes to our galaxy server.  

I'm in the process of setting up/testing a production galaxy server for our 
research computing center.  Our server is setup with an apache proxy, ldap 
authentication and jobs will run as the logged in user.

The server running galaxy has 24 cores.  While most of the jobs submitted will 
go to either our HPC or the condor cluster there are some jobs that are small 
and should run on the galaxy server itself.
I was planning to set up a single web handler and 23 job handlers.  I don't 
expect the web server to get bogged down, especially since I have apache 
serving as a proxy.
I expect a smaller (<100) number of users submitting many jobs.  

I just went with 23 handlers for no good reason other than the server has 24 
cores.  Perhaps there is a better way to discern the optimum number of job 
handlers.

I'd like jobs submitted to galaxy to go either to our HPC, Condor, or one of 
the 23 local workers.  Can galaxy effectively load balance itself in this way?
Does the configuration below accomplish this?  

universe_wsgi.ini: 
[server:handler1]
use = egg:Paste#http
port = 8081
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 10
.
.
[server:handler23]



job_conf.xml:

[handler and destination definitions stripped by the archive rendering]
In my destinations.py script I point tools to the appropriate destination:
if tool_id.startswith('upload1'):
return JobDestination(id="local", runner="local")
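(Expanded into a minimal, self-contained sketch of such a dynamic rule; the 
function name and the fallback destination id are placeholders, and the 
assumption is that the rule only receives the tool id:)

  from galaxy.jobs import JobDestination

  def dynamic_destination(tool_id):
      # keep small jobs such as uploads on the local runner
      if tool_id.startswith('upload1'):
          return JobDestination(id="local", runner="local")
      # everything else goes to a cluster destination defined in job_conf.xml
      return JobDestination(id="cluster_default", runner="drmaa")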






Re: [galaxy-dev] Schema downgraded but galaxy won't launch

2014-07-22 Thread Dannon Baker
Hi Iyad,

`find  -name '*.pyc' -delete`

Should clean up compiled python files allowing you to run again.

-Dannon


On Tue, Jul 22, 2014 at 11:16 AM, Kandalaft, Iyad 
wrote:

>  Hello everyone
>
>
>
> I tried a galaxy update on my development instance of galaxy. It failed
> due to a schema change coupled with a mysql version change.  Don’t worry
> about why it failed – that has been resolved.
>
> I have a backup copy of the database that I restored, which was based on
> schema 118.  I issued a “hg update **revision**” to revert back to a code
> base using the 118 schema.  I verified that the model folder now only
> contains up to 118_py.  When I try to start galaxy, it tells me that
> the code requires schema 120 and that I should upgrade using db_manage.sh.
> I tried running “db_manage.sh downgrade 118” but since the database is
> already v118, it just exits without any output. What am I doing wrong?
>
>
>
> Regards,
>
> Iyad Kandalaft
>
>
>
> Bioinformatics Application Developer
>
> Agriculture and Agri-Food Canada | Agriculture et Agroalimentaire Canada
>
> KW Neatby Bldg | éd. KW Neatby
>
> 960 Carling Ave| 960, avenue Carling
>
> Ottawa, ON | Ottawa (ON) K1A 0C6
>
> E-mail Address / Adresse courriel: iyad.kandal...@agr.gc.ca
>
> Telephone | Téléphone 613- 759-1228
>
> Facsimile | Télécopieur 613-759-1701
>
> Government of Canada | Gouvernement du Canada
>
>
>
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
>   http://lists.bx.psu.edu/
>
> To search Galaxy mailing lists use the unified search at:
>   http://galaxyproject.org/search/mailinglists/
>

[galaxy-dev] Schema downgraded but galaxy won't launch

2014-07-22 Thread Kandalaft, Iyad
Hello everyone

I tried a galaxy update on my development instance of galaxy. It failed due to 
a schema change coupled with a mysql version change.  Don't worry about why it 
failed - that has been resolved.
I have a backup copy of the database that I restored, which was based on schema 
118.  I issued a "hg update *revision*" to revert back to a code base using the 
118 schema.  I verified that the model folder now only contains up to 
118_py.  When I try to start galaxy, it tells me that the code requires 
schema 120 and that I should upgrade using db_manage.sh.  I tried running 
"db_manage.sh downgrade 118" but since the database is already v118, it just 
exits without any output. What am I doing wrong?

Regards,
Iyad Kandalaft

Bioinformatics Application Developer
Agriculture and Agri-Food Canada | Agriculture et Agroalimentaire Canada
KW Neatby Bldg | éd. KW Neatby
960 Carling Ave| 960, avenue Carling
Ottawa, ON | Ottawa (ON) K1A 0C6
E-mail Address / Adresse courriel: 
iyad.kandal...@agr.gc.ca
Telephone | Téléphone 613- 759-1228
Facsimile | Télécopieur 613-759-1701
Government of Canada | Gouvernement du Canada


Re: [galaxy-dev] Basic Questions

2014-07-22 Thread Peter Cock
Set yourself as an administrator, and you can import the files
from disk (and link to them if you wish to avoid a copy) as part
of a data library. See:

https://wiki.galaxyproject.org/Admin/DataLibraries/UploadingLibraryFiles

Peter

On Tue, Jul 22, 2014 at 3:52 PM, Mark Lindsay  wrote:
> Apologies if this sounds like a basic question or if I am enquiring of the 
> incorrect list.
>
> I have just had a local instance of galaxy installed on my MacPro.
>
> Could somebody inform me of the best options for loading large BAM files 
> (5Gb) from the same hard drive into this instance of Galaxy. It states that 
> it is not possible to load files > 2Gb and that you must use either a URL or 
> FTP.
>
> My scripting knowledge is virtually non-existent… although I have access to 
> people who do.
>
> Cheers
>
> Mark
>
>
>
>
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
>   http://lists.bx.psu.edu/
>
> To search Galaxy mailing lists use the unified search at:
>   http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Basic Questions

2014-07-22 Thread Dannon Baker
Hi Mark,

You may want to look into using Data Libraries for large imports like this
-- it'll allow you to import the files into Galaxy without making an extra
copy.  See
https://wiki.galaxyproject.org/Admin/DataLibraries/UploadingLibraryFiles
for more information -- specifically the 'allow_library_path_paste' portion.
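(For reference, a minimal sketch of the relevant setting in a 2014-era 
universe_wsgi.ini; enabling it and restarting Galaxy exposes the "upload files 
from filesystem paths" option, including the link-without-copying choice, for 
library datasets:)

  # universe_wsgi.ini
  allow_library_path_paste = True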

-Dannon


On Tue, Jul 22, 2014 at 10:52 AM, Mark Lindsay 
wrote:

> Apologies if this sounds like a basic question or if I am enquiring of the
> incorrect list.
>
> I have just had a local instance of galaxy installed on my MacPro.
>
> Could somebody inform me of the best options for loading large BAM files
> (5Gb) from the same hard drive into this instance of Galaxy. It states
> that it is not possible to load files > 2Gb and that you must use either a
> URL or FTP.
>
> My scripting knowledge is virtually non-existent… although I have access
> to people who do.
>
> Cheers
>
> Mark
>
>
>
>
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
>   http://lists.bx.psu.edu/
>
> To search Galaxy mailing lists use the unified search at:
>   http://galaxyproject.org/search/mailinglists/
>

[galaxy-dev] Basic Questions

2014-07-22 Thread Mark Lindsay
Apologies if this sounds like a basic question or if I am enquiring of the 
incorrect list.

I have just had a local instance of galaxy installed on my MacPro.

Could somebody inform me of the best options for loading large BAM files (5Gb) 
from the same hard drive into this instance of Galaxy. It states that it is 
not possible to load files > 2Gb and that you must use either a URL or FTP.

My scripting knowledge is virtually non-existent… although I have access to 
people who do.

Cheers

Mark






Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread Aaron Petkau
That's awesome Björn and Eric.  And, I'll also have to go through your
Travis CI document Peter.  It looks really cool.

Aaron


On Tue, Jul 22, 2014 at 9:35 AM, Björn Grüning 
wrote:

> :) great I like it!
> Will do it shortly!
>
> Am 22.07.2014 15:36, schrieb Eric Rasche:
>
>  Hi Björn,
>>
>> 22.07.2014, 14:26, "Björn Grüning" :
>>
>>>   Hi Eric,
>>>
That sounds like a pretty good idea.  If there was a pre-built image
>>available for whatever release I wanted to test against I could
>> just
>>
>cache
>
>>it and (hopefully) get my tests running a bit faster.  I'm not sure
>>
>if
>
>>anyone else is already doing this?
>>
>>Also, I remember there being mentioned pre-building docker images
>> for
>>
>each
>
>>release of Galaxy, which would accomplish something similar, but
>> I'm
>>
>not
>
>>really sure how that's being handled.  I think Björn's Docker image
>>
>is kept
>
>>up to date with Galaxy stable each time it's built
>>
>https://github.com/bgruening/docker-recipes/blob/master/
> galaxy/Dockerfile#L51.
>
>>So, this could be handled by modifying his Dockerfile to build
>> Galaxy
>>
>at
>
>>whatever tagged release you want to test against.
>>
>I will try hard to create with every Galaxy stable release a new
> Galaxy
>
>docker image. You can create a github branch with a specific tag,
> that
>will end up as a new tagged version of the main Galaxy Docker image.
>
Try hard to create? No no, what can we do to automatically create
 these? I'm not so familiar with how one might build a galaxy release
 specific docker image, but if you can provide a generalized process, let's
 stick it in a CI server/cron job somewhere and never worry about it again!

>>>   The hardest part is to remind myself ;)
>>>   The procedure is:
>>>
>>>   1) create a new branch for the galaxy-docker github account
>>>   2) change the version string in the git-clone command in the Dockerfile
>>>   3) login into the docker-index site and re-associate a new tag to the
>>>   new branch ... click the build button
>>>
>>>   I could simplify that a little bit if the galaxy-docker image has its
>>>   own repository. The docker-index has a build-on-push feature.
>>>   But currently every image (all branches) is built again. There is no
>>>   way to only trigger one branch build. Until that is fixed in the
>>>   docker-index I will do it manually.
>>>
>>>   So you see any improvements in that setup? Let me know!
>>>
>>
>> Naturally! 1 and 2 could be automated out with a script. 3 could probably
>> be fixed with a script, but that requires parsing pages/crafting cURL
>> queries and that's less pleasant.
>>
>> Let's have a new repository just for galaxy-docker images. I'll write up
>> a script to check for updates and create+push branches as needed, we can
>> place this in a cron/CI job and have it email you whenever it's run to say
>> "hey, associate the branch/trigger a build"
>>
>>Cheers,
>>>   Bjoern
>>>
One downside to docker is that you need to get it installed on your
>>
>CI
>
>>server, which may or may not be possible (needs a very recent
>> kernel
>>
>for
>
>>example).
>>
>So true. SL7 for the win! :)
>
>Docker, Docker, Docker!
>Bjoern
>
>>Aaron
>>
>>On Mon, Jul 21, 2014 at 3:12 PM, Eric Rasche <
>> rasche.e...@yandex.ru>
>>
>wrote:
>
>>Hi Aaron,
>>>
>>>Yeah, absolutely understandable. I want my tools tested early and
>>>
>>often.
>
>>I abuse my CI server for everything, especially for building and
>>>packaging software. In this case I was imagining that I might have
>>>
>>it
>
>>produce an archive on every tagged release, as well as producing a
>>>"daily" archive. All of these would be available on some ftp/http
>>>
>>server
>
>>somewhere with symlinks for latest archives (e.g.
>>>galaxy-latest-release.tgz and galaxy-latest-daily.tgz). Would that
>>>
>>work
>
>>for your use case as well?
>>>
>>>Eric
>>>
>>>On 07/21/2014 03:02 PM, Aaron Petkau wrote:
>>>
Hello Eric,

You're right about that, downloading the archive, installing all
 the

>>>eggs,
>
>>and then updating the database takes a bit of time (especially if

>>>you're
>
>>like me and like re-running tests on nearly every change you make

>>>:P).
>
>>I think it would be cool to have a pre-packaged Galaxy for

>>>integration
>
>>testing which is quick to se

Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread Björn Grüning

:) great I like it!
Will do it shortly!

On 22.07.2014 15:36, Eric Rasche wrote:

Hi Björn,

22.07.2014, 14:26, "Björn Grüning" :

  Hi Eric,

   That sounds like a pretty good idea.  If there was a pre-built image
   available for whatever release I wanted to test against I could just

   cache

   it and (hopefully) get my tests running a bit faster.  I'm not sure

   if

   anyone else is already doing this?

   Also, I remember there being mentioned pre-building docker images for

   each

   release of Galaxy, which would accomplish something similar, but I'm

   not

   really sure how that's being handled.  I think Björn's Docker image

   is kept

   up to date with Galaxy stable each time it's built

   
https://github.com/bgruening/docker-recipes/blob/master/galaxy/Dockerfile#L51.

   So, this could be handled by modifying his Dockerfile to build Galaxy

   at

   whatever tagged release you want to test against.

   I will try hard to create with every Galaxy stable release a new Galaxy

   docker image. You can create a github branch with a specific tag, that
   will end up as a new tagged version of the main Galaxy Docker image.

   Try hard to create? No no, what can we do to automatically create these? I'm 
not so familiar with how one might build a galaxy release specific docker 
image, but if you can provide a generalized process, let's stick it in a CI 
server/cron job somewhere and never worry about it again!

  The hardest part is to remind myself ;)
  The procedure is:

  1) create a new branch for the galaxy-docker github account
  2) change the version string in the git-clone command in the Dockerfile
  3) login into the docker-index site and re-associate a new tag to the
  new branch ... click the build button

  I could simplify that a little bit if the galaxy-docker image has its
  own repository. The docker-index has a build-on-push feature.
  But currently every image (all branches) is built again. There is no
  way to only trigger one branch build. Until that is fixed in the
  docker-index I will do it manually.

  So you see any improvements in that setup? Let me know!


Naturally! 1 and 2 could be automated out with a script. 3 could probably be 
fixed with a script, but that requires parsing pages/crafting cURL queries and 
that's less pleasant.

Let's have a new repository just for galaxy-docker images. I'll write up a script to 
check for updates and create+push branches as needed, we can place this in a cron/CI job 
and have it email you whenever it's run to say "hey, associate the branch/trigger a 
build"


  Cheers,
  Bjoern

   One downside to docker is that you need to get it installed on your

   CI

   server, which may or may not be possible (needs a very recent kernel

   for

   example).

   So true. SL7 for the win! :)

   Docker, Docker, Docker!
   Bjoern

   Aaron

   On Mon, Jul 21, 2014 at 3:12 PM, Eric Rasche 

   wrote:

   Hi Aaron,

   Yeah, absolutely understandable. I want my tools tested early and

   often.

   I abuse my CI server for everything, especially for building and
   packaging software. In this case I was imagining that I might have

   it

   produce an archive on every tagged release, as well as producing a
   "daily" archive. All of these would be available on some ftp/http

   server

   somewhere with symlinks for latest archives (e.g.
   galaxy-latest-release.tgz and galaxy-latest-daily.tgz). Would that

   work

   for your use case as well?

   Eric

   On 07/21/2014 03:02 PM, Aaron Petkau wrote:

   Hello Eric,

   You're right about that, downloading the archive, installing all the

   eggs,

   and then updating the database takes a bit of time (especially if

   you're

   like me and like re-running tests on nearly every change you make

   :P).

   I think it would be cool to have a pre-packaged Galaxy for

   integration

   testing which is quick to setup.  I once thought of downloading

   Björn's

   Docker image from Galaxy Bootstrap and using it that way, but

   thinking

   is about as far as I got with that one.  One problem I could see is

   it

   would have to be re-built on every release of Galaxy you want to

   test

   against (whereas mercurial cloning/pulling makes sure you're always

   up

   to date with the latest code).

   Aaron

   On Mon, Jul 21, 2014 at 2:45 PM, Eric Rasche mailto:rasche.e...@yandex.ru>> wrote:

 Hi Aaron,

 Good points, I was considering using galaxy bootstrap. This is
 mostly for the CI folk who want to download an archive, unpack

   it,

 and be ready to install/test their tools. The hg clone and

   egg/db

 steps seem like unnecessary overhead for this specific use

   case.

 Cheers,

 Eric

   ___
   Please keep all replies on the list by using "reply all"
   in your mail client.  To manage your subscriptions to this
   and other Galaxy lists, please use the int

[galaxy-dev] handlers

2014-07-22 Thread Shrum, Donald C
I could use a little bit of help in making some changes to our galaxy server.  

I'm in the process of setting up/testing a production galaxy server for our 
research computing center.  Our server is set up with an Apache proxy, LDAP 
authentication, and jobs will run as the logged-in user.

The server running Galaxy has 24 cores.  While most of the jobs submitted will 
go to either our HPC or the Condor cluster, there are some jobs that are small 
and should run on the Galaxy server itself.
I was planning to set up a single web handler and 23 job handlers.  I don't 
expect the web server to get bogged down, especially since I have apache 
serving as a proxy.
I expect a smaller (<100) number of users submitting many jobs.  

I just went with 23 handlers for no good reason other than the server has 24 
cores.  Perhaps there is a better way to discern the optimum number of job 
handlers.

I'd like jobs submitted to galaxy to go either to our HPC, Condor, or one of 
the 23 local workers.  Can galaxy effectively load balance itself in this way?
Does the configuration below accomplish this?  

universe_wsgi.ini: 
[server:handler1]
use = egg:Paste#http
port = 8081
host = 127.0.0.1
use_threadpool = true
threadpool_workers = 10
.
.
[server:handler23]



job_conf.xml:







...





default
python






In my destinations.py script I point tools to the appropriate destination:
if tool_id.startswith('upload1'):
return JobDestination(id="local", runner="local")
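
(The job_conf.xml above lost its XML tags in the list archive; only the
"default" and "python" fragments survive. A rough sketch of that kind of
setup -- local, PBS and Condor runners plus a dynamic default destination --
is below. The plugin classes are the standard ones, but the ids, handler
comment and rule function name are illustrative, not the original file.)

<?xml version="1.0"?>
<job_conf>
    <plugins workers="4">
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner"/>
        <plugin id="pbs" type="runner" load="galaxy.jobs.runners.pbs:PBSJobRunner"/>
        <plugin id="condor" type="runner" load="galaxy.jobs.runners.condor:CondorJobRunner"/>
    </plugins>
    <handlers default="handlers">
        <handler id="handler1" tags="handlers"/>
        <!-- ... handler2 through handler22, matching the [server:handlerN] sections ... -->
        <handler id="handler23" tags="handlers"/>
    </handlers>
    <destinations default="dynamic">
        <!-- every job first hits the dynamic rule, which picks local/pbs/condor -->
        <destination id="dynamic" runner="dynamic">
            <param id="type">python</param>
            <param id="function">destinations</param>
        </destination>
        <destination id="local" runner="local"/>
        <destination id="pbs_cluster" runner="pbs"/>
        <destination id="condor_cluster" runner="condor"/>
    </destinations>
</job_conf>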




___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread Eric Rasche
Hi Björn,

22.07.2014, 14:26, "Björn Grüning" :
>  Hi Eric,
   That sounds like a pretty good idea.  If there was a pre-built image
   available for whatever release I wanted to test against I could just
>>>   cache
   it and (hopefully) get my tests running a bit faster.  I'm not sure
>>>   if
   anyone else is already doing this?

   Also, I remember there being mentioned pre-building docker images for
>>>   each
   release of Galaxy, which would accomplish something similar, but I'm
>>>   not
   really sure how that's being handled.  I think Björn's Docker image
>>>   is kept
   up to date with Galaxy stable each time it's built
>>>   
>>> https://github.com/bgruening/docker-recipes/blob/master/galaxy/Dockerfile#L51.
   So, this could be handled by modifying his Dockerfile to build Galaxy
>>>   at
   whatever tagged release you want to test against.
>>>   I will try hard to create with every Galaxy stable release a new Galaxy
>>>
>>>   docker image. You can create a github branch with a specific tag, that
>>>   will end up as a new tagged version of the main Galaxy Docker image.
>>   Try hard to create? No no, what can we do to automatically create these? 
>> I'm not so familiar with how one might build a galaxy release specific 
>> docker image, but if you can provide a generalized process, let's stick it 
>> in a CI server/cron job somewhere and never worry about it again!
>  The hardest part is to remind myself ;)
>  The procedure is:
>
>  1) create a new branch for the galaxy-docker github account
>  2) change the version string in the git-clone command in the Dockerfile
>  3) login into the docker-index site and re-associate a new tag to the
>  new branch ... click the build button
>
>  I could simplify that a little bit if the galaxy-docker image has its
>  own repository. The docker-index has a build-on-push feature.
>  But currently every image (all branches) is built again. There is no
>  way to only trigger one branch build. Until that is fixed in the
>  docker-index I will do it manually.
>
>  So you see any improvements in that setup? Let me know!

Naturally! 1 and 2 could be automated out with a script. 3 could probably be 
fixed with a script, but that requires parsing pages/crafting cURL queries and 
that's less pleasant.

Let's have a new repository just for galaxy-docker images. I'll write up a 
script to check for updates and create+push branches as needed, we can place 
this in a cron/CI job and have it email you whenever it's run to say "hey, 
associate the branch/trigger a build"
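
A sketch of what such a script could look like (the repository URL is the one
linked earlier in the thread; the tag handling, sed pattern and commit/push
details are placeholders, not an existing script):

#!/bin/sh
# Create one branch per Galaxy release in the docker-recipes repo, with the
# Dockerfile's clone step pinned to that release (steps 1 and 2 above).
# Step 3 -- associating the tag on the docker-index -- is still manual.
set -e
RELEASE="$1"   # e.g. release_2014.06.02, the Galaxy tag to pin to
OLD_TAG="$2"   # the version string currently used in galaxy/Dockerfile

git clone https://github.com/bgruening/docker-recipes.git
cd docker-recipes
git checkout -b "$RELEASE"
sed -i "s/$OLD_TAG/$RELEASE/" galaxy/Dockerfile
git commit -am "Pin Galaxy Docker image to $RELEASE"
git push origin "$RELEASE"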

>  Cheers,
>  Bjoern
   One downside to docker is that you need to get it installed on your
>>>   CI
   server, which may or may not be possible (needs a very recent kernel
>>>   for
   example).
>>>   So true. SL7 for the win! :)
>>>
>>>   Docker, Docker, Docker!
>>>   Bjoern
   Aaron

   On Mon, Jul 21, 2014 at 3:12 PM, Eric Rasche 
>>>   wrote:
>   Hi Aaron,
>
>   Yeah, absolutely understandable. I want my tools tested early and
>>>   often.
>   I abuse my CI server for everything, especially for building and
>   packaging software. In this case I was imagining that I might have
>>>   it
>   produce an archive on every tagged release, as well as producing a
>   "daily" archive. All of these would be available on some ftp/http
>>>   server
>   somewhere with symlinks for latest archives (e.g.
>   galaxy-latest-release.tgz and galaxy-latest-daily.tgz). Would that
>>>   work
>   for your use case as well?
>
>   Eric
>
>   On 07/21/2014 03:02 PM, Aaron Petkau wrote:
>>   Hello Eric,
>>
>>   You're right about that, downloading the archive, installing all the
>>>   eggs,
>>   and then updating the database takes a bit of time (especially if
>>>   you're
>>   like me and like re-running tests on nearly every change you make
>>>   :P).
>>   I think it would be cool to have a pre-packaged Galaxy for
>>>   integration
>>   testing which is quick to setup.  I once thought of downloading
>>>   Björn's
>>   Docker image from Galaxy Bootstrap and using it that way, but
>>>   thinking
>>   is about as far as I got with that one.  One problem I could see is
>>>   it
>>   would have to be re-built on every release of Galaxy you want to
>>>   test
>>   against (whereas mercurial cloning/pulling makes sure you're always
>>>   up
>>   to date with the latest code).
>>
>>   Aaron
>>
>>   On Mon, Jul 21, 2014 at 2:45 PM, Eric Rasche >   > wrote:
>>
>> Hi Aaron,
>>
>> Good points, I was considering using galaxy bootstrap. This is
>> mostly for the CI folk who want to download an archive, unpack
>>>   it,
>> and be ready to install/test their tools. The hg clone and
>>>   egg/db
>> steps seem like unnecessary overhead for this specific use
>>>

Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread Björn Grüning

Hi Eric,


That sounds like a pretty good idea.  If there was a pre-built image
available for whatever release I wanted to test against I could just

cache

it and (hopefully) get my tests running a bit faster.  I'm not sure

if

anyone else is already doing this?

Also, I remember there being mentioned pre-building docker images for

each

release of Galaxy, which would accomplish something similar, but I'm

not

really sure how that's being handled.  I think Björn's Docker image

is kept

up to date with Galaxy stable each time it's built


https://github.com/bgruening/docker-recipes/blob/master/galaxy/Dockerfile#L51.

So, this could be handled by modifying his Dockerfile to build Galaxy

at

whatever tagged release you want to test against.


I will try hard to create with every Galaxy stable release a new Galaxy

docker image. You can create a github branch with a specific tag, that
will end up as a new tagged version of the main Galaxy Docker image.


Try hard to create? No no, what can we do to automatically create these? I'm 
not so familiar with how one might build a galaxy release specific docker 
image, but if you can provide a generalized process, let's stick it in a CI 
server/cron job somewhere and never worry about it again!


The hardest part is to remind myself ;)
The procedure is:

1) create a new branch for the galaxy-docker github account
2) change the version string in the git-clone command in the Dockerfile
3) login into the docker-index site and re-associate a new tag to the 
new branch ... click the build button


I could simplify that a little bit if the galaxy-docker image has its 
own repository. The docker-index has a build-on-push feature.
But currently every image (all branches) is built again. There is no 
way to only trigger one branch build. Until that is fixed in the 
docker-index I will do it manually.


So you see any improvements in that setup? Let me know!
Cheers,
Bjoern


One downside to docker is that you need to get it installed on your

CI

server, which may or may not be possible (needs a very recent kernel

for

example).


So true. SL7 for the win! :)

Docker, Docker, Docker!
Bjoern


Aaron


On Mon, Jul 21, 2014 at 3:12 PM, Eric Rasche 

wrote:



Hi Aaron,

Yeah, absolutely understandable. I want my tools tested early and

often.


I abuse my CI server for everything, especially for building and
packaging software. In this case I was imagining that I might have

it

produce an archive on every tagged release, as well as producing a
"daily" archive. All of these would be available on some ftp/http

server

somewhere with symlinks for latest archives (e.g.
galaxy-latest-release.tgz and galaxy-latest-daily.tgz). Would that

work

for your use case as well?

Eric

On 07/21/2014 03:02 PM, Aaron Petkau wrote:

Hello Eric,

You're right about that, downloading the archive, installing all the

eggs,

and then updating the database takes a bit of time (especially if

you're

like me and like re-running tests on nearly every change you make

:P).

I think it would be cool to have a pre-packaged Galaxy for

integration

testing which is quick to setup.  I once thought of downloading

Björn's

Docker image from Galaxy Bootstrap and using it that way, but

thinking

is about as far as I got with that one.  One problem I could see is

it

would have to be re-built on every release of Galaxy you want to

test

against (whereas mercurial cloning/pulling makes sure you're always

up

to date with the latest code).

Aaron


On Mon, Jul 21, 2014 at 2:45 PM, Eric Rasche mailto:rasche.e...@yandex.ru>> wrote:

  Hi Aaron,

  Good points, I was considering using galaxy bootstrap. This is
  mostly for the CI folk who want to download an archive, unpack

it,

  and be ready to install/test their tools. The hg clone and

egg/db

  steps seem like unnecessary overhead for this specific use

case.


  Cheers,

  Eric








___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
http://galaxyproject.org/search/mailinglists/




___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread Peter Cock
On Tue, Jul 22, 2014 at 1:15 PM, Eric Rasche  wrote:
> Hi Peter,
>
> On July 22, 2014 3:15:41 AM CDT, Peter Cock  wrote:
>>
>>Given how close you can get now for minimal effort,
>>this seems unnecessary.
>>
>>http://blastedbio.blogspot.co.uk/2013/09/using-travis-ci-for-testing-galaxy-tools.html
>>
>>My TravisCI setup fetches the latest Galaxy as
>>a tar ball (from a GitHub mirror as it was faster than
>>a git clone which was faster than getting the tar ball
>>from BitBucket, which in turn was faster than using
>>hg clone),
>
> Yes, that post was at least part of the thinking behind this.

:)

>> .., and a pre-migrated SQLite database
>>(using a bit of Galaxy functionality originally with
>>$GALAXY_TEST_DB_TEMPLATE added to speed
>>up running the functional tests).

Apologies for grammatical error - I pasted in the environment
variable at the wrong point in the sentence.

> I know I've seen that used but was never able to get that
> working in practice (then again I didn't try that hard). If
> that's a working/usable feature, then that is already the
> majority of setup time.

Yes, the creation of the test-database and all the migrations
was an obvious low-hanging fruit when we were looking at
making running the tool functional tests faster - although
originally in the context of running the tests on a local
development Galaxy instance.

As to using this in practice, currently my TravisCI setup has:

export 
GALAXY_TEST_DB_TEMPLATE=https://github.com/jmchilton/galaxy-downloads/raw/master/db_gx_rev_0117.sqlite

I also added that line at the start of my local copy of script
run_functional_tests.sh to benefit from this while doing
development. That should be all there is to it (but from
memory, this is only for use with the SQLite backend).
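
Concretely, for a local run that amounts to something like this (a sketch; the
snapshot URL is the one above and will need replacing as the schema moves on,
and the tool id is a placeholder):

export GALAXY_TEST_DB_TEMPLATE=https://github.com/jmchilton/galaxy-downloads/raw/master/db_gx_rev_0117.sqlite
sh run_functional_tests.sh -id my_tool_id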

John - could you add a current schema snapshot to
https://github.com/jmchilton/galaxy-downloads/ please?

>>Note this does not cache the eggs and all the other
>>side effects of the first run like creating config files,
>>so there is room for some speed up.
>
> Eggs would be nice but not the biggest thing in the world.

Right. I do like your idea of automatically generated
cutting-edge or per-stable-release Docker images,
though (even if I have no personal need for them at
the moment).

Regards,

Peter
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread Eric Rasche
Hi Peter,

On July 22, 2014 3:15:41 AM CDT, Peter Cock  wrote:
>On Mon, Jul 21, 2014 at 6:51 PM, Eric Rasche 
>wrote:
>> Currently the checkout options consist of hg clones, and archives
>that
>> mercurial produces.
>>
>> Having pulled or cloned galaxy a few times lately, I'm wondering if
>anyone
>> would have a use for a once-run galaxy instance in an archive? I.e.,
>I'd
>> clone, run once to grab eggs and do the db migration, then re-tar
>result and
>> store online. Might cut down on build/test times for those who are
>using
>> travis or other CIs. Thoughts/opinions?
>
>Hi Eric,
>
>Given how close you can get now for minimal effort,
>this seems unnecessary.
>
>http://blastedbio.blogspot.co.uk/2013/09/using-travis-ci-for-testing-galaxy-tools.html
>
>My TravisCI setup fetches the latest Galaxy as
>a tar ball (from a GitHub mirror as it was faster than
>a git clone which was faster than getting the tar ball
>from BitBucket, which in turn was faster than using

Yes, that post was at least part of the thinking behind this.

>hg clone), and a pre-migrated SQLite database
>(using a bit of Galaxy functionality originally with
>$GALAXY_TEST_DB_TEMPLATE added to speed
>up running the functional tests).

I know I've seen that used but was never able to get that working in practice 
(then again I didn't try that hard). If that's a working/usable feature, then 
that is already the majority of setup time. Eggs would be nice but not the 
biggest thing in the world.

>Note this does not cache the eggs and all the other
>side effects of the first run like creating config files,
>so there is room for some speed up.
>
>Regards,
>
>Peter

Cheers,
Eric
-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread Eric Rasche
Hi Björn,

On July 22, 2014 3:17:38 AM CDT, "Björn Grüning"  
wrote:
>Hi Aaron and Eric,
>
>On 21.07.2014 22:58, Aaron Petkau wrote:
>> Hello Eric,
>>
>> That sounds like a pretty good idea.  If there was a pre-built image
>> available for whatever release I wanted to test against I could just
>cache
>> it and (hopefully) get my tests running a bit faster.  I'm not sure
>if
>> anyone else is already doing this?
>>
>> Also, I remember there being mentioned pre-building docker images for
>each
>> release of Galaxy, which would accomplish something similar, but I'm
>not
>> really sure how that's being handled.  I think Björn's Docker image
>is kept
>> up to date with Galaxy stable each time it's built
>>
>https://github.com/bgruening/docker-recipes/blob/master/galaxy/Dockerfile#L51.
>> So, this could be handled by modifying his Dockerfile to build Galaxy
>at
>> whatever tagged release you want to test against.
>
>I will try hard to create with every Galaxy stable release a new Galaxy
>
>docker image. You can create a github branch with a specific tag, that 
>will end up as a new tagged version of the main Galaxy Docker image.

Try hard to create? No no, what can we do to automatically create these? I'm 
not so familiar with how one might build a galaxy release specific docker 
image, but if you can provide a generalized process, let's stick it in a CI 
server/cron job somewhere and never worry about it again! 

>> One downside to docker is that you need to get it installed on your
>CI
>> server, which may or may not be possible (needs a very recent kernel
>for
>> example).
>
>So true. SL7 for the win! :)
>
>Docker, Docker, Docker!
>Bjoern
>
>> Aaron
>>
>>
>> On Mon, Jul 21, 2014 at 3:12 PM, Eric Rasche 
>wrote:
>>
>>> Hi Aaron,
>>>
>>> Yeah, absolutely understandable. I want my tools tested early and
>often.
>>>
>>> I abuse my CI server for everything, especially for building and
>>> packaging software. In this case I was imagining that I might have
>it
>>> produce an archive on every tagged release, as well as producing a
>>> "daily" archive. All of these would be available on some ftp/http
>server
>>> somewhere with symlinks for latest archives (e.g.
>>> galaxy-latest-release.tgz and galaxy-latest-daily.tgz). Would that
>work
>>> for your use case as well?
>>>
>>> Eric
>>>
>>> On 07/21/2014 03:02 PM, Aaron Petkau wrote:
 Hello Eric,

 You're right about that, downloading the archive, installing all the
>eggs,
 and then updating the database takes a bit of time (especially if
>you're
 like me and like re-running tests on nearly every change you make
>:P).
 I think it would be cool to have a pre-packaged Galaxy for
>integration
 testing which is quick to setup.  I once thought of downloading
>Björn's
 Docker image from Galaxy Bootstrap and using it that way, but
>thinking
 is about as far as I got with that one.  One problem I could see is
>it
 would have to be re-built on every release of Galaxy you want to
>test
 against (whereas mercurial cloning/pulling makes sure you're always
>up
 to date with the latest code).

 Aaron


 On Mon, Jul 21, 2014 at 2:45 PM, Eric Rasche >>> > wrote:

  Hi Aaron,

  Good points, I was considering using galaxy bootstrap. This is
  mostly for the CI folk who want to download an archive, unpack
>it,
  and be ready to install/test their tools. The hg clone and
>egg/db
  steps seem like unnecessary overhead for this specific use
>case.

  Cheers,

  Eric


>>>
>>
>>
>>
>> ___
>> Please keep all replies on the list by using "reply all"
>> in your mail client.  To manage your subscriptions to this
>> and other Galaxy lists, please use the interface at:
>>http://lists.bx.psu.edu/
>>
>> To search Galaxy mailing lists use the unified search at:
>>http://galaxyproject.org/search/mailinglists/
>>

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread Björn Grüning

Hi Aaron and Eric,

On 21.07.2014 22:58, Aaron Petkau wrote:

Hello Eric,

That sounds like a pretty good idea.  If there was a pre-built image
available for whatever release I wanted to test against I could just cache
it and (hopefully) get my tests running a bit faster.  I'm not sure if
anyone else is already doing this?

Also, I remember there being mentioned pre-building docker images for each
release of Galaxy, which would accomplish something similar, but I'm not
really sure how that's being handled.  I think Björn's Docker image is kept
up to date with Galaxy stable each time it's built
https://github.com/bgruening/docker-recipes/blob/master/galaxy/Dockerfile#L51.
So, this could be handled by modifying his Dockerfile to build Galaxy at
whatever tagged release you want to test against.


I will try hard to create a new Galaxy Docker image with every Galaxy stable 
release. You can create a github branch with a specific tag, that 
will end up as a new tagged version of the main Galaxy Docker image.



One downside to docker is that you need to get it installed on your CI
server, which may or may not be possible (needs a very recent kernel for
example).


So true. SL7 for the win! :)

Docker, Docker, Docker!
Bjoern


Aaron


On Mon, Jul 21, 2014 at 3:12 PM, Eric Rasche  wrote:


Hi Aaron,

Yeah, absolutely understandable. I want my tools tested early and often.

I abuse my CI server for everything, especially for building and
packaging software. In this case I was imagining that I might have it
produce an archive on every tagged release, as well as producing a
"daily" archive. All of these would be available on some ftp/http server
somewhere with symlinks for latest archives (e.g.
galaxy-latest-release.tgz and galaxy-latest-daily.tgz). Would that work
for your use case as well?

Eric

On 07/21/2014 03:02 PM, Aaron Petkau wrote:

Hello Eric,

You're right about that, downloading the archive, installing all the eggs,
and then updating the database takes a bit of time (especially if you're
like me and like re-running tests on nearly every change you make :P).
I think it would be cool to have a pre-packaged Galaxy for integration
testing which is quick to setup.  I once thought of downloading Björn's
Docker image from Galaxy Bootstrap and using it that way, but thinking
is about as far as I got with that one.  One problem I could see is it
would have to be re-built on every release of Galaxy you want to test
against (whereas mercurial cloning/pulling makes sure you're always up
to date with the latest code).

Aaron


On Mon, Jul 21, 2014 at 2:45 PM, Eric Rasche mailto:rasche.e...@yandex.ru>> wrote:

 Hi Aaron,

 Good points, I was considering using galaxy bootstrap. This is
 mostly for the CI folk who want to download an archive, unpack it,
 and be ready to install/test their tools. The hg clone and egg/db
 steps seem like unnecessary overhead for this specific use case.

 Cheers,

 Eric








___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
   http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
   http://galaxyproject.org/search/mailinglists/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Once-run galaxy archives

2014-07-22 Thread Peter Cock
On Mon, Jul 21, 2014 at 6:51 PM, Eric Rasche  wrote:
> Currently the checkout options consist of hg clones, and archives that
> mercurial produces.
>
> Having pulled or cloned galaxy a few times lately, I'm wondering if anyone
> would have a use for a once-run galaxy instance in an archive? I.e., I'd
> clone, run once to grab eggs and do the db migration, then re-tar result and
> store online. Might cut down on build/test times for those who are using
> travis or other CIs. Thoughts/opinions?

Hi Eric,

Given how close you can get now for minimal effort,
this seems unnecessary.

http://blastedbio.blogspot.co.uk/2013/09/using-travis-ci-for-testing-galaxy-tools.html

My TravisCI setup fetches the latest Galaxy as
a tar ball (from a GitHub mirror as it was faster than
a git clone which was faster than getting the tar ball
from BitBucket, which in turn was faster than using
hg clone), and a pre-migrated SQLite database
(using a bit of Galaxy functionality originally with
$GALAXY_TEST_DB_TEMPLATE added to speed
up running the functional tests).

Note this does not cache the eggs and all the other
side effects of the first run like creating config files,
so there is room for some speed up.
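
Boiled down, the fetch step is something like this (a sketch based on the blog
post above; the mirror URL is an assumption -- use whichever mirror is
currently fastest -- and the database snapshot is the one John publishes):

# Grab galaxy-central as a tarball from a git mirror instead of hg cloning it.
wget -q https://github.com/galaxyproject/galaxy-central/archive/master.tar.gz
tar -zxf master.tar.gz
# Start from a pre-migrated SQLite database so the first run skips the
# schema migrations (SQLite backend only).
export GALAXY_TEST_DB_TEMPLATE=https://github.com/jmchilton/galaxy-downloads/raw/master/db_gx_rev_0117.sqlite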

Regards,

Peter
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] How to configure galaxy with a cluster

2014-07-22 Thread Björn Grüning

Hi Ben,

if the job is waiting in the queue it's unlikely (though not impossible) 
that it is Galaxy's fault. Can you recheck your Torque setup and how many 
cores and how much memory your job has requested?


Ciao,
Bjoern

On 22.07.2014 10:09, 王渭巍 wrote:

Hi, Bjoern,
 I've tried the latest Galaxy version with Torque 4.1.7, and it seems all 
right. But Torque versions > 4.2 won't work.
 And I tried to submit “fastqc readqc” jobs via Torque (runner pbs), 
but the job is always in the queue waiting. I submitted “fastqc readqc” locally 
(runner local), and the job finished successfully. So the question is, it 
seems not all the tools can be submitted via Torque (or another resource 
manager), right?



王渭巍

From: Björn Grüning
Date: 2014-07-21 01:23
To: 王渭巍; Björn Grüning; galaxy-dev
Subject: Re: [galaxy-dev] How to configure galaxy with a cluster
Hi Ben,

sorry but we do not run a Torque setup.

Do you have any concrete questions or error messages?

Cheers,
Bjoern

On 17.07.2014 04:10, 王渭巍 wrote:

Hi, Bjoern
  Would you share your procedure to make some tools run on a 
cluster.
  I have tried 
https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster using Torque, 
but got errors.
  I think maybe it's job_conf.xml. Would you share yours?  Thanks a lot

Ben


From: Björn Grüning
Date: 2014-07-16 16:34
To: 王渭巍; Thomas Bellembois; galaxy-dev
Subject: Re: [galaxy-dev] How to configure galaxy with a cluster
Hi Ben,

that is not possible at the moment. The idea is to keep the
user-interface as easy as possible for the user. You, as admin, can
decide which resource a specific tool with a specific input will use.
You will never see any options like that in a tool, but you can write a
tool by yourself if you like, or "enhance" the megablast tool.

Cheers,
Bjoern


On 16.07.2014 09:43, 王渭巍 wrote:

Thanks a lot, Thomas! It really helps, I added tools section followed your 
suggestion...

here is my job_conf.xml ( I am using Torque,  I have 3 servers. One for galaxy 
server, two for cluster computing.  )










walltime=72:00:00,nodes=1:ppn=8
128







and still no cluster options in "megablast" item.  How can I see cluster 
options in the page, for example, the page will let me choose to use local server or a 
cluster.

Ben



From: Thomas Bellembois
Date: 2014-07-15 17:41
To: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] How to configure galaxy with a cluster
Hello Ben,

you can configure your Galaxy instance to use your cluster in the
job_conf.xml file:

https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster

You can set up your instance to use your cluster by default for all jobs
or only for specific jobs.

Here is a part of my job_conf.xml for example:

   

   

   
   
   

   
   
   
   

   
   
   
 -r yes -b n -cwd -S /bin/bash
-V -pe galaxy 1
   
   
 -r yes -b n -cwd -S /bin/bash
-V -pe galaxy 12
   

   

   
   
   
   
   
   
   
   
   
   
   
   


Moreover, your Galaxy user and Galaxy server must be allowed to submit
jobs to your scheduler.
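
(The tags in the config above were eaten by the list archive. A rough
reconstruction of this kind of SGE setup is below; the runner classes,
nativeSpecification strings and the blastn tool mapping come from the
fragments that survive elsewhere in the thread, while the ids are
illustrative. For Torque, a pbs runner with a Resource_List param such as
walltime=72:00:00,nodes=1:ppn=8 plays the same role as the drmaa destinations.)

<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
        <plugin id="sge" type="runner" load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>
    <handlers>
        <handler id="main"/>
    </handlers>
    <destinations default="sge_default">
        <destination id="local" runner="local"/>
        <destination id="sge_default" runner="sge">
            <param id="nativeSpecification">-r yes -b n -cwd -S /bin/bash -V -pe galaxy 1</param>
        </destination>
        <destination id="sge_big" runner="sge">
            <param id="nativeSpecification">-r yes -b n -cwd -S /bin/bash -V -pe galaxy 12</param>
        </destination>
    </destinations>
    <tools>
        <!-- route heavy tools, e.g. the BLAST+ wrappers, to the bigger destination -->
        <tool id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_blastn_wrapper/0.1.00" destination="sge_big"/>
    </tools>
</job_conf>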

Hope it  helps,

Thomas




___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] XML/json code to parse/get/create wrapper files to/from galaxy

2014-07-22 Thread Rémy Dernat
All feedback is welcome.

Regards


2014-07-21 19:15 GMT+02:00 Björn Grüning :

> Very cool, I will give it a try :)
>
> On 21.07.2014 16:27, Rémy Dernat wrote:
>
>> Hi,
>>
>> I created a little repository on github to parse or create wrapper files.
>> I
>> have not implemented everything, but it is a good base, i.e. if you want
>> to
>> create a Galaxy wrapper from a standard help output!
>>
>> Just clone it and have fun :
>>
>> git clone https://github.com/remyd1/XMLparser-wrapper.git
>> cd XMLparser-wrapper
>> python ParserConverterXML.py --help
>>
>>
>> python ParserConverterXML.py h2gw -c "ls --help" -o ../ls.gw.xml
>>
>> Any help is welcome on this little project. Examples: JSON for
>> workflows...
>>
>> Best regards,
>>
>>
>> Remy
>>
>>
>>
>> ___
>> Please keep all replies on the list by using "reply all"
>> in your mail client.  To manage your subscriptions to this
>> and other Galaxy lists, please use the interface at:
>>http://lists.bx.psu.edu/
>>
>> To search Galaxy mailing lists use the unified search at:
>>http://galaxyproject.org/search/mailinglists/
>>
>>
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/

Re: [galaxy-dev] How to configure galaxy with a cluster

2014-07-22 Thread 王渭巍
Hi, Bjoern, 
I've tried the latest Galaxy version with Torque 4.1.7, and it seems 
all right. But Torque versions > 4.2 won't work. 
And I tried to submit “fastqc readqc” jobs via Torque (runner pbs), but 
the job is always in the queue waiting. I submitted “fastqc readqc” locally 
(runner local), and the job finished successfully. So the question is, it 
seems not all the tools can be submitted via Torque (or another resource 
manager), right? 



王渭巍
 
From: Björn Grüning
Date: 2014-07-21 01:23
To: 王渭巍; Björn Grüning; galaxy-dev
Subject: Re: [galaxy-dev] How to configure galaxy with a cluster
Hi Ben,
 
sorry but we do not run a Torque setup.
 
Do you have any concrete questions or error messages?
 
Cheers,
Bjoern
 
On 17.07.2014 04:10, 王渭巍 wrote:
> Hi, Bjoern
>  Would you share your  procedure to make some tools to run on a 
> cluster.
>  I have tried 
> https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster using Torque, 
> but got errors.
>  I think maybe it's job_conf.xml. Would you share yours?  Thanks a lot
>
> Ben
>
>
> From: Björn Grüning
> Date: 2014-07-16 16:34
> To: 王渭巍; Thomas Bellembois; galaxy-dev
> Subject: Re: [galaxy-dev] How to configure galaxy with a cluster
> Hi Ben,
>
> that is not possible at the moment. The idea is to keep the
> user-interface as easy as possible for the user. You, as admin, can
> decide which resource a specific tool with a specific input will use.
> You will never see any options like that in a tool, but you can write a
> tool by yourself if you like, or "enhance" the megablast tool.
>
> Cheers,
> Bjoern
>
>
> On 16.07.2014 09:43, 王渭巍 wrote:
>> Thanks a lot, Thomas! It really helps, I added tools section followed your 
>> suggestion...
>>
>> here is my job_conf.xml ( I am using Torque,  I have 3 servers. One for 
>> galaxy server, two for cluster computing.  )
>>
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> walltime=72:00:00,nodes=1:ppn=8
>> 128
>> 
>> 
>> 
>> 
>> 
>> 
>>
>> and still no cluster options in "megablast" item.  How can I see cluster 
>> options in the page, for example, the page will let me choose to use local 
>> server or a cluster.
>>
>> Ben
>>
>>
>>
>> From: Thomas Bellembois
>> Date: 2014-07-15 17:41
>> To: galaxy-dev@lists.bx.psu.edu
>> Subject: Re: [galaxy-dev] How to configure galaxy with a cluster
>> Hello Ben,
>>
>> you can configure your Galaxy instance to use your cluster in the
>> job_conf.xml file:
>>
>> https://wiki.galaxyproject.org/Admin/Config/Performance/Cluster
>>
>> You can set up your instance to use your cluster by default for all jobs
>> or only for specific jobs.
>>
>> Here is a part of my job_conf.xml for example:
>>
>>   
>> 
>>   > load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
>>
>>   
>>   > load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
>>   
>>
>>   
>>   
>>   
>>   
>>
>>   
>>   
>>   
>> -r yes -b n -cwd -S /bin/bash
>> -V -pe galaxy 1
>>   
>>   
>> -r yes -b n -cwd -S /bin/bash
>> -V -pe galaxy 12
>>   
>>
>>   
>>
>>   
>>   
>>   > id="toolshed.g2.bx.psu.edu/repos/bhaas/trinityrnaseq/trinityrnaseq/0.0.1" 
>> destination="sge_big"/>
>>   
>>   
>>   > id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_blastp_wrapper/0.1.00"
>> destination="sge_big"/>
>>   > id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_tblastn_wrapper/0.1.00"
>> destination="sge_big"/>
>>   > id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_blastx_wrapper/0.1.00"
>> destination="sge_big"/>
>>   > id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_blastn_wrapper/0.1.00"
>> destination="sge_big"/>
>>   > id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_tblastx_wrapper/0.1.00"
>> destination="sge_big"/>
>>   > id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_rpstblastn_wrapper/0.1.00"
>> destination="sge_big"/>
>>   > id="toolshed.g2.bx.psu.edu/repos/devteam/ncbi_blast_plus/ncbi_rpsblast_wrapper/0.1.00"
>> destination="sge_big"/>
>> 
>>
>> Moreover you Galaxy user and Galaxy server must be allowed to submit
>> jobs to your scheduler.
>>
>> Hope it  helps,
>>
>> Thomas
>>
>>
>>
>>
>> ___
>> Please keep all replies on the list by using "reply all"
>> in your mail client.  To manage your subscriptions to this
>> and other Galaxy lists, please use the interface at:
>> http://lists.bx.psu.edu/
>>
>> To search Galaxy mailing lists use the unified search at:
>> http://galaxyproject.org/search/mailinglists/
>>
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http: