Re: [galaxy-dev] Track Job Runtime

2013-05-14 Thread Geert Vandeweyer

On 05/08/2013 05:38 PM, Geert Vandeweyer wrote:
self.sa_session.execute('UPDATE job SET runtime = :runtime WHERE id = 
:id',{'runtime':runtime,'id':galaxy_job_id}) 


Does anybody have a solution for converting this statement to proper 
SQLAlchemy syntax, for use in the check_watched_items function in pbs.py?
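
One ORM-style way to express this, reusing the names from the snippet above, might be the following (a sketch only, assuming the runtime column added to the Job model and the usual scoped sa_session available in the runner; not the committed Galaxy implementation):

    from galaxy import model

    job = self.sa_session.query(model.Job).get(galaxy_job_id)
    job.runtime = runtime          # the column added to hold the job's walltime
    self.sa_session.add(job)
    self.sa_session.flush()        # persist the change (Galaxy's scoped session flushes explicitly)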


Regarding Taylor's suggestion: a separate table is also an option, but it 
would take more queries and joins to estimate walltime at startup (join that 
table with the job table on job id to get the job type, request two rows per 
finished job id, subtract the start timestamp from the end timestamp, and 
average). An extra column in the job table only needs one query on one table 
(select runtime from jobs where type = 'x' and state = 'ok').
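
For the startup estimate itself, that single-table query could be written along these lines (again only a sketch; it assumes the runtime column stores elapsed seconds and that the job "type" above corresponds to Job.tool_id):

    from sqlalchemy import func
    from galaxy import model

    # average runtime of successfully finished jobs for one tool
    avg_runtime = (self.sa_session.query(func.avg(model.Job.runtime))
                   .filter(model.Job.tool_id == tool_id)           # tool_id stands in for the job "type"
                   .filter(model.Job.state == model.Job.states.OK)
                   .scalar())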


Best,

Geert

--

Geert Vandeweyer, Ph.D.
Department of Medical Genetics
University of Antwerp
Prins Boudewijnlaan 43
2650 Edegem
Belgium
Tel: +32 (0)3 275 97 56
E-mail: geert.vandewe...@ua.ac.be
http://ua.ac.be/cognitivegenetics
http://www.linkedin.com/pub/geert-vandeweyer/26/457/726

___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] [galaxy-iuc] Suggestion for improved feedback on failing tests

2013-05-14 Thread Peter Cock
On Tue, May 14, 2013 at 3:32 AM, Ira Cooke iraco...@gmail.com wrote:
 Hi All,

 My guess is that one of the most common ways in which tools will fail tests
 on the build-bot is when a dependency fails to install properly.  I think
 this is what is happening to my tool xtandem

 http://testtoolshed.g2.bx.psu.edu/view/iracooke/xtandem

You may be suffering from the same missing test results problem
as me; see the long thread "Missing test results on (Test) Tool Shed":
http://dev.list.galaxyproject.org/Missing-test-results-on-Test-Tool-Shed-td4659531.html

Peter


Re: [galaxy-dev] [galaxy-iuc] Suggestion for improved feedback on failing tests

2013-05-14 Thread Ira Cooke
Hi Peter, 

Yes that's most likely part of it ... thanks for the link to that thread.  I 
think I am suffering from the same issue. 

 ... but I'm also assuming that when my test results come back I'll still need to 
figure out where my repository dependencies failed.  The last test result I saw 
looked like a failure compiling ruby ... but it's hard to know what's missing 
to fix it.

Cheers
Ira

On 14/05/2013, at 6:54 PM, Peter Cock p.j.a.c...@googlemail.com wrote:

 On Tue, May 14, 2013 at 3:32 AM, Ira Cooke iraco...@gmail.com wrote:
 Hi All,
 
 My guess is that one of the most common ways in which tools will fail tests
 on the build-bot is when a dependency fails to install properly.  I think
 this is what is happening to my tool xtandem
 
 http://testtoolshed.g2.bx.psu.edu/view/iracooke/xtandem
 
 You may be suffering from the same missing test results problem
 as me; see the long thread "Missing test results on (Test) Tool Shed":
 http://dev.list.galaxyproject.org/Missing-test-results-on-Test-Tool-Shed-td4659531.html
 
 Peter




Re: [galaxy-dev] [galaxy-iuc] Suggestion for improved feedback on failing tests

2013-05-14 Thread Peter Cock
On Tue, May 14, 2013 at 10:39 AM, Ira Cooke iraco...@gmail.com wrote:
 Hi Peter,

 Yes that's most likely part of it ... thanks for the link to that thread.
 I think I am suffering from the same issue.

  ... but I'm also assuming that when my test results come back I'll still
 need to figure out where my repository dependencies failed.  The last
 test result I saw looked like a failure compiling ruby ... but it's
 hard to know what's missing to fix it.

 Cheers
 Ira

Me too - I've been struggling with tests failing due to partial installs
(where only the beginning of the tool_dependencies.xml is processed
but no error from the installation process is shown on the Tool Shed):
http://dev.list.galaxyproject.org/Handling-of-tool-dependencies-xml-errors-in-Tool-Shed-testing-tt4659720.html

Peter


Re: [galaxy-dev] Suggestion for improved feedback on failing tests

2013-05-14 Thread Greg Von Kuster
We're fairly close to having a new container in the "Tool test results" 
container.  The new container will be something like "Installation errors", and 
for each tool it will list the tool dependencies that have installation errors. 
Any installation errors will result in the tool not being tested.  This new 
feature should hopefully be available today.

Greg Von Kuster


On May 13, 2013, at 10:32 PM, Ira Cooke wrote:

 Hi All, 
 
 My guess is that one of the most common ways in which tools will fail tests 
 on the build-bot is when a dependency fails to install properly.  I think 
 this is what is happening to my tool xtandem 
 
 http://testtoolshed.g2.bx.psu.edu/view/iracooke/xtandem
 
 One potential improvement that I think could make it easier to debug this 
 situation would be to show test status for repositories even when they 
 contain no tools (the test would simply attempt an install and show the 
 installation log under the test details).   This would be particularly useful 
 for repositories that exist purely to install a dependency eg
 
 http://testtoolshed.g2.bx.psu.edu/view/iracooke/galaxy_protk
 
 Naturally a tool with proper functional tests would fail if its dependency 
 installations fail .. but it would be a huge help in narrowing down the issue 
 to be able to see where those failures occurred.  Another point is that this 
 is pretty much the only way of getting feedback on why a tool might fail 
 installation on the build-bot without having a perfect clone of the build-bot 
 locally.
 
 Cheers
 Ira
 


Re: [galaxy-dev] lists.bx.psu.edu is down

2013-05-14 Thread Nate Coraor
On May 14, 2013, at 6:28 AM, Peter Cock wrote:

 Dear all,
 
 Something seems to have happened to the lists.bx.psu.edu
 server recently - which is unfortunate as I've made a habit
 of using this for linking to past email threads via the nice
 mailman archive listing which was here:
 
 http://lists.bx.psu.edu/pipermail/galaxy-dev/
 
 Was this deliberate or does someone need to kick a server? ;)

It's been kicked.  Thanks for the heads up.

--nate

 
 Thanks,
 
 Peter



Re: [galaxy-dev] tool_dependencies inside tool_dependencies

2013-05-14 Thread Nate Coraor
Hi John,

A few of us in the lab here at Penn State actually discussed automatic creation 
of virtualenvs for dependency installations a couple weeks ago.  This was in 
the context of Bjoern's request for supporting compile-time dependencies.  I 
think it's a great idea, but there's a limitation that we'd need to account for.

If you're going to have frequently used and expensive to build libraries (e.g. 
numpy, R + rpy) in dependency-only repositories and then have your tool(s) 
depend on those repositories, the activate method won't work.  virtualenvs 
cannot depend on other virtualenvs or be active at the same time as other 
virtualenvs.  We could work around it by setting PYTHONPATH in the 
dependencies' env.sh like we do now.  But then, other than making installation 
a bit easier (e.g. by allowing the use of pip), we have not gained much.

--nate

On May 13, 2013, at 6:49 PM, John Chilton wrote:

 The proliferation of individual python package install definitions has
 continued and it has spread to some MSI managed tools. I worry about
 the tedium I will have to endure in the future if that becomes an
 established best practice :) so I have implemented the python version
 of what I had described in this thread:
 
 As patch:
 https://github.com/jmchilton/galaxy-central/commit/161d3b288016077a99fb7196b6e08fe7d690f34b.patch
 Pretty version:
 https://github.com/jmchilton/galaxy-central/commit/161d3b288016077a99fb7196b6e08fe7d690f34b
 
 I understand that there are going to be differing opinions as to
 whether this is the best way forward but I thought I would give my
 position a better chance of succeeding by providing an implementation.
 
 Thanks for your consideration,
 -John
 
 
 On Wed, Apr 17, 2013 at 3:56 PM, Peter Cock p.j.a.c...@googlemail.com wrote:
 On Tue, Apr 16, 2013 at 2:46 PM, John Chilton chil...@msi.umn.edu wrote:
 Stepping back a little, is this the right way to address Python
 dependencies?
 
 Looks like I missed this thread, hence:
 http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-April/014169.html
 
 I was a big advocate for inter-repository dependencies,
 but I think taking it to the level of individual python packages might
 be going too far - my thought was they were needed for big 100Mb
 programs and stuff like that.
 
 It should work but it is a lot of boilerplate for something which
 should be more automated.
 
 At the Java jar/Python library/Ruby gem
 level I think using some of the platform-specific packaging stuff to
 create isolated environments for each program might be a better way
 to go.
 
 I agree, the best way forward isn't obvious here, and it may make
 sense to have tailored solutions for Python, Perl, Java, R, Ruby,
 etc packages rather than the current Tool Shed package solution.
 
 I'd like to be able to just continue to write this kind of thing in my
 tool XML files and have it actually taken care of (rather than ignored):
 
 <requirements>
     <requirement type="python-module">numpy</requirement>
     <requirement type="python-module">Bio</requirement>
 </requirements>
 
 Adding a version key would be sensible, handling min/max etc
 as per Python packaging norms.
 
 Peter




[galaxy-dev] Cloudman / galaxyIndicies

2013-05-14 Thread Rob Leclerc
We'd like to trim the galaxyIndicies/genome directory down to human and
blastdb. As we may have several CM instances, we would like to share this
filesystem across all instances.

I would like to simply copy these data to a new volume and mount this
filesystem on all our CM instances. Do I just need to rename the specified
filesystem in pd.yaml to ourGalaxyIndicies and then mount our CMs to
/mnt/ourGalaxyIndicies/, or are there other locations I need to modify as
well? Are the size and snap_id entries required?

Cheers!

Rob

Rob Leclerc, PhD
http://www.linkedin.com/in/robleclerc https://twitter.com/#!/robleclerc
P: (US) +1-(917)-873-3037
P: (Shanghai) +86-1-(861)-612-5469
Personal Email: rob.lecl...@aya.yale.edu

Re: [galaxy-dev] tool shed repository update bug

2013-05-14 Thread Greg Von Kuster
Hello Björn,

I've analyzed this and discovered some very important points that I want to make 
sure everyone understands, so I hope my information below is clear.  Please 
don't hesitate to ask any questions for clarification.  The items fall into two 
categories.

First item:

The first item for discussion is a known weakness of the tool shed upload 
process (or, to be more correct, the process of adding a new changeset revision 
to a repository).  When you commit a changeset to a repository that has 
previous revisions, the tool shed repository metadata process is executed and 
the contents of the new changeset are analyzed to determine if a new metadata 
record should be created for the changeset or if the previous metadata record 
should be moved up in the repository changelog chain (this is a very complex 
analysis).  

Until about a year ago, I reset all metadata on the entire repository 
changelog when a new changeset was committed.  This process worked very well, 
and metadata was cleanly set for the entire changelog.  However, it began to 
take a significant amount of time as repositories became larger and the 
changelogs became longer (a good example is 
http://testtoolshed.g2.bx.psu.edu/view/mzytnicki/s_mart).

To improve performance when adding new changesets to repositories, the metadata 
analysis process was enhanced to inspect only those changesets that go back to 
the previous revision with associated metadata.  This process works very well 
in most cases.  However, there are unusual cases where the process breaks, and 
your confab repository fell into this category.  The issue arises when the 
contents of a specific changeset do not properly move the previous metadata 
revision forward in the changelog, but instead create a new, additional 
metadata revision.  

When I initially inspected it, your confab repository had metadata associated 
with the following revisions:

9:dd3ee8e742dc
8:113e876c2ec6
7:9e38b8bd4cdb
6:7593411dcd5a
5:aac0c82ac354
4:6c8f72ee4a51
3:09acaeb233d1
2:e7bb18ef7f54
1:49274c60f392
0:ea7816847e5e

I inspected the main contents (tools and dependencies) of these revisions and 
noticed that some did not seem to have any differences (contents of the 
containers had the same labels between revisions - I did not inspect the 
changelog itself though).  Because of this I decided to reset all metadata on 
the repository (available in the Repository Actions menu), and now metadata is 
associated with only the following revisions:

9:dd3ee8e742dc
6:7593411dcd5a
5:aac0c82ac354
3:09acaeb233d1
2:e7bb18ef7f54
1:49274c60f392
0:ea7816847e5e

This is a very complex issue to handle, as there are many things to consider.  
Obviously automatically inspecting the entire changelog when a new changeset is 
added will not work.  I have added the ability for a repository owner to reset 
all metadata on the repository, but there is nothing to alert them to initiate 
this process (and I'm not sure how / when to do so).  Also, resetting all 
metadata on a repository eliminates the tool test results associated with each 
of the original metadata revisions.

Second item:

It seems there is some confusion about the changeset revision that is defined 
in dependency definitions.  This is a fairly complex subject, and it seems I 
may have muddied the waters a bit yesterday.  If so, I apologize, but I think 
Peter's last response to my example clarifies the issue.

Looking at the changelog of your confab repository, I see the following.  You 
have unnecessarily changed the changeset_revision setting in your tool 
dependency definitions, and this has resulted in a new additional metadata 
record being associated with changeset 9:dd3ee8e742dc of your confab 
repository.  This changeset revision setting should not have been changed - see 
my discussion below.

Repository 'confab'
===================
Changeset 9e38b8bd4cdb
modified: tool_dependencies.xml

tool_dependencies.xml
--- a/tool_dependencies.xml    Tue May 14 04:12:06 2013 -0400
+++ b/tool_dependencies.xml    Tue May 14 04:17:08 2013 -0400
@@ -1,13 +1,13 @@
 <tool_dependency>
     <package name="eigen" version="2.0.17">
-        <repository toolshed="http://testtoolshed.g2.bx.psu.edu/" name="package_eigen_2_0" owner="bgruening" changeset_revision="294a30630e0b" prior_installation_required="True" />
+        <repository toolshed="http://testtoolshed.g2.bx.psu.edu/" name="package_eigen_2_0" owner="bgruening" changeset_revision="09eb05087cd0" prior_installation_required="True" />
     </package>
     <package name="confab" version="1.0.1">
         <install version="1.0">
             <actions>
                 <!-- populate the environment variables from the dependend repos -->
                 <action type="set_environment_for_install">
-                    <repository toolshed="http://testtoolshed.g2.bx.psu.edu/" name="package_eigen_2_0" owner="bgruening" changeset_revision="294a30630e0b"
+                    <repository toolshed="http://testtoolshed.g2.bx.psu.edu/" name="package_eigen_2_0" owner="bgruening" changeset_revision="09eb05087cd0"

Re: [galaxy-dev] Workflow annotations

2013-05-14 Thread Jeremy Goecks
 On the other hand, if a workflow is imported from a file on disk or from a
 URL, tool level annotations are available while workflow level annotations
 are not.

This has been fixed in this changeset: 
https://bitbucket.org/galaxy/galaxy-central/commits/8882e45504a3/

Thanks for reporting this issue,
J.

Re: [galaxy-dev] tool_dependencies inside tool_dependencies

2013-05-14 Thread Nate Coraor
On May 14, 2013, at 10:58 AM, John Chilton wrote:

 Hey Nate,
 
 On Tue, May 14, 2013 at 8:40 AM, Nate Coraor n...@bx.psu.edu wrote:
 Hi John,
 
 A few of us in the lab here at Penn State actually discussed automatic 
 creation of virtualenvs for dependency installations a couple weeks ago.  
 This was in the context of Bjoern's request for supporting compile-time 
 dependencies.  I think it's a great idea, but there's a limitation that we'd 
 need to account for.
 
 If you're going to have frequently used and expensive to build libraries 
 (e.g. numpy, R + rpy) in dependency-only repositories and then have your 
 tool(s) depend on those repositories, the activate method won't work.  
 virtualenvs cannot depend on other virtualenvs or be active at the same time 
 as other virtualenvs.  We could work around it by setting PYTHONPATH in the 
 dependencies' env.sh like we do now.  But then, other than making 
 installation a bit easier (e.g. by allowing the use of pip), we have not 
 gained much.
 
 I don't know what to make of your response. It seems like a no, but
 the word no doesn't appear anywhere.

Sorry about being wishy-washy.  Unless anyone has any objections or can foresee 
other problems, I would say yes to this.  But I believe it should not break the 
concept of common-dependency-only repositories.

I'm pretty sure that as long as the process of creating a venv also adds the 
venv's site-packages to PYTHONPATH in that dependency's env.sh, the problem 
should be automatically dealt with.
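
The workaround described here could, roughly, amount to something like the following at venv-creation time (a sketch only; install_dir, the "venv" subdirectory name, and the python2.7 path are assumptions about the layout, not the actual tool shed implementation):

    import os

    install_dir = "/path/to/dependency/install"   # hypothetical dependency install directory
    venv_dir = os.path.join(install_dir, "venv")
    site_packages = os.path.join(venv_dir, "lib", "python2.7", "site-packages")

    # Instead of relying on `source venv/bin/activate` (which cannot be nested
    # across several dependency repositories), export the venv's directories
    # directly from the dependency's env.sh, as is already done for other
    # environment settings.
    with open(os.path.join(install_dir, "env.sh"), "a") as env_sh:
        env_sh.write('PYTHONPATH="%s:$PYTHONPATH"; export PYTHONPATH\n' % site_packages)
        env_sh.write('PATH="%s:$PATH"; export PATH\n' % os.path.join(venv_dir, "bin"))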

 I don't know the particulars of rpy, but numpy installs fine via this
 method and I see no problem with each application having its own copy
 of numpy. I think relying on OS managed python packages for instance
 is something of a bad practice, when developing and distributing
 software I use virtualenvs for everything. I think that stand-alone
 python defined packages in the tool shed are directly analogous to OS
 managed packages.

Completely agree that we want to avoid OS-managed python packages.  I had, in 
the past, considered that for something like numpy, we ought to make it easy 
for an administrator to allow their own version of numpy to be used, since 
numpy can be linked against a number of optimized libraries for significant 
performance gains, and this generally won't happen for versions installed from 
the toolshed unless the system already has stuff like atlas-dev installed.  But 
I think we still allow admins that possibility with reasonable ease since 
dependency management in Galaxy is not a requirement.

What we do want to avoid is the situation where someone clones a new copy of 
Galaxy, wants to install 10 different tools that all depend on numpy, and has 
to wait an hour while 10 versions of numpy compile.  Add that in with other 
tools that will have a similar process (installing R + packages + rpy) plus the 
hope that down the line you'll be able to automatically maintain separate 
builds for remote resources that are not the same (i.e. multiple clusters with 
differing operating systems) and this hopefully highlights why I think reducing 
duplication where possible will be important.

 I also disagree we have not gained much. Setting up these repositories
 is an onerous, brittle process. This patch provides some high-level
 functionality for creating virtualenvs, which negates the need for
 creating separate repositories per package.

This is a good point.  I probably also sold short the benefit of being able to 
install with pip, since this does indeed remove a similarly brittle and tedious 
step of downloading and installing modules.

--nate

 
 -John
 
 
 --nate
 
 On May 13, 2013, at 6:49 PM, John Chilton wrote:
 
 The proliferation of individual python package install definitions has
 continued and it has spread to some MSI managed tools. I worry about
 the tedium I will have to endure in the future if that becomes an
 established best practice :) so I have implemented the python version
 of what I had described in this thread:
 
 As patch:
 https://github.com/jmchilton/galaxy-central/commit/161d3b288016077a99fb7196b6e08fe7d690f34b.patch
 Pretty version:
 https://github.com/jmchilton/galaxy-central/commit/161d3b288016077a99fb7196b6e08fe7d690f34b
 
 I understand that there are going to be differing opinions as to
 whether this is the best way forward but I thought I would give my
 position a better chance of succeeding by providing an implementation.
 
 Thanks for your consideration,
 -John
 
 
 On Wed, Apr 17, 2013 at 3:56 PM, Peter Cock p.j.a.c...@googlemail.com 
 wrote:
 On Tue, Apr 16, 2013 at 2:46 PM, John Chilton chil...@msi.umn.edu wrote:
 Stepping back a little, is this the right way to address Python
 dependencies?
 
 Looks like I missed this thread, hence:
 http://lists.bx.psu.edu/pipermail/galaxy-dev/2013-April/014169.html
 
 I was a big advocate for inter-repository dependencies,
 but I think taking it to the level of individual python packages might
 be going too far - my 

Re: [galaxy-dev] Next GalaxyAdmins Meetup: May 15; Galaxy @ Pathogen Portal

2013-05-14 Thread Dave Clements
Hello all,

Just a reminder that this is happening tomorrow, Wednesday, May 15.  If you
haven't taken a look at the Pathogen Portal Galaxy Server (*RNA-Rocket*,
http://rnaseq.pathogenportal.org/) yet, please do.

If you will be on the call, please connect a few minutes early, as it will
take that long to get set up.

Thanks,

Dave C.




On Wed, May 8, 2013 at 11:23 AM, Dave Clements
cleme...@galaxyproject.org wrote:

 Hello all,

 The next meeting (http://wiki.galaxyproject.org/Community/GalaxyAdmins/Meetups/2013_05_15)
 of the GalaxyAdmins Group (http://wiki.galaxyproject.org/Community/GalaxyAdmins)
 will be held on May 15, 2013, at 10 AM Central US time.

 Andrew Warren of the Cyberinfrastructure Division
 (http://www.vbi.vt.edu/faculty/group_overview/Cyberinfrastructure_Division)
 of the Virginia Bioinformatics Institute (https://www.vbi.vt.edu/) at
 Virginia Tech will talk about their Galaxy deployment
 (http://rnaseq.pathogenportal.org/) at Pathogen Portal
 (http://pathogenportal.org/), a highly customized Galaxy installation,
 and also about the group's objectives and future plans.

 Dannon Baker (http://wiki.galaxyproject.org/DannonBaker) will bring the
 group up to speed on what's happening in the Galaxy project.

 Date

 May 15, 2013

 Time

 10 am Central US Time (-5 GMT)

 Presentations

 *Galaxy at Pathogen Portal* (http://rnaseq.pathogenportal.org/,
 http://pathogenportal.org/)
 Andrew Warren, Virginia Bioinformatics Institute (https://www.vbi.vt.edu/),
 Virginia Tech

 *Galaxy Project Update*
 Dannon Baker (http://wiki.galaxyproject.org/DannonBaker)

 Links

 Meetup link: https://globalcampus.uiowa.edu/join_meeting.html?meetingId=1262346908659
 Add to calendar: https://globalcampus.uiowa.edu/build_calendar.event?meetingId=1262346908659

 We use the Blackboard Collaborate Web Conferencing system
 (http://wiki.galaxyproject.org/Community/GalaxyAdmins/Meetups/WebinarTech)
 for the meetup. Downloading the required applets in advance and using a
 headphone with microphone to prevent audio feedback during the call is
 recommended.

 GalaxyAdmins (http://wiki.galaxyproject.org/Community/GalaxyAdmins) is a
 discussion group for Galaxy community members who are responsible for large
 Galaxy installations.

 Thanks,
 Dave Clements

 --
 http://galaxyproject.org/GCC2013
 http://galaxyproject.org/
 http://getgalaxy.org/
 http://usegalaxy.org/
 http://wiki.galaxyproject.org/




-- 
http://galaxyproject.org/GCC2013
http://galaxyproject.org/
http://getgalaxy.org/
http://usegalaxy.org/
http://wiki.galaxyproject.org/

Re: [galaxy-dev] tool_dependencies inside tool_dependencies

2013-05-14 Thread Nate Coraor
Greg created the following card, and I'm working on a few changes to your 
commit:

https://trello.com/card/toolshed-consider-enhancing-tool-dependency-definition-framework-per-john-chilton-s-pull-request/506338ce32ae458f6d15e4b3/848

Thanks,
--nate

On May 14, 2013, at 1:45 PM, Nate Coraor wrote:

 On May 14, 2013, at 10:58 AM, John Chilton wrote:
 
 Hey Nate,
 
 On Tue, May 14, 2013 at 8:40 AM, Nate Coraor n...@bx.psu.edu wrote:
 Hi John,
 
 A few of us in the lab here at Penn State actually discussed automatic 
 creation of virtualenvs for dependency installations a couple weeks ago.  
 This was in the context of Bjoern's request for supporting compile-time 
 dependencies.  I think it's a great idea, but there's a limitation that 
 we'd need to account for.
 
 If you're going to have frequently used and expensive to build libraries 
 (e.g. numpy, R + rpy) in dependency-only repositories and then have your 
 tool(s) depend on those repositories, the activate method won't work.  
 virtualenvs cannot depend on other virtualenvs or be active at the same 
 time as other virtualenvs.  We could work around it by setting PYTHONPATH 
 in the dependencies' env.sh like we do now.  But then, other than making 
 installation a bit easier (e.g. by allowing the use of pip), we have not 
 gained much.
 
 I don't know what to make of your response. It seems like a no, but
 the word no doesn't appear anywhere.
 
 Sorry about being wishy-washy.  Unless anyone has any objections or can 
 foresee other problems, I would say yes to this.  But I believe it should not 
 break the concept of common-dependency-only repositories.
 
 I'm pretty sure that as long as the process of creating a venv also adds the 
 venv's site-packages to PYTHONPATH in that dependency's env.sh, the problem 
 should be automatically dealt with.
 
 I don't know the particulars of rpy, but numpy installs fine via this
 method and I see no problem with each application having its own copy
 of numpy. I think relying on OS managed python packages for instance
 is something of a bad practice, when developing and distributing
 software I use virtualenvs for everything. I think that stand-alone
 python defined packages in the tool shed are directly analogous to OS
 managed packages.
 
 Completely agree that we want to avoid OS-managed python packages.  I had, in 
 the past, considered that for something like numpy, we ought to make it easy 
 for an administrator to allow their own version of numpy to be used, since 
 numpy can be linked against a number of optimized libraries for significant 
 performance gains, and this generally won't happen for versions installed 
 from the toolshed unless the system already has stuff like atlas-dev 
 installed.  But I think we still allow admins that possibility with 
 reasonable ease since dependency management in Galaxy is not a requirement.
 
 What we do want to avoid is the situation where someone clones a new copy of 
 Galaxy, wants to install 10 different tools that all depend on numpy, and has 
 to wait an hour while 10 versions of numpy compile.  Add that in with other 
 tools that will have a similar process (installing R + packages + rpy) plus 
 the hope that down the line you'll be able to automatically maintain separate 
 builds for remote resources that are not the same (i.e. multiple clusters 
 with differing operating systems) and this hopefully highlights why I think 
 reducing duplication where possible will be important.
 
 I also disagree we have not gained much. Setting up these repositories
 is an onerous, brittle process. This patch provides some high-level
 functionality for creating virtualenvs, which negates the need for
 creating separate repositories per package.
 
 This is a good point.  I probably also sold short the benefit of being able 
 to install with pip, since this does indeed remove a similarly brittle and 
 tedious step of downloading and installing modules.
 
 --nate
 
 
 -John
 
 
 --nate
 
 On May 13, 2013, at 6:49 PM, John Chilton wrote:
 
 The proliferation of individual python package install definitions has
 continued and it has spread to some MSI managed tools. I worry about
 the tedium I will have to endure in the future if that becomes an
 established best practice :) so I have implemented the python version
 of what I had described in this thread:
 
 As patch:
 https://github.com/jmchilton/galaxy-central/commit/161d3b288016077a99fb7196b6e08fe7d690f34b.patch
 Pretty version:
 https://github.com/jmchilton/galaxy-central/commit/161d3b288016077a99fb7196b6e08fe7d690f34b
 
 I understand that there are going to be differing opinions as to
 whether this is the best way forward but I thought I would give my
 position a better chance of succeeding by providing an implementation.
 
 Thanks for your consideration,
 -John
 
 
 On Wed, Apr 17, 2013 at 3:56 PM, Peter Cock p.j.a.c...@googlemail.com 
 wrote:
 On Tue, Apr 16, 2013 at 2:46 PM, John Chilton chil...@msi.umn.edu wrote:
 Stepping 

Re: [galaxy-dev] Workflow annotations

2013-05-14 Thread Sytchev, Ilya
Awesome, thanks!  What about workflows imported from tool shed
repositories?  For example, I have two annotated workflows in this
repository: http://testtoolshed.g2.bx.psu.edu/repos/hackdna/refinery_test.
 When I install them from the repository to my local galaxy-central
instance (9722:6d72b2db32c0) the annotations are still not showing up in
the workflow editor.

Ilya


On 5/14/13 11:55 AM, Jeremy Goecks jeremy.goe...@emory.edu wrote:

On the other hand, if a workflow is imported from a file on disk or from a
URL, tool level annotations are available while workflow level annotations
are not.

This has been fixed in this changeset:
https://bitbucket.org/galaxy/galaxy-central/commits/8882e45504a3/

Thanks for reporting this issue,
J.

