Re: [galaxy-dev] zombie undeletable files in toolshed, and versioning question

2012-09-18 Thread kevyin
Hi Greg,
Thanks for the toolshed links; I'll make sure to go over them.

I think when the upload failed, mercurial didn't get to the point of
tracking the build folder.
From a mercurial point of view there seems to be no issue, pulling/cloning
or even installing is fine because it's not tracked.

But the folder seems to be there when I go to "Browse or delete repository
tip files" via the web interface.

From a user and our point of view, this isn't causing any usability
issues at the moment, though it is weird.

Hope this clarifies things and thanks for the help!
Kevin.

On Wed, Sep 19, 2012 at 4:41 AM, Greg Von Kuster  wrote:

> Hello Kevin,
>
>
> On Sep 14, 2012, at 3:48 AM, kevyin wrote:
>
> Hi,
> I have a tool on the main galaxy toolshed called fastq_groomer_parallel.
> During an update, with a new tar.gz upload, I accidentally included an
> unrelated big folder of stuff under the folder build/
>
> The upload failed (with no helpful error message until I turned on debug
> mode)
> I went and browsed the files via the web interface and the folder is
> there, but when I tried to delete it, it says "No changes to repository."
> Everything is normal, i.e. I can install this from a galaxy instance and the
> build/ folder is not there.
> When I hg pull and push etc. the folder is not there.
>
>
> I'm not sure how you were seeing anything related to a large folder named
> "build".  Looking at you change log, I see the following changeset
> revisions, none of which include this folder.
>
> Description: First upload 0.3.0
> Commit: 0:18a08d476d5e
> *added:*
> README
> fastq_groomer_parallel.py
> fastq_groomer_parallel.xml
>
>
> Description: Deleted selected files
> Commit: 1:2f394cd7db91
> *removed:*
> README
> fastq_groomer_parallel.py
> fastq_groomer_parallel.xml
>
>
> Description: Uploaded
> Commit: 2:cac848910bd8
> *added:*
> README
> fastq_groomer_parallel.py
> fastq_groomer_parallel.xml
>
>
>
> I have another question about versioning. So far, with my experimentation on
> the testtoolshed, the version is only bumped when I upload via an archive,
> not when I push to the repo.
> Is this meant to be the case?
>
>
> This is probably not the case.  A new change set is created every time you
> make a change to the repository, either by uploading something new or by
> deleting one or more files.  Whenever a new change set is produced, it is
> associated with a revision number and string.  The following sections of
> the tool shed wiki may provide more clarification.
>
> http://wiki.g2.bx.psu.edu/Tool%20Shed#The_mercurial_repository_change_log
>
> http://wiki.g2.bx.psu.edu/Tool%20Shed#Repository_revisions:_uploading_a_new_version_of_an_existing_tool
>
> http://wiki.g2.bx.psu.edu/Tool%20Shed#Repository_revisions:_valid_tool_versions
>
>
>
> Also "Get Updates" to a toolshed tool as a galaxy Admin. doesnt seem to
> work, I basically have to install a new instance again, is this normal?
>
>
> It depends - the following section of the tool shed wiki should clarify
> this:
>
>
> http://wiki.g2.bx.psu.edu/Tool%20Shed#Getting_updates_for_tool_shed_repositories_installed_in_a_local_Galaxy_instance
>
> Greg Von Kuster
>
>
>
> Regards,
> Kevin.
>
>
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
>
>   http://lists.bx.psu.edu/

Re: [galaxy-dev] error uploading to galaxy from InterMine

2012-09-18 Thread Fengyuan Hu

Thanks for the quick fix!

On 18/09/12 17:56, Jennifer Jackson wrote:

Hello Fengyuan,

Thanks for reporting the problem. We believe that the issue with 
Object store has been resolved - would you please try this again and 
let us know if there are still problems?


Thanks!

Jen
Galaxy team

On 9/18/12 3:37 AM, Fengyuan Hu wrote:

Hi Dan,

We are trying to export some sequence to galaxy main from FlyMine but
get the error:

*Error executing tool: Unable to create output dataset: object store is
full*

After switching to galaxy test, it works.

The same error occurred in other mines, could anyone in your team take a
look?

Thanks
Fengyuan


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

   http://lists.bx.psu.edu/





___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/


Re: [galaxy-dev] creating a galaxy dataset within a galaxy tool

2012-09-18 Thread Dan Tenenbaum
Hi James,

I realize I didn't do a good job explaining, but your suggestion
sounds promising. How can I make something into a dataset?

Here's what I'm hoping to achieve:
User runs Tool1, uploading a text file and specifying some parameters.
Tool1 uses this to write out a serialized R object. Somehow this
object is persisted in Galaxy (made into a dataset).
Now the user can run Tool2 or Tool3, which each take the dataset
created in the previous step, plus some parameters of their own.

So I guess the only part I am missing is how to turn the output of
Tool1 into a dataset. Is there a way to do this other than downloading
it to my local computer and then uploading it to Galaxy? That seems
awkward.

Thanks,
Dan




On Tue, Sep 18, 2012 at 3:41 PM, James Taylor  wrote:
> Dan, I may not be following, but why not make the serialized R object
> a dataset (of its own datatype). Then the user can just pass it to the
> downstream tools just by specifying one parameter.
>
> -- jt
>
>
> On Tue, Sep 18, 2012 at 3:00 PM, Dan Tenenbaum  wrote:
>> I want the user to upload a text file, then pass that text file, along
>> with several parameters, to a tool (we'll call it Tool1) which I use
>> to create a serialized R object. That object is then used in several
>> other tools. I don't want the user to have to download and then
>> re-upload the serialized object. I want it to just be available after
>> the user runs Tool1.
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] creating a galaxy dataset within a galaxy tool

2012-09-18 Thread James Taylor
Dan, I may not be following, but why not make the serialized R object
a dataset (of its own datatype). Then the user can just pass it to the
downstream tools just by specifying one parameter.
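
As a rough sketch of that idea (not code from this thread; the class name,
file extension, and module placement are all assumptions), a custom datatype
for the serialized R object could look something like this, together with a
matching entry in datatypes_conf.xml:

from galaxy.datatypes.binary import Binary

class RSerializedObject( Binary ):
    """Serialized R object (e.g. written with saveRDS) produced by Tool1."""
    # Tools can then declare inputs/outputs with format="rserialized" once the
    # extension is registered; both names here are placeholders, not an
    # existing Galaxy type.
    file_ext = "rserialized"

Tool1 would declare an output of that format and write the object to the path
Galaxy supplies, and Tool2/Tool3 would declare inputs of the same format, so
nothing has to be downloaded and re-uploaded between tools.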

-- jt


On Tue, Sep 18, 2012 at 3:00 PM, Dan Tenenbaum  wrote:
> I want the user to upload a text file, then pass that text file, along
> with several parameters, to a tool (we'll call it Tool1) which I use
> to create a serialized R object. That object is then used in several
> other tools. I don't want the user to have to download and then
> re-upload the serialized object. I want it to just be available after
> the user runs Tool1.
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] Jobs are slow to start on my galaxy instance. Used to be much faster

2012-09-18 Thread Anthonius deBoer
Hi,

Jobs that I start on my in-house Galaxy instance now take up to 3-4 min to go 
from queued to Running, even though there is nothing much going on on the 
galaxy server. I have been running this instance since June and use a 
relatively new version of Galaxy-central (last update 22-Aug, changeset 
7535:bf6517b2b336).

I have noticed that my jobs table in the galaxy Postgres database contains 
about 60,000 jobs. Could that be the culprit? Does it scan the complete 
database to see if there are any jobs that need to run? Could I purge the jobs 
and related tables to speed up the submissions?

Thanks,
Thon
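
One quick way to check whether the job table is even a plausible factor is to
count rows per state directly in the database. This is only an illustrative
sketch; the connection URL is a placeholder for whatever database_connection
is set to in universe_wsgi.ini:

from sqlalchemy import create_engine

# Placeholder URL -- use the same value as database_connection in universe_wsgi.ini.
engine = create_engine( "postgresql://galaxy@localhost/galaxy" )
for state, n in engine.execute(
        "SELECT state, COUNT(*) AS n FROM job GROUP BY state ORDER BY n DESC" ):
    print( "%-10s %d" % ( state, n ) )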
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Fix for tophat2 output filter on fusions

2012-09-18 Thread Jeremy Goecks
Fixed in -central changeset 05a172868303

Thanks,
J.

On Sep 18, 2012, at 4:57 PM, Jim Johnson wrote:

> The filter on the tophat2 fusions output wasn't being evaluated correctly 
> when settingsType is preSet, since the param dict wouldn't then have an entry 
> for 'fusion_search'.
> 
> $ hg diff tools/ngs_rna/tophat2_wrapper.xml
> diff -r 3f12146d6d81 tools/ngs_rna/tophat2_wrapper.xml
> --- a/tools/ngs_rna/tophat2_wrapper.xml Tue Sep 18 14:10:56 2012 -0400
> +++ b/tools/ngs_rna/tophat2_wrapper.xml Tue Sep 18 15:47:47 2012 -0500
> @@ -297,7 +297,7 @@
> 
> 
> 
> -(params['fusion_search']['do_search'] == 'Yes')
> +(params['settingsType'] == 'full' and params['fusion_search']['do_search'] == 'Yes')
> 
> 
> 
> 


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Tophat2 Issues

2012-09-18 Thread Jeremy Goecks
> The bug manifests itself when running the tool with "use Defaults" selected, 
> which results in the following error:
> cp: cannot stat 
> `/data/galaxy-dev/galaxy-dev/database/job_working_directory/000/281/tophat_out/fusions.out':
>  No such file or directory
> 
> It seems that somehow, the filter on the output dataset is not working 
> properly when the value (params['fusion_search']['do_search']) is not 
> explicitly set in the parameters.

Fixed in -central changeset 05a172868303

> The second issue is more minor. It seems that the Tophat2 wrapper assumes 
> that Tophat2 will use Bowtie2 (and not Bowtie).  However, the actual Tophat2 
> program will only use Bowtie2 if it is found in the path.  Otherwise it 
> defaults to using bowtie which results in an error (Could not find Bowtie 
> index files) since the wrapper points the tool to the bowtie2 index instead. 
> To make things a bit more robust, I suggest adding bowtie2 as a requirement 
> to the tophat2 wrapper.  It also might be a good idea to add versions to 
> those requirements.  I've attached a small patch to do that (though it's 
> fairly trivial).

I added the requirements tags for additional packages required by Tophat2; I've 
left out the version information for now because it's fairly brittle. FYI, due 
to limitations with Galaxy's location/index files, the Tophat2 wrapper will run 
with either Bowtie or Bowtie2 but not both.

Best,
J.
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] Fix for tophat2 output filter on fusions

2012-09-18 Thread Jim Johnson

The filter on the tophat2 fusions output wasn't being evaluated correctly when 
settingsType is preSet, since the param dict wouldn't then have an entry for 
'fusion_search'.

$ hg diff tools/ngs_rna/tophat2_wrapper.xml
diff -r 3f12146d6d81 tools/ngs_rna/tophat2_wrapper.xml
--- a/tools/ngs_rna/tophat2_wrapper.xml Tue Sep 18 14:10:56 2012 -0400
+++ b/tools/ngs_rna/tophat2_wrapper.xml Tue Sep 18 15:47:47 2012 -0500
@@ -297,7 +297,7 @@

 
 
-(params['fusion_search']['do_search'] == 'Yes')
+(params['settingsType'] == 'full' and params['fusion_search']['do_search'] == 'Yes')
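
For anyone curious why the extra guard is needed, here is a small standalone
illustration (not part of the patch; the dictionaries below are made-up
stand-ins for the wrapper's params):

old_filter = "params['fusion_search']['do_search'] == 'Yes'"
new_filter = "params['settingsType'] == 'full' and params['fusion_search']['do_search'] == 'Yes'"

for params in ( { 'settingsType': 'preSet' },
                { 'settingsType': 'full', 'fusion_search': { 'do_search': 'Yes' } } ):
    try:
        print( eval( old_filter ) )
    except KeyError:
        # With preSet settings there is no 'fusion_search' key, so the old
        # expression blows up instead of evaluating to False.
        print( "old filter raises KeyError" )
    # The patched expression short-circuits on settingsType, so it is safe in
    # both cases.
    print( eval( new_filter ) )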
 
 
 

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/


[galaxy-dev] creating a galaxy dataset within a galaxy tool

2012-09-18 Thread Dan Tenenbaum
I'd like to have a tool that does not actually return anything (is
that possible?) but writes a file to the directory where Galaxy stores
uploaded data files, and tells Galaxy about it (perhaps that involves
writing to a database?). My tool is written in R.

Here is my scenario:
I want the user to upload a text file, then pass that text file, along
with several parameters, to a tool (we'll call it Tool1) which I use
to create a serialized R object. That object is then used in several
other tools. I don't want the user to have to download and then
re-upload the serialized object. I want it to just be available after
the user runs Tool1.

I realize that workflows could solve my problem here, but I want to
keep the UI as simple as possible. For Tool2..ToolN, I don't want to
have to always ask the user for the parameters for Tool1 as well as
the current tool.

So, is there a way for my tool to make Galaxy aware of an additional dataset?
Thanks
Dan
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] zombie undeletable files in toolshed, and versioning question

2012-09-18 Thread Greg Von Kuster
Hello Kevin,


On Sep 14, 2012, at 3:48 AM, kevyin wrote:

> Hi,
> I have a tool on the main galaxy toolshed called fastq_groomer_parallel.
> During an update, with a new tar.gz upload, I accidentally included an 
> unrelated big folder of stuff under the folder build/
> 
> The upload failed (with no helpful error message until I turned on debug mode)
> I went and browsed the files via the web interface and the folder is there, 
> but when I tried to delete it, it says "No changes to repository."
> Everything is normal, i.e. I can install this from a galaxy instance and the 
> build/ folder is not there.
> When I hg pull and push etc. the folder is not there.

I'm not sure how you were seeing anything related to a large folder named 
"build".  Looking at you change log, I see the following changeset revisions, 
none of which include this folder.

Description: First upload 0.3.0
Commit: 0:18a08d476d5e
added: 
README 
fastq_groomer_parallel.py 
fastq_groomer_parallel.xml


Description: Deleted selected files
Commit: 1:2f394cd7db91
removed: 
README 
fastq_groomer_parallel.py 
fastq_groomer_parallel.xml


Description: Uploaded
Commit: 2:cac848910bd8
added: 
README 
fastq_groomer_parallel.py 
fastq_groomer_parallel.xml


> 
> I have another question about versioning. So far, with my experimentation on 
> the testtoolshed, the version is only bumped when I upload via an archive, not 
> when I push to the repo.
> Is this meant to be the case?

This is probably not the case.  A new change set is created every time you make 
a change to the repository, either by uploading something new or by deleting 
one or more files.  Whenever a new change set is produced, it is associated 
with a revision number and string.  The following sections of the tool shed 
wiki may provide more clarification.

http://wiki.g2.bx.psu.edu/Tool%20Shed#The_mercurial_repository_change_log
http://wiki.g2.bx.psu.edu/Tool%20Shed#Repository_revisions:_uploading_a_new_version_of_an_existing_tool
http://wiki.g2.bx.psu.edu/Tool%20Shed#Repository_revisions:_valid_tool_versions



> Also "Get Updates" to a toolshed tool as a galaxy Admin. doesnt seem to work, 
> I basically have to install a new instance again, is this normal?

It depends - the following section of the tool shed wiki should clarify this:

http://wiki.g2.bx.psu.edu/Tool%20Shed#Getting_updates_for_tool_shed_repositories_installed_in_a_local_Galaxy_instance

Greg Von Kuster


> 
> Regards,
> Kevin.
> 
> 
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>  http://lists.bx.psu.edu/

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] Automatic installation of third party dependancies

2012-09-18 Thread Greg Von Kuster
Hello Lance,

I've just committed a fix for getting updates to installed tool shed 
repositories in change set 7713:23107188eab8, which is currently available only 
in the Galaxy central repository.  However, my fix will probably not correct 
the issue you're describing, and I'm still not able to reproduce this behavior. 
 See my inline comments...


On Sep 13, 2012, at 4:41 PM, Lance Parsons wrote:

> Actually, I think that is exactly the issue.  I DO have 3:f7a5b54a8d4f 
> installed. I've run into a related issue before, but didn't fully understand 
> it.
> 
> I believe what happened was:
> 1) I pushed revision 3:f7a5b54a8d4f to the tool shed which contained the 
> first revision of version 0.2 of the htseq-count tool.
> 2) I installed the htseq-count tool from the tool shed, getting revision 
> 3:f7a5b54a8d4f
> 3) I pushed an update to version 0.2 of the htseq-count tool. The only 
> changes were to tool-dependencies so I thought it would be safe to leave the 
> version number alone (perhaps this is the problem?)


You are correct in stating that the tool version number should not change just 
because you've added a tool_dependencies.xml file.  This is definitely not 
causing the behavior you're describing.


> 4) I attempted to get updates and ran into the issue I described.
> 
> I also ran into this (I believe it was with freebayes, but not sure) when I 
> removed (uninstalled) a particular revision of a tool. Then the tool was 
> updated. I went to install it, and it said that I already had a previous 
> revision installed and should install that. However, I couldn't since the 
> tool shed won't allow installation of old revisions of the same version of a 
> tool.

The following section of the tool shed wiki should provide the details about 
why you are seeing this behavior.  Keep in mind that you will only get certain 
updates to installed repositories from the tool shed.  This behavior enables 
updates to installed tool versions.  To get a completely new version of an 
installed tool (if one exists), you need to install a new (different) changeset 
revision from the tool shed repository.

http://wiki.g2.bx.psu.edu/Tool%20Shed#Getting_updates_for_tool_shed_repositories_installed_in_a_local_Galaxy_instance


> 
> Let me know if there is anything I can do to help sort this out.
> 
> Lance
> 
> Greg Von Kuster wrote:
>> 
>> Hi Lance,
>> 
>> What is the changeset revision that you installed?  It looks like you could 
>> only have installed one of the following 3 revisions:
>> 
>> 1:14e18dc9ed13
>> 2:f5d08224af89
>> 4:14bec14f4290
>> 
>> Since you could not have installed 3:f7a5b54a8d4f, I'm not quite sure how 
>> you could be trying to update to 4.  Did you install 4 and are trying to get 
>> updates?  
>> 
>> I've tried several things but am not able to reproduce this behavior, so 
>> it's difficult to determine what may be causing the problem.
>> 
>> Greg Von Kuster
>> 
>> On Sep 12, 2012, at 3:08 PM, Lance Parsons wrote:
>> 
>>> I've updated my development system now, and when I try to get updates for 
>>> that particular tool (htseq_count) I run into the following error.  Any 
>>> ideas on how I can/should fix this?  Thanks.
>>> 
>>> URL: 
>>> http://galaxy-dev.princeton.edu/admin_toolshed/update_to_changeset_revision?tool_shed_url=http://toolshed.g2.bx.psu.edu/&name=htseq_count&owner=lparsons&changeset_revision=f7a5b54a8d4f&latest_changeset_revision=14bec14f4290&latest_ctx_rev=4
>>> File 
>>> '/data/galaxy-dev/galaxy-dev/eggs/WebError-0.8a-py2.6.egg/weberror/evalexception/middleware.py',
>>>  line 364 in respond
>>> app_iter = self.application(environ, detect_start_response)
>>> File 
>>> '/data/galaxy-dev/galaxy-dev/eggs/Paste-1.6-py2.6.egg/paste/debug/prints.py',
>>>  line 98 in __call__
>>> environ, self.app)
>>> File 
>>> '/data/galaxy-dev/galaxy-dev/eggs/Paste-1.6-py2.6.egg/paste/wsgilib.py', 
>>> line 539 in intercept_output
>>> app_iter = application(environ, replacement_start_response)
>>> File 
>>> '/data/galaxy-dev/galaxy-dev/eggs/Paste-1.6-py2.6.egg/paste/recursive.py', 
>>> line 80 in __call__
>>> return self.application(environ, start_response)
>>> File 
>>> '/data/galaxy-dev/galaxy-dev/lib/galaxy/web/framework/middleware/remoteuser.py',
>>>  line 91 in __call__
>>> return self.app( environ, start_response )
>>> File 
>>> '/data/galaxy-dev/galaxy-dev/eggs/Paste-1.6-py2.6.egg/paste/httpexceptions.py',
>>>  line 632 in __call__
>>> return self.application(environ, start_response)
>>> File '/data/galaxy-dev/galaxy-dev/lib/galaxy/web/framework/base.py', line 
>>> 160 in __call__
>>> body = method( trans, **kwargs )
>>> File '/data/galaxy-dev/galaxy-dev/lib/galaxy/web/framework/__init__.py', 
>>> line 184 in decorator
>>> return func( self, trans, *args, **kwargs )
>>> File 
>>> '/data/galaxy-dev/galaxy-dev/lib/galaxy/web/controllers/admin_toolshed.py', 
>>> line 1469 in update_to_changeset_revision
>>> update_repository( repo, latest_ctx_rev )
>>> File '/data/galaxy-dev/galaxy-dev/lib/galaxy/ut

Re: [galaxy-dev] DRMAA: TypeError: check_tool_output() takes exactly 5 arguments (4 given)

2012-09-18 Thread Scott McManus
Sorry - that's changeset 7714:3f12146d6d81

-Scott

- Original Message -
> 
> Ok - that change was made. The difference is that the change
> is applied to the task instead of the job. It's in changeset
> 7713:bfd10aa67c78, and it ran successfully in my environments
> on local, pbs, and drmaa runners. Let me know if there are
> any problems.
> 
> Thanks again for your patience.
> 
> -Scott
> 
> - Original Message -
> > On Tue, Sep 18, 2012 at 3:09 PM, Jorrit Boekel
> >  wrote:
> > > Is it possible that you are looking at different classes?
> > > TaskWrapper's
> > > finish method does not use the job variable in my recently merged
> > > code
> > > either (line ~1045), while JobWrapper's does around line 315.
> > >
> > > cheers,
> > > jorrit
> > 
> > Yes exactly (as per my follow up email sent just before yours ;) )
> > 
> > Peter
> > 
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>   http://lists.bx.psu.edu/
> 
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] DRMAA: TypeError: check_tool_output() takes exactly 5 arguments (4 given)

2012-09-18 Thread Scott McManus

Ok - that change was made. The difference is that the change
is applied to the task instead of the job. It's in changeset
7713:bfd10aa67c78, and it ran successfully in my environments
on local, pbs, and drmaa runners. Let me know if there are 
any problems.

Thanks again for your patience.

-Scott

- Original Message -
> On Tue, Sep 18, 2012 at 3:09 PM, Jorrit Boekel
>  wrote:
> > Is it possible that you are looking at different classes?
> > TaskWrapper's
> > finish method does not use the job variable in my recently merged
> > code
> > either (line ~1045), while JobWrapper's does around line 315.
> >
> > cheers,
> > jorrit
> 
> Yes exactly (as per my follow up email sent just before yours ;) )
> 
> Peter
> 
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] Create a job

2012-09-18 Thread Alfredo Guilherme Silva Souza
Hello,

I need to create a job that returns a file. I need this job in addition to the
one already created during the execution of my tool, because the data it needs
to return is the result of a process that runs in the background.

Can someone help me?

Hugs.


-- 

*Alfredo Guilherme*

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

Re: [galaxy-dev] error uploading to galaxy from InterMine

2012-09-18 Thread Jennifer Jackson

Hello Fengyuan,

Thanks for reporting the problem. We believe that the issue with Object 
store has been resolved - would you please try this again and let us 
know if there are still problems?


Thanks!

Jen
Galaxy team

On 9/18/12 3:37 AM, Fengyuan Hu wrote:

Hi Dan,

We are trying to export some sequence to galaxy main from FlyMine but
get the error:

*Error executing tool: Unable to create output dataset: object store is
full*

After switching to galaxy test, it works.

The same error occurred in other mines, could anyone in your team take a
look?

Thanks
Fengyuan


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

   http://lists.bx.psu.edu/



--
Jennifer Jackson
http://galaxyproject.org
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/


Re: [galaxy-dev] Trackster caching

2012-09-18 Thread Fields, Christopher J
On Sep 15, 2012, at 11:05 PM, Jeremy Goecks  wrote:

>> is there a way to determine whether Trackster is the cause?
> 
> The only place where Trackster caches data is in the SummaryTreeDataProvider. 
> In galaxy-central, the relevant line is 709 in 
> lib/galaxy/visualization/genome/data_providers.py:
> 
> --
> CACHE = LRUCache( 20 ) # Store 20 recently accessed indices for performance
> --
> 
> Try setting the cache size low or to 0 and see if that addresses the memory 
> issue.
> 
> Best,
> J.

I'll try this in combination with a bump in the memory allocated to the VM; that 
should take care of it.  Thanks!

chris


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] error uploading to galaxy from InterMine

2012-09-18 Thread Fengyuan Hu

Hi Dan,

We are trying to export some sequence to galaxy main from FlyMine but 
get the error:


*Error executing tool: Unable to create output dataset: object store is 
full*


After switching to galaxy test, it works.

The same error occurred in other mines, could anyone in your team take a 
look?


Thanks
Fengyuan
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/

[galaxy-dev] Cuffmerge

2012-09-18 Thread Jennifer Jackson

Hi Ken,

You will need to install Cuffmerge and then link it in the same way that 
the other tools were added. This wiki page has the details: a link to the 
dependencies (versions) and how to configure paths and ENV setup:


http://wiki.g2.bx.psu.edu/Admin/Config/Tool%20Dependencies

If you need more help, please let us know. Please be sure to keep the 
galaxy-dev list on the cc - this group is great at troubleshooting local 
install issues.


Thanks!

Jen
Galaxy team

On 9/18/12 8:40 AM, Kenneth R. Auerbach wrote:

Hi Jen,

I have another question. Right now we have some RNA Analysis tools
(cufflinks, cuffdiff, and cuffcompare) but not cuffmerge. Can you tell
me how I would add it to the NGS:RNA Analysis group in our Galaxy
installation?

Thank you.
Ken.



--
Jennifer Jackson
http://galaxyproject.org
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/


[galaxy-dev] speed and large file question (after tying galaxy to Pittsburgh)

2012-09-18 Thread Joseph Hargitai
Hi,

General:
Is there a quantifiable difference in data uploads, speed of processing, 
general availability after the connection was/is established to Pittsburgh?

Specific: 
If we loaded data to a common directory on an XSEDE resource, would our 
students be able to fetch this data in a more timely manner to their galaxy 
accounts than from desktops etc.? For us it would be easy to populate a common 
directory once, and then our users could fetch/wget etc. from there, or if 
possible could such a directory be directly visible to our set of galaxy users? 
(this is what we do locally) - but one would need admin priv for that on the 
main instance. 

Or would you just do this with one galaxy user sharing out the files for the 
rest of a group working together?


thanks.
joe


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] python egg cache exists error

2012-09-18 Thread James Taylor
Interesting. If I'm reading this correctly the problem is happening
inside pkg_resources? (galaxy.eggs unzips eggs, but I think it does so
at install [fetch_eggs] time, not run time, which would avoid this). If
so this would seem to be a locking bug in pkg_resources. Dannon, we
could put a guard around the imports in extract_dataset_part.py as an
(overly aggressive and hacky) fix.

-- jt


On Tue, Sep 18, 2012 at 10:37 AM, Jorrit Boekel
 wrote:
> - which lead to unzipping .so libraries from python eggs into the nodes'
> /home/galaxy/.python-eggs
> - this runs into lib/pkg_resources.py and its _bypass_ensure_directory
> method that creates the temporary dir for the egg unzip
> - since there are 8 processes on the node, sometimes this method tries to
> mkdir a directory that was just made by the previous process after the
> isdir.
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] Redirect log files?

2012-09-18 Thread Fields, Christopher J
The server simply wouldn't start, but log files weren't generated either.  This 
could be a permissions issue but the folder is g+rw for galaxy, and if I use a 
modified run.sh that appends /var/spool/galaxy it works fine.  

I'll retry today, could be a heisenbug (or lack of coffee the first time 
`round).

chris

On Sep 16, 2012, at 1:34 PM, James Taylor  wrote:

> 
> 
> On Sunday, September 16, 2012, Fields, Christopher J wrote:
> This seems to correspond with what I have tried; I attempted setting 
> log_destination in the config file (as James suggested) and the server 
> wouldn't start
> 
> What was the error? This should work.
> 
>  
> , but changing run.sh to point to the location worked.  Not a problem, I'll 
> just create a modified run.sh script to not collide with hg updates.
> 
> chris
> 
> On Sep 12, 2012, at 2:32 PM, Nate Coraor  wrote:
> 
> > On Sep 12, 2012, at 1:09 PM, Fields, Christopher J wrote:
> >
> >> Simple question: is there a way within universe_wsgi.ini to point to a 
> >> specific location where Galaxy log (and possibly PID) files can be 
> >> dropped?  I've searched for this but couldn't find an answer beyond 
> >> locally modifying run.sh, but it's entirely possible I'm missing something 
> >> obvious.
> >>
> >> We want to have the logs stored on the server side in /var/log or 
> >> /var/spool, not on our NFS share.  The way run.sh is currently grabbing 
> >> server information it seems as if we need to have something like this in 
> >> the [server:*] config section:
> >>
> >> [server:/var/log/galaxy/web0]
> >>
> >> which doesn't seem right.  Maybe something like the following (per server)?
> >>
> >> [server:web0]
> >> use = egg:Paste#http
> >> port = 8080
> >> host = 127.0.0.1
> >> use_threadpool = true
> >> threadpool_workers = 7
> >> log_dir=/var/log/galaxy/
> >>
> >> or simply (for all log and/or PID files):
> >>
> >> log_dir=/var/log/galaxy/
> >
> > Hi Chris,
> >
> > It can't be done through the config file.  You have to use the --log-file 
> > and --pid-file from the command line.
> >
> > --nate
> >
> >>
> >> chris
> >> ___
> >> Please keep all replies on the list by using "reply all"
> >> in your mail client.  To manage your subscriptions to this
> >> and other Galaxy lists, please use the interface at:
> >>
> >> http://lists.bx.psu.edu/
> >
> 
> 
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>   http://lists.bx.psu.edu/
> 
> 
> -- 
> --jt (mobile)


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] python egg cache exists error

2012-09-18 Thread Jorrit Boekel

Hi again,

I have looked into this matter a little bit more, and it looks like this 
is happening:


- tasked job is split
- tasks commands are sent to workers (I am running 8-core high cpu extra 
large workers on EC2)

- per task, worker runs env.sh for the respective tool
- per task, worker runs scripts/extract_dataset_part.py
- this script issues import statements (ones for simplejson and 
galaxy.model.mapping have caused me problems)
- which lead to unzipping .so libraries from python eggs into the nodes' 
/home/galaxy/.python-eggs
- this runs into lib/pkg_resources.py and its _bypass_ensure_directory 
method that creates the temporary dir for the egg unzip
- since there are 8 processes on the node, sometimes this method tries 
to mkdir a directory that was just made by the previous process after 
the isdir check.


That last point is my guessing. I don't really know how to solve this in 
a non-hackish way, so until someone finds out, I may use reading from an 
'eggs_extracted.txt' file to determine if the eggs have been extracted. 
And locking the file when writing to it of course.
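
For what it's worth, the usual workaround for that race is to tolerate EEXIST
rather than serialising the processes; a minimal sketch (the path is just an
example, and this is not the pkg_resources code itself):

import errno
import os

def ensure_directory( path ):
    try:
        os.makedirs( path )
    except OSError as err:
        # Another task created the directory between the isdir() check and
        # mkdir(); that is fine, only re-raise real failures.
        if err.errno != errno.EEXIST:
            raise

ensure_directory( '/home/galaxy/.python-eggs/example.egg-tmp' )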


cheers,
jorrit

On 09/14/2012 10:57 AM, Jorrit Boekel wrote:

Dear list,

I am running galaxy-dist on Amazon EC2 through Cloudman, and am using 
the enable_tasked_jobs to run jobs in parallel. Yes, I know it's not 
recommended in production. My jobs usually get split in 72 parts, and 
sometimes (but not always, maybe in 30-50% of cases), errors are 
returned concerning the python egg cache, usually:


[Errno 17] File exists: '/home/galaxy/.python-eggs'

or something like

[Errno 17] File exists: 
'/home/galaxy/.python-eggs/simplejson-2.1.1-py2.7-linux-x86_64-ucs4.egg-tmp'


The errors arise AFAIK from when scripts/extract_dataset_part.py is 
run. I am guessing that the tmp python egg dir is created for every 
task of the mentioned 72, that they sometimes coincide and that this 
leads to an error.


I would like to solve this problem, but before doing so, I'd like to 
know if someone else has already fixed it in a galaxy-central changeset.


cheers,
jorrit

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/


Re: [galaxy-dev] DRMAA: TypeError: check_tool_output() takes exactly 5 arguments (4 given)

2012-09-18 Thread Peter Cock
On Tue, Sep 18, 2012 at 3:09 PM, Jorrit Boekel
 wrote:
> Is it possible that you are looking at different classes? TaskWrapper's
> finish method does not use the job variable in my recently merged code
> either (line ~1045), while JobWrapper's does around line 315.
>
> cheers,
> jorrit

Yes exactly (as per my follow up email sent just before yours ;) )

Peter
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] DRMAA: TypeError: check_tool_output() takes exactly 5 arguments (4 given)

2012-09-18 Thread Scott McManus

Thanks, Jorrit! That was a good catch. Yes, it's a problem with the TaskWrapper.
I'll see what I can do about it.

-Scott

- Original Message -
> Is it possible that you are looking at different classes?
> TaskWrapper's
> finish method does not use the job variable in my recently merged
> code
> either (line ~1045), while JobWrapper's does around line 315.
> 
> cheers,
> jorrit
> 
> 
> 
> 
> On 09/18/2012 03:55 PM, Scott McManus wrote:
> > I have to admit that I'm a little confused as to why you would
> > be getting this error at all - the "job" variable is introduced
> > at line 298 in the same file, and it's used as the last variable
> > to check_tool_output in the changeset you pointed to.
> > (Also, thanks for pointing to it - that made investigating easier.)
> >
> > Is it possible that there was a merge problem when you pulled the
> > latest set of code? For my own sanity, would you mind downloading
> > a fresh copy of galaxy-central or galaxy-dist into a separate
> > directory and see if the problem is still there? (I fully admit
> > that there could be a bug that I left in, but all job runners
> > should have stumbled across the same problem - the "finish" method
> > should be called by all job runners.)
> >
> > Thanks again!
> >
> > -Scott
> >
> > - Original Message -
> >> I'll check it out. Thanks.
> >>
> >> - Original Message -
> >>> Hi all (and in particular, Scott),
> >>>
> >>> I've just updated my development server and found the following
> >>> error when running jobs on our SGE cluster via DRMMA:
> >>>
> >>> galaxy.jobs.runners.drmaa ERROR 2012-09-18 09:43:20,698 Job
> >>> wrapper
> >>> finish method failed
> >>> Traceback (most recent call last):
> >>>File
> >>>"/mnt/galaxy/galaxy-central/lib/galaxy/jobs/runners/drmaa.py",
> >>> line 371, in finish_job
> >>>  drm_job_state.job_wrapper.finish( stdout, stderr, exit_code
> >>>  )
> >>>File "/mnt/galaxy/galaxy-central/lib/galaxy/jobs/__init__.py",
> >>>line
> >>> 1048, in finish
> >>>  if ( self.check_tool_output( stdout, stderr, tool_exit_code
> >>>  )
> >>>  ):
> >>> TypeError: check_tool_output() takes exactly 5 arguments (4
> >>> given)
> >>>
> >>> This looks to have been introduced in this commit:
> >>> https://bitbucket.org/galaxy/galaxy-central/changeset/f557b7b05fdd701cbf99ee04f311bcadb1ae29c4#chg-lib/galaxy/jobs/__init__.py
> >>>
> >>> There should be an additional jobs argument, proposed fix:
> >>>
> >>> $ hg diff lib/galaxy/jobs/__init__.py
> >>> diff -r 4007494e37e1 lib/galaxy/jobs/__init__.py
> >>> --- a/lib/galaxy/jobs/__init__.py Tue Sep 18 09:40:19 2012 +0100
> >>> +++ b/lib/galaxy/jobs/__init__.py Tue Sep 18 10:06:44 2012 +0100
> >>> @@ -1045,7 +1045,8 @@
> >>>   # Check what the tool returned. If the stdout or stderr
> >>>   matched
> >>>   # regular expressions that indicate errors, then set an
> >>>   error.
> >>>   # The same goes if the tool's exit code was in a given
> >>>   range.
> >>> -if ( self.check_tool_output( stdout, stderr,
> >>> tool_exit_code
> >>> ) ):
> >>> +job = self.get_job()
> >>> +if ( self.check_tool_output( stdout, stderr,
> >>> tool_exit_code,
> >>> job ) ):
> >>>   task.state = task.states.OK
> >>>   else:
> >>>   task.state = task.states.ERROR
> >>>
> >>>
> >>> (Let me know if you want this as a pull request - it seems a lot
> >>> of
> >>> effort for a tiny change.)
> >>>
> >>> Regards,
> >>>
> >>> Peter
> >>>
> >> ___
> >> Please keep all replies on the list by using "reply all"
> >> in your mail client.  To manage your subscriptions to this
> >> and other Galaxy lists, please use the interface at:
> >>
> >>http://lists.bx.psu.edu/
> >>
> > ___
> > Please keep all replies on the list by using "reply all"
> > in your mail client.  To manage your subscriptions to this
> > and other Galaxy lists, please use the interface at:
> >
> >http://lists.bx.psu.edu/
> 
> 
> 
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] DRMAA: TypeError: check_tool_output() takes exactly 5 arguments (4 given)

2012-09-18 Thread Jorrit Boekel
Is it possible that you are looking at different classes? TaskWrapper's 
finish method does not use the job variable in my recently merged code 
either (line ~1045), while JobWrapper's does around line 315.


cheers,
jorrit




On 09/18/2012 03:55 PM, Scott McManus wrote:

I have to admit that I'm a little confused as to why you would
be getting this error at all - the "job" variable is introduced
at line 298 in the same file, and it's used as the last variable
to check_tool_output in the changeset you pointed to.
(Also, thanks for pointing to it - that made investigating easier.)

Is it possible that there was a merge problem when you pulled the
latest set of code? For my own sanity, would you mind downloading
a fresh copy of galaxy-central or galaxy-dist into a separate
directory and see if the problem is still there? (I fully admit
that there could be a bug that I left in, but all job runners
should have stumbled across the same problem - the "finish" method
should be called by all job runners.)

Thanks again!

-Scott

- Original Message -

I'll check it out. Thanks.

- Original Message -

Hi all (and in particular, Scott),

I've just updated my development server and found the following
error when running jobs on our SGE cluster via DRMMA:

galaxy.jobs.runners.drmaa ERROR 2012-09-18 09:43:20,698 Job wrapper
finish method failed
Traceback (most recent call last):
   File
   "/mnt/galaxy/galaxy-central/lib/galaxy/jobs/runners/drmaa.py",
line 371, in finish_job
 drm_job_state.job_wrapper.finish( stdout, stderr, exit_code )
   File "/mnt/galaxy/galaxy-central/lib/galaxy/jobs/__init__.py",
   line
1048, in finish
 if ( self.check_tool_output( stdout, stderr, tool_exit_code )
 ):
TypeError: check_tool_output() takes exactly 5 arguments (4 given)

This looks to have been introduced in this commit:
https://bitbucket.org/galaxy/galaxy-central/changeset/f557b7b05fdd701cbf99ee04f311bcadb1ae29c4#chg-lib/galaxy/jobs/__init__.py

There should be an additional jobs argument, proposed fix:

$ hg diff lib/galaxy/jobs/__init__.py
diff -r 4007494e37e1 lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py   Tue Sep 18 09:40:19 2012 +0100
+++ b/lib/galaxy/jobs/__init__.py   Tue Sep 18 10:06:44 2012 +0100
@@ -1045,7 +1045,8 @@
  # Check what the tool returned. If the stdout or stderr matched
  # regular expressions that indicate errors, then set an error.
  # The same goes if the tool's exit code was in a given range.
-if ( self.check_tool_output( stdout, stderr, tool_exit_code ) ):
+job = self.get_job()
+if ( self.check_tool_output( stdout, stderr, tool_exit_code, job ) ):
  task.state = task.states.OK
  else:
  task.state = task.states.ERROR


(Let me know if you want this as a pull request - it seems a lot of
effort for a tiny change.)

Regards,

Peter


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

   http://lists.bx.psu.edu/


___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

   http://lists.bx.psu.edu/



___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

 http://lists.bx.psu.edu/


Re: [galaxy-dev] DRMAA: TypeError: check_tool_output() takes exactly 5 arguments (4 given)

2012-09-18 Thread Peter Cock
On Tue, Sep 18, 2012 at 2:55 PM, Scott McManus  wrote:
>
> I have to admit that I'm a little confused as to why you would
> be getting this error at all - the "job" variable is introduced
> at line 298 in the same file, and it's used as the last variable
> to check_tool_output in the changeset you pointed to.
> (Also, thanks for pointing to it - that made investigating easier.)
>
> Is it possible that there was a merge problem when you pulled the
> latest set of code? For my own sanity, would you mind downloading
> a fresh copy of galaxy-central or galaxy-dist into a separate
> directory and see if the problem is still there? (I fully admit
> that there could be a bug that I left in, but all job runners
> should have stumbled across the same problem - the "finish" method
> should be called by all job runners.)

I've not done a fresh install, but just browsing on bitbucket there
is an inconsistency in the self.check_tool_output(...) call signature:
https://bitbucket.org/galaxy/galaxy-central/src/5359d1066d91/lib/galaxy/jobs/__init__.py

e.g.

$ curl -s https://bitbucket.org/galaxy/galaxy-central/raw/5359d1066d91/lib/galaxy/jobs/__init__.py \
    | grep check_tool_output
if ( self.check_tool_output( stdout, stderr, tool_exit_code, job )):
def check_tool_output( self, stdout, stderr, tool_exit_code, job ):
if ( self.check_tool_output( stdout, stderr, tool_exit_code ) ):

Clearly the last occurrence of self.check_tool_output(...) is not
including the job argument (line 1048, in class TaskWrapper).

I suspect you need to have use_tasked_jobs = True in
universe_wsgi.ini to hit this code branch.

Peter
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] DRMAA: TypeError: check_tool_output() takes exactly 5 arguments (4 given)

2012-09-18 Thread Scott McManus

I have to admit that I'm a little confused as to why you would
be getting this error at all - the "job" variable is introduced 
at line 298 in the same file, and it's used as the last variable
to check_tool_output in the changeset you pointed to. 
(Also, thanks for pointing to it - that made investigating easier.)

Is it possible that there was a merge problem when you pulled the
latest set of code? For my own sanity, would you mind downloading 
a fresh copy of galaxy-central or galaxy-dist into a separate 
directory and see if the problem is still there? (I fully admit 
that there could be a bug that I left in, but all job runners 
should have stumbled across the same problem - the "finish" method
should be called by all job runners.)

Thanks again!

-Scott

- Original Message -
> 
> I'll check it out. Thanks.
> 
> - Original Message -
> > Hi all (and in particular, Scott),
> > 
> > I've just updated my development server and found the following
> > error when running jobs on our SGE cluster via DRMMA:
> > 
> > galaxy.jobs.runners.drmaa ERROR 2012-09-18 09:43:20,698 Job wrapper
> > finish method failed
> > Traceback (most recent call last):
> >   File
> >   "/mnt/galaxy/galaxy-central/lib/galaxy/jobs/runners/drmaa.py",
> > line 371, in finish_job
> > drm_job_state.job_wrapper.finish( stdout, stderr, exit_code )
> >   File "/mnt/galaxy/galaxy-central/lib/galaxy/jobs/__init__.py",
> >   line
> > 1048, in finish
> > if ( self.check_tool_output( stdout, stderr, tool_exit_code )
> > ):
> > TypeError: check_tool_output() takes exactly 5 arguments (4 given)
> > 
> > This looks to have been introduced in this commit:
> > https://bitbucket.org/galaxy/galaxy-central/changeset/f557b7b05fdd701cbf99ee04f311bcadb1ae29c4#chg-lib/galaxy/jobs/__init__.py
> > 
> > There should be an additional jobs argument, proposed fix:
> > 
> > $ hg diff lib/galaxy/jobs/__init__.py
> > diff -r 4007494e37e1 lib/galaxy/jobs/__init__.py
> > --- a/lib/galaxy/jobs/__init__.py   Tue Sep 18 09:40:19 2012 +0100
> > +++ b/lib/galaxy/jobs/__init__.py   Tue Sep 18 10:06:44 2012 +0100
> > @@ -1045,7 +1045,8 @@
> >  # Check what the tool returned. If the stdout or stderr
> >  matched
> >  # regular expressions that indicate errors, then set an
> >  error.
> >  # The same goes if the tool's exit code was in a given
> >  range.
> > -if ( self.check_tool_output( stdout, stderr,
> > tool_exit_code
> > ) ):
> > +job = self.get_job()
> > +if ( self.check_tool_output( stdout, stderr,
> > tool_exit_code,
> > job ) ):
> >  task.state = task.states.OK
> >  else:
> >  task.state = task.states.ERROR
> > 
> > 
> > (Let me know if you want this as a pull request - it seems a lot of
> > effort for a tiny change.)
> > 
> > Regards,
> > 
> > Peter
> > 
> ___
> Please keep all replies on the list by using "reply all"
> in your mail client.  To manage your subscriptions to this
> and other Galaxy lists, please use the interface at:
> 
>   http://lists.bx.psu.edu/
> 
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] DRMAA: TypeError: check_tool_output() takes exactly 5 arguments (4 given)

2012-09-18 Thread Scott McManus

I'll check it out. Thanks.

- Original Message -
> Hi all (and in particular, Scott),
> 
> I've just updated my development server and found the following
> error when running jobs on our SGE cluster via DRMMA:
> 
> galaxy.jobs.runners.drmaa ERROR 2012-09-18 09:43:20,698 Job wrapper
> finish method failed
> Traceback (most recent call last):
>   File "/mnt/galaxy/galaxy-central/lib/galaxy/jobs/runners/drmaa.py",
> line 371, in finish_job
> drm_job_state.job_wrapper.finish( stdout, stderr, exit_code )
>   File "/mnt/galaxy/galaxy-central/lib/galaxy/jobs/__init__.py", line
> 1048, in finish
> if ( self.check_tool_output( stdout, stderr, tool_exit_code ) ):
> TypeError: check_tool_output() takes exactly 5 arguments (4 given)
> 
> This looks to have been introduced in this commit:
> https://bitbucket.org/galaxy/galaxy-central/changeset/f557b7b05fdd701cbf99ee04f311bcadb1ae29c4#chg-lib/galaxy/jobs/__init__.py
> 
> There should be an additional jobs argument, proposed fix:
> 
> $ hg diff lib/galaxy/jobs/__init__.py
> diff -r 4007494e37e1 lib/galaxy/jobs/__init__.py
> --- a/lib/galaxy/jobs/__init__.py Tue Sep 18 09:40:19 2012 +0100
> +++ b/lib/galaxy/jobs/__init__.py Tue Sep 18 10:06:44 2012 +0100
> @@ -1045,7 +1045,8 @@
>  # Check what the tool returned. If the stdout or stderr matched
>  # regular expressions that indicate errors, then set an error.
>  # The same goes if the tool's exit code was in a given range.
> -if ( self.check_tool_output( stdout, stderr, tool_exit_code ) ):
> +job = self.get_job()
> +if ( self.check_tool_output( stdout, stderr, tool_exit_code, job ) ):
>  task.state = task.states.OK
>  else:
>  task.state = task.states.ERROR
> 
> 
> (Let me know if you want this as a pull request - it seems a lot of
> effort for a tiny change.)
> 
> Regards,
> 
> Peter
> 
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


Re: [galaxy-dev] environment variables and paths for toolshed tools

2012-09-18 Thread Greg Von Kuster
Hi Jim,

Thanks very much for catching this and providing the fix (I've been away from 
email for a few days, so am just getting caught up).  I've committed this in 
change set 7705:ba64c2178fbe.

Greg Von Kuster


On Sep 17, 2012, at 5:13 PM, Jim Johnson wrote:

> Greg,
> 
> I think  templates/admin/tool_shed_repository/common.mako  
> needs to bypass the 'set_environment' requirement in the following block.
> 
> Thanks,
> 
> JJ
> 
> 
> 
> $ hg diff templates/admin/tool_shed_repository/common.mako
> diff -r 65ecf4e0ed28 templates/admin/tool_shed_repository/common.mako
> --- a/templates/admin/tool_shed_repository/common.mako  Mon Sep 17 16:24:27 
> 2012 -0400
> +++ b/templates/admin/tool_shed_repository/common.mako  Mon Sep 17 16:06:52 
> 2012 -0500
> @@ -129,28 +129,30 @@
>  <% package_header_row_displayed = True %>
>  %endif
>  %for dependency_key, requirements_dict in 
> tool_dependencies.items():
> -<%
> -name = requirements_dict[ 'name' ]
> -version = requirements_dict[ 'version' ]
> -type = requirements_dict[ 'type' ]
> -install_dir = os.path.join( 
> trans.app.config.tool_dependency_dir,
> -name,
> -version,
> -
> repository_owner,
> -
> repository_name,
> -
> changeset_revision )
> -tool_dependency_readme_text = 
> requirements_dict.get( 'readme', None )
> -%>
> -%if not os.path.exists( install_dir ):
> -
> -${name}
> -${version}
> -${type}
> -${install_dir}
> -
> -%if tool_dependency_readme_text:
> - bgcolor="#CC">${name} ${version} requirements and installation 
> information
> - colspan="4">${tool_dependency_readme_text}
> +%if not dependency_key == 'set_environment':
> +<%
> +name = requirements_dict[ 'name' ]
> +version = requirements_dict[ 
> 'version' ]
> +type = requirements_dict[ 'type' ]
> +install_dir = os.path.join( 
> trans.app.config.tool_dependency_dir,
> +name,
> +version,
> +
> repository_owner,
> +
> repository_name,
> +
> changeset_revision )
> +tool_dependency_readme_text = 
> requirements_dict.get( 'readme', None )
> +%>
> +%if not os.path.exists( install_dir ):
> +
> +${name}
> +${version}
> +${type}
> +${install_dir}
> +
> +%if tool_dependency_readme_text:
> + bgcolor="#CC">${name} ${version} requirements and installation 
> information
> + colspan="4">${tool_dependency_readme_text}
> +%endif
>  %endif
>  %endif
>  %endfor
> 
> 
> 
> 
> On 9/16/12 8:48 AM, Jim Johnson wrote:
>> Greg, Pablo,
>> 
>> I'm using a SnpEffect repository 'snpeff_with_dep'  in 
>> http://testtoolshed.g2.bx.psu.edu/  
>> to test using tool_dependencies.xml with the <set_environment> tag.
>> ( You both write access, if you want to use this for correcting my errors or 
>> debugging. )
>> 
>> I'm getting an error in prepare_for_install:
>> 
>> Module galaxy.web.controllers.admin_toolshed:1174 in prepare_for_install
>> >>> too

[galaxy-dev] DRMAA: TypeError: check_tool_output() takes exactly 5 arguments (4 given)

2012-09-18 Thread Peter Cock
Hi all (and in particular, Scott),

I've just updated my development server and found the following
error when running jobs on our SGE cluster via DRMMA:

galaxy.jobs.runners.drmaa ERROR 2012-09-18 09:43:20,698 Job wrapper
finish method failed
Traceback (most recent call last):
  File "/mnt/galaxy/galaxy-central/lib/galaxy/jobs/runners/drmaa.py",
line 371, in finish_job
drm_job_state.job_wrapper.finish( stdout, stderr, exit_code )
  File "/mnt/galaxy/galaxy-central/lib/galaxy/jobs/__init__.py", line
1048, in finish
if ( self.check_tool_output( stdout, stderr, tool_exit_code ) ):
TypeError: check_tool_output() takes exactly 5 arguments (4 given)

This looks to have been introduced in this commit:
https://bitbucket.org/galaxy/galaxy-central/changeset/f557b7b05fdd701cbf99ee04f311bcadb1ae29c4#chg-lib/galaxy/jobs/__init__.py

There should be an additional jobs argument, proposed fix:

$ hg diff lib/galaxy/jobs/__init__.py
diff -r 4007494e37e1 lib/galaxy/jobs/__init__.py
--- a/lib/galaxy/jobs/__init__.py   Tue Sep 18 09:40:19 2012 +0100
+++ b/lib/galaxy/jobs/__init__.py   Tue Sep 18 10:06:44 2012 +0100
@@ -1045,7 +1045,8 @@
 # Check what the tool returned. If the stdout or stderr matched
 # regular expressions that indicate errors, then set an error.
 # The same goes if the tool's exit code was in a given range.
-if ( self.check_tool_output( stdout, stderr, tool_exit_code ) ):
+job = self.get_job()
+if ( self.check_tool_output( stdout, stderr, tool_exit_code, job ) ):
 task.state = task.states.OK
 else:
 task.state = task.states.ERROR


(Let me know if you want this as a pull request - it seems a lot of
effort for a tiny change.)
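
As an aside, the "(4 given)" in that traceback counts differently from what you
might expect: for a bound method Python includes self in the "takes exactly 5
arguments" figure, so a call missing only the new job parameter is reported as
4 given. A toy illustration, unrelated to Galaxy's real classes:

class Wrapper( object ):
    def check_tool_output( self, stdout, stderr, tool_exit_code, job ):
        return job

# Omitting the new 'job' argument reproduces the same style of error:
# TypeError: check_tool_output() takes exactly 5 arguments (4 given)
Wrapper().check_tool_output( 'out', 'err', 0 )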

Regards,

Peter
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/


[galaxy-dev] Fwd: none availability genome.fa files through mapping modules on local server

2012-09-18 Thread Ross
For the record - mixed tabs/spaces as delimiters in .loc files seem to
have been the problem (again) - maybe we should log errors when ragged
tab delimited loc tables are provided?
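
A sketch of what such a check could look like (illustrative only; the file
path and expected column count below are placeholders, since each .loc file
has its own layout):

def check_loc_file( path, expected_columns ):
    """Report lines that are not strictly tab-delimited or that have a ragged
    number of columns.  Returns a list of (line_number, message) pairs."""
    problems = []
    with open( path ) as handle:
        for lineno, line in enumerate( handle, 1 ):
            line = line.rstrip( "\n" )
            if not line or line.startswith( "#" ):
                continue
            fields = line.split( "\t" )
            if len( fields ) != expected_columns:
                problems.append( ( lineno, "expected %d tab-separated fields, found %d"
                                           % ( expected_columns, len( fields ) ) ) )
            if any( f != f.strip() for f in fields ):
                # Stray spaces around a field usually mean spaces were used as,
                # or mixed with, the delimiter.
                problems.append( ( lineno, "field has surrounding spaces" ) )
    return problems

# Example invocation; adjust path and column count to the .loc file in question.
for lineno, message in check_loc_file( "tool-data/example_index.loc", 4 ):
    print( "line %d: %s" % ( lineno, message ) )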

-- Forwarded message --
From: Sandrine Imbeaud 
Date: Tue, Sep 18, 2012 at 5:54 PM
Subject: Re: [galaxy-dev] none availability genome.fa files through
mapping modules on local server
To: Ross 


We have checked again the tab delimiters and it works.. Many thanks.
/ Sandrine

Le 9/14/2012 10:18 AM, Ross a écrit :

> make absolutely sure there are only tab delimiters between loc file
> fields - otherwise they fail.
>
> On Fri, Sep 14, 2012 at 5:58 PM, Sandrine Imbeaud
>  wrote:
>>
>> Hello,
>>
>> I apologize for this probably very simple application.
>> We have installed our own Galaxy server and started using in-house the NGS
>> modules. However, during the mapping procedure using either BWA for illumina
>> or BFAST tools, no reference genome index is available.
>> To solve the problem, we have followed the tutorials and have uploaded the
>> hg19.fa file and put it locally in the Galaxy-dist/database folder. We also
>> have modified the *_index.loc files indicating the path to the file. We
>> restarted the Galaxy server. However, still no reference is available
>> through the NGS mapping modules.
>> Is there anyone that may help use solving this probably simple problem?
>>
>> Kind regards
>> / Sandrine
>> ___
>> Please keep all replies on the list by using "reply all"
>> in your mail client.  To manage your subscriptions to this
>> and other Galaxy lists, please use the interface at:
>>
>>   http://lists.bx.psu.edu/
>
>
>


--
Sandrine Imbeaud
INSERM, UMR U-674, IUH
Université Paris Descartes

Génomique Fonctionnelle des tumeurs solides
27 rue Juliette Dodu
F75010 Paris, France
TEL: +33 (0)1 53 72 51 98
FAX: +33 (0)1 53 72 51 92
MOBILE: +33 (0)6 12 69 80 29
http://www.inserm-u674.net/



-- 
Ross Lazarus MBBS MPH;
Head, Medical Bioinformatics, BakerIDI; Tel: +61 385321444
http://scholar.google.com/citations?hl=en&user=UCUuEM4J

___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:

  http://lists.bx.psu.edu/