[galaxy-dev] Test run frequency on TestToolShed

2014-08-18 Thread Peter Cock
Hi all,

Are the main and test tool-sheds currently meant to
be running the tool functional tests every 48 hours?

I created and updated these repositories last week,
but they have yet to be tested:

https://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_composition
https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast_rbh

Thanks,

Peter

P.S. It would be nice to be able to sort the 'Repositories I own' lists etc. by date (particularly for my typical workflow of posting an update to the TestToolShed, waiting for a green light from the tests, and then pushing this to the main ToolShed).
___
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] Test run frequency on TestToolShed

2014-08-18 Thread Peter Cock
On Mon, Aug 18, 2014 at 11:16 AM, Peter Cock p.j.a.c...@googlemail.com wrote:
 Hi all,

 Are the main and test tool-sheds currently meant to
 be running the tool functional tests every 48 hours?

 I created and updated these repositories last week,
 but they have yet to be tested:

 https://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_composition
 https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast_rbh

 Thanks,

 Peter

 P.S. It would be nice to be able to sort the 'Repositories I own' lists etc. by date (particularly for my typical workflow of posting an update to the TestToolShed, waiting for a green light from the tests, and then pushing this to the main ToolShed).

Perhaps something deeper is going on here - some older untested examples:

Revised 2014-07-30,
https://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_filter_by_id
https://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr

Revised 2014-07-31,
https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go

Peter


Re: [galaxy-dev] Upload file not working with new install

2014-08-18 Thread John Chilton
I have not seen this before - are you able to submit things as your Galaxy user from the command line? Are there logs on the PBS side that give any more clues?
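
A quick sanity check along these lines (the resource request and log location are assumptions about a typical Torque setup) would tell us whether basic submission works at all:

    su - galaxy
    echo hostname | qsub -l nodes=1:ppn=1
    qstat
    # on the pbs_server host, the daily server log usually records the reason for a refusal:
    tail /var/spool/torque/server_logs/$(date +%Y%m%d)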

-John

On Fri, Aug 15, 2014 at 2:01 PM, Iry Witham iry.wit...@jax.org wrote:
 Hi John,

 It looks like it may be related to PBS and Torque.  I am currently
 running torque-4.2.5.  I scrambled the pbs-python eggs and it is now
 running pbs-python 4.3.5.  The good thing is I am now getting an attempt
 to submit to pbs, but all jobs are failing.  This is the error:

 galaxy.jobs.handler DEBUG 2014-08-15 13:10:37,822 (16) Dispatching to pbs runner
 galaxy.jobs DEBUG 2014-08-15 13:10:41,110 (16) Persisting job destination (destination id: pbs:-l nodes=1:ppn=1,walltime=20:00:00)
 galaxy.jobs.handler INFO 2014-08-15 13:10:41,145 (16) Job dispatched
 galaxy.tools.deps DEBUG 2014-08-15 13:10:42,338 Building dependency shell command for dependency 'samtools'
 galaxy.tools.deps WARNING 2014-08-15 13:10:42,339 Failed to resolve dependency on 'samtools', ignoring
 galaxy.jobs.runners.pbs ERROR 2014-08-15 13:11:33,507 Connection to PBS server for submit failed: 15007: No permission


 Iry

 On 8/15/14 12:11 PM, John Chilton jmchil...@gmail.com wrote:

Okay - I just checked out a clean Galaxy without modifications and I don't see any problems with uploads. So this is likely something to do with your setup - if I had to guess, I would guess that your job_conf.xml configuration is somehow incorrect - the job is getting created but nothing is running it. Can you verify this by looking in the logs? I would imagine you do see lines like:

galaxy.tools.actions.upload_common INFO 2014-08-15 12:04:04,306 tool upload1 created job id 2

for the uploads but nothing like:

galaxy.jobs.runners DEBUG 2014-08-15 12:04:05,119 (2) command is: python /home/john/workspace/galaxy-central-fresh/tools/data_source/upload.py


You could also verify this is the problem by doing something other than uploading - maybe a data source tool? I have attached a tool that doesn't require any existing files or external data sources that I use for testing stuff like this - maybe see if it runs?

If no jobs run, can you attach your job_conf.xml and we can try to debug the problem from there.
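
For comparison, a minimal job_conf.xml in the style of the job_conf.xml.sample_basic that ships with galaxy-dist looks like this (just a sketch - the handler id must match your server name):

    <?xml version="1.0"?>
    <!-- sketch based on the shipped sample; adjust plugin/handler/destination ids to your setup -->
    <job_conf>
        <plugins>
            <plugin id="local" type="runner" load="galaxy.jobs.runners.local:LocalJobRunner" workers="4"/>
        </plugins>
        <handlers>
            <handler id="main"/>
        </handlers>
        <destinations default="local">
            <destination id="local" runner="local"/>
        </destinations>
    </job_conf>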

If other tools run, then it is likely not a job-related problem - in that case, can you answer some questions: Are you using a proxy - if so, which one? Can you attach the configuration? Is the upload.xml tool in your tool_conf.xml? Have you made modifications to it?

-John

On Fri, Aug 15, 2014 at 10:47 AM, Iry Witham iry.wit...@jax.org wrote:
 I have checked and everything appears correct.  I did notice that in the /database/tmp/upload_store/ there are files that, when I cat them, are the files I am/have attempted to upload.  However, they do not get moved to the /ftp directory.  I am wondering if this is due to a misconfiguration in the universe_wsgi.ini or if it could be related to proftpd or postgres.  Unfortunately I am under a very tight deadline and need to figure this out.  Any assistance will be appreciated.

 Thanks,
 Iry

 From: Michael Mason mma...@benaroyaresearch.org
 Date: Thursday, August 14, 2014 12:48 PM
 To: Iry Witham iry.wit...@jax.org, galaxy-dev@lists.bx.psu.edu
 Subject: Re: [galaxy-dev] Upload file not working with new install

 Not sure if this will help, but you may want to check this. We just fixed a problem with uploading small files via the ftp directory. Our nfs mount was set to cache file metadata about every 30 sec. This proved problematic when uploading a small fasta file that had previously uploaded.  Every time we tried uploading, it finished with no errors but was empty. Turning the nfs caching off fixed this.
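
 For reference, the relevant mount options are 'noac' (or 'actimeo=0'); an /etc/fstab entry might look like the following - the server, export, and mount point are placeholders:

     # 'noac' turns off NFS attribute caching so freshly-written files are seen immediately
     nfsserver:/export/galaxy_ftp  /galaxy/ftp  nfs  rw,hard,intr,noac  0 0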

 From: Iry Witham iry.wit...@jax.org
 Date: Thursday, August 14, 2014 9:34 AM
 To: galaxy-dev@lists.bx.psu.edu
 Subject: [galaxy-dev] Upload file not working with new install

 Hi Team,

 I have installed a fresh version of Galaxy and am having an issue with the 'Upload File' tool.  When I attempt to use it, it just sits and spins.  No data gets uploaded.  I have looked through the universe_wsgi.ini file and cannot find anything there that stands out.  Can you point me to a possible solution?

 Thanks,
 Iry


Re: [galaxy-dev] Test run frequency on TestToolShed

2014-08-18 Thread Dave Bouvier

Peter,

The tests are configured to run once every 48 hours, on even-numbered days of the month. I'll be looking into the three you mentioned in your second email to determine why they are not being tested, or why the test results are not being updated.
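
Purely as an illustration of that cadence (the real scheduling is handled on our side, and the script name below is hypothetical), an even-numbered-days cron entry would look like:

    # run at midnight on even-numbered days of the month (illustration only)
    0 0 2-30/2 * * /path/to/run_toolshed_functional_tests.sh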

--Dave B.
On 08/18/2014 07:28 AM, Peter Cock wrote:

On Mon, Aug 18, 2014 at 11:16 AM, Peter Cock p.j.a.c...@googlemail.com wrote:
 Hi all,

 Are the main and test tool-sheds currently meant to
 be running the tool functional tests every 48 hours?

 I created and updated these repositories last week,
 but they have yet to be tested:

 https://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_composition
 https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast_rbh

 Thanks,

 Peter

 P.S. It would be nice to be able to sort the 'Repositories I own' lists etc. by date (particularly for my typical workflow of posting an update to the TestToolShed, waiting for a green light from the tests, and then pushing this to the main ToolShed).

Perhaps something deeper is going on here - some older untested examples:

Revised 2014-07-30,
https://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_filter_by_id
https://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr

Revised 2014-07-31,
https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go

Peter


Re: [galaxy-dev] Test run frequency on TestToolShed

2014-08-18 Thread Saket Choudhary
Dave,

I have similar issues with my repositories:
https://testtoolshed.g2.bx.psu.edu/view/saketkc/pycurl_test

This was uploaded on 14th August.

https://testtoolshed.g2.bx.psu.edu/view/saketkc/package_pycurl_7_19_3_1

shows no test results for the month of August. The last test was run on 2014-07-28.

Saket

On 18 August 2014 06:33, Dave Bouvier d...@bx.psu.edu wrote:
 Peter,

 The tests are configured to run once every 48 hours, on even-numbered days
 of the month. I'll be looking into the three you mentioned in your second
 email to determine why they are not being tested, or why the test results
 are not being updated.

 --Dave B.

 On 08/18/2014 07:28 AM, Peter Cock wrote:

 On Mon, Aug 18, 2014 at 11:16 AM, Peter Cock p.j.a.c...@googlemail.com wrote:
  Hi all,
 
  Are the main and test tool-sheds currently meant to
  be running the tool functional tests every 48 hours?
 
  I created and updated these repositories last week,
  but they have yet to be tested:
 
  https://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_composition
  https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast_rbh
 
  Thanks,
 
  Peter
 
  P.S. It would be nice to be able to sort the 'Repositories I own' lists etc. by date (particularly for my typical workflow of posting an update to the TestToolShed, waiting for a green light from the tests, and then pushing this to the main ToolShed).

 Perhaps something deeper is going on here - some older untested examples:

 Revised 2014-07-30,
 https://testtoolshed.g2.bx.psu.edu/view/peterjc/seq_filter_by_id
 https://testtoolshed.g2.bx.psu.edu/view/peterjc/blastxml_to_top_descr

 Revised 2014-07-31,
 https://testtoolshed.g2.bx.psu.edu/view/peterjc/blast2go

 Peter


Re: [galaxy-dev] Upload file not working with new install

2014-08-18 Thread Iry Witham
Hi Nate,

We were able to get upgraded to pbs-python 4.4.0 after having to modify some C++ code that was causing errors with the _pbs.so.  We now have it running, but I am trying to get the references to work for the tools I have installed from the tool shed.  They can't seem to find the correct path.
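
For context, assuming a standard tool shed setup, the pieces involved are the tool_dependency_dir setting and the per-package env.sh files:

    # universe_wsgi.ini - where tool shed packages are installed; this path is an example
    tool_dependency_dir = /hpcdata/galaxy-test/tool_dependencies

    # each installed package provides an env.sh that job scripts source, laid out as:
    # <tool_dependency_dir>/<name>/<version>/<owner>/<repository>/<changeset>/env.sh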

Thanks,
Iry

Sent from my iPhone

On Aug 18, 2014, at 5:08 PM, Nate Coraor n...@bx.psu.edu wrote:

On Aug 18, 2014, at 9:55 AM, Iry Witham iry.wit...@jax.org wrote:

Hi John,

I was able to work through this since it was related to pbs on the cluster.  However, I am still having issues.  This is the error I am getting:

galaxy.tools.genome_index DEBUG 2014-08-18 09:31:56,819 Loaded genome index tool: __GENOME_INDEX__
galaxy.jobs.manager DEBUG 2014-08-18 09:31:56,825 Starting job handler
galaxy.jobs INFO 2014-08-18 09:31:56,825 Handler 'handler1' will load all configured runner plugins
galaxy.jobs.runners DEBUG 2014-08-18 09:31:56,829 Starting 5 LocalRunner workers
galaxy.jobs DEBUG 2014-08-18 09:31:57,073 Loaded job runner 'galaxy.jobs.runners.local:LocalJobRunner' as 'local'
galaxy.jobs.runners DEBUG 2014-08-18 09:31:57,104 Starting 3 LWRRunner workers
galaxy.jobs.runners.lwr_client.manager INFO 2014-08-18 09:31:57,197 Setting LWR client class to standard, non-caching variant.
galaxy.jobs DEBUG 2014-08-18 09:31:57,228 Loaded job runner 'galaxy.jobs.runners.lwr:LwrJobRunner' as 'lwr'
Traceback (most recent call last):
  File "/hpcdata/galaxy-test/galaxy-setup/galaxy-dist/lib/galaxy/webapps/galaxy/buildapp.py", line 39, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "/hpcdata/galaxy-test/galaxy-setup/galaxy-dist/lib/galaxy/app.py", line 144, in __init__
    self.job_manager = manager.JobManager( self )
  File "/hpcdata/galaxy-test/galaxy-setup/galaxy-dist/lib/galaxy/jobs/manager.py", line 23, in __init__
    self.job_handler = handler.JobHandler( app )
  File "/hpcdata/galaxy-test/galaxy-setup/galaxy-dist/lib/galaxy/jobs/handler.py", line 31, in __init__
    self.dispatcher = DefaultJobDispatcher( app )
  File "/hpcdata/galaxy-test/galaxy-setup/galaxy-dist/lib/galaxy/jobs/handler.py", line 583, in __init__
    self.job_runners = self.app.job_config.get_job_runner_plugins( self.app.config.server_name )
  File "/hpcdata/galaxy-test/galaxy-setup/galaxy-dist/lib/galaxy/jobs/__init__.py", line 496, in get_job_runner_plugins
    module = __import__( module_name )
  File "/hpcdata/galaxy-test/galaxy-setup/galaxy-dist/lib/galaxy/jobs/runners/pbs.py", line 32, in <module>
    raise Exception( egg_message % str( e ) )
Exception:

The 'pbs' runner depends on 'pbs_python' which is not installed or not
configured properly.  Galaxy's scramble system should make this installation
simple, please follow the instructions found at:

    http://wiki.galaxyproject.org/Admin/Config/Performance/Cluster

Additional errors may follow:
/hpcdata/galaxy-test/galaxy-setup/galaxy-dist/eggs/pbs_python-4.3.5-py2.6-linux-x86_64-ucs4.egg/_pbs.so: undefined symbol: log_record

Hi Iry,

It looks like we need to upgrade to pbs_python 4.4.0: https://oss.trac.surfsara.nl/pbs_python/ticket/34

The pbs_python 4.4.0 source is on our eggs server - could you update the version to 4.4.0 in /hpcdata/galaxy-test/galaxy-setup/galaxy-dist/eggs.ini, re-scramble the egg, and let us know if this fixes it?
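
Roughly, the steps are (the LIBTORQUE_DIR value is an assumption - point it at wherever your Torque libraries live):

    cd /hpcdata/galaxy-test/galaxy-setup/galaxy-dist
    # in eggs.ini, change:  pbs_python = 4.3.5  ->  pbs_python = 4.4.0
    LIBTORQUE_DIR=/usr/local/lib python scripts/scramble.py -e pbs_python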

Thanks,
--nate



Removing PID file handler1.pid


I am running the following:

galaxy@galaxy2:/hpcdata/galaxy-test/galaxy-setup/galaxy-dist$ qsub --about
HomeDir:    /var/spool/torque
InstallDir: /usr/local
Server:     rockhopper
BuildDir:   /root/torque-4.2.5
BuildUser:  root
BuildHost:  galaxy2
BuildDate:  Sat Aug 16 11:17:00 EDT 2014
Version:    4.2.5
Commit:     39f78e588b3a47a6c7bed1004ec7b5d0ccf24288


The handler .pid files are being removed.

Thanks,
Iry

On 8/18/14 9:16 AM, John Chilton jmchil...@gmail.com wrote:

I have not seen this before - are you able to submit things as your Galaxy user from the command line? Are there logs on the PBS side that give any more clues?

-John

On Fri, Aug 15, 2014 at 2:01 PM, Iry Witham iry.wit...@jax.org wrote:
Hi John,

   It looks like it may be related to PBS and Torque.  I am currently running torque-4.2.5.  I scrambled the pbs-python eggs and it is now running pbs-python 4.3.5.  The good thing is I am now getting an attempt to submit to pbs, but all jobs are failing.  This is the error:

galaxy.jobs.handler DEBUG 2014-08-15 13:10:37,822 (16) Dispatching to pbs runner
galaxy.jobs DEBUG 2014-08-15 13:10:41,110 (16) Persisting job destination (destination id: pbs:-l nodes=1:ppn=1,walltime=20:00:00)
galaxy.jobs.handler INFO 2014-08-15 13:10:41,145 (16) Job dispatched
galaxy.tools.deps DEBUG 2014-08-15 13:10:42,338 Building dependency shell command for dependency 'samtools'
galaxy.tools.deps WARNING 2014-08-15 13:10:42,339 Failed to resolve dependency on 'samtools', ignoring
galaxy.jobs.runners.pbs