[galaxy-dev] error after pulling latest updates

2013-06-05 Thread Branden Timm

Hi All,
  Just did an update to HEAD, and upon restarting the daemons received 
the following messages:


galaxy.jobs.handler DEBUG 2013-06-05 10:55:31,478 recovering job 2083 in condor runner

Traceback (most recent call last):
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/galaxy/webapps/galaxy/buildapp.py", line 35, in app_factory
    app = UniverseApplication( global_conf = global_conf, **kwargs )
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/galaxy/app.py", line 164, in __init__
    self.job_manager = manager.JobManager( self )
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/galaxy/jobs/manager.py", line 36, in __init__
    self.job_handler.start()
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/galaxy/jobs/handler.py", line 34, in start
    self.job_queue.start()
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/galaxy/jobs/handler.py", line 77, in start
    self.__check_jobs_at_startup()
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/galaxy/jobs/handler.py", line 125, in __check_jobs_at_startup
    self.dispatcher.recover( job, job_wrapper )
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/galaxy/jobs/handler.py", line 620, in recover
    self.job_runners[runner_name].recover( job, job_wrapper )
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/galaxy/jobs/runners/condor.py", line 243, in recover
    cjs.user_log = os.path.join( self.app.config.cluster_files_directory, '%s.condor.log' % galaxy_id_tag )
NameError: global name 'galaxy_id_tag' is not defined
Removing PID file main.pid

--
Branden Timm
bt...@energy.wisc.edu
___
Please keep all replies on the list by using reply all
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
 http://lists.bx.psu.edu/

To search Galaxy mailing lists use the unified search at:
 http://galaxyproject.org/search/mailinglists/


Re: [galaxy-dev] error after pulling latest updates

2013-06-05 Thread Branden Timm
I was able to work around this error by hacking the condor job runner; there were two obvious problems.  First, in recover(), galaxy_id_tag was not being set (hence the NameError).  Second, the same method was invoking self.__old_state_paths() with one argument when it clearly expects two.  The latter I just commented out.


243d242
< galaxy_id_tag = job_wrapper.get_id_tag()
246c245
< #self.__old_state_paths( cjs )
---
> self.__old_state_paths( cjs )

Obviously this is a hacky workaround, but I'd like to hear if anybody 
knows the cause of these errors and whether a patch should be submitted.
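
For context, here is roughly what the patched section of recover() in lib/galaxy/jobs/runners/condor.py looks like with the workaround applied.  Only the two marked lines come from the diff above and the cjs.user_log assignment from the traceback; everything else, including the cjs object itself, is assumed to already exist in the method, so treat this as a fragment rather than a verbatim copy:

def recover( self, job, job_wrapper ):
    # ... earlier body of the method, including construction of cjs ...
    galaxy_id_tag = job_wrapper.get_id_tag()    # added by the workaround (see diff above)
    cjs.user_log = os.path.join( self.app.config.cluster_files_directory,
                                 '%s.condor.log' % galaxy_id_tag )     # line 243 from the traceback
    # self.__old_state_paths( cjs )             # commented out by the workaround (see diff above)
    # ... rest of the method unchanged ...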


--
Branden Timm
bt...@energy.wisc.edu





[galaxy-dev] Error cleaning up Condor jobs

2013-05-08 Thread Branden Timm

Hi All,
  I've been working to configure a new Galaxy instance to run jobs under Condor.  Things are 99% working at this point, but after the Condor job finishes, Galaxy tries to clean up a cluster file that isn't there, namely the .ec (exit code) file.  Relevant log info:


galaxy.jobs DEBUG 2013-05-07 15:02:49,364 (1985) Working directory for 
job is: /home/GLBRCORG/galaxy/database/job_working_directory/001/1985
galaxy.jobs.handler DEBUG 2013-05-07 15:02:49,387 (1985) Dispatching to 
condor runner
galaxy.jobs DEBUG 2013-05-07 15:02:49,720 (1985) Persisting job 
destination (destination id: condor)

galaxy.jobs.handler INFO 2013-05-07 15:02:49,761 (1985) Job dispatched
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:56,368 (1985) 
submitting file /home/GLBRCORG/galaxy/database/condor/galaxy_1985.sh
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:56,369 (1985) command 
is: python 
/home/GLBRCORG/galaxy/galaxy-central/tools/fastq/fastq_to_fasta.py 
'/home/GLBRCORG/galaxy/database/files/000/dataset_3.dat' 
'/home/GLBRCORG/galaxy/database/files/002/dataset_2842.dat' ''; cd 
/home/GLBRCORG/galaxy/galaxy-central; 
/home/GLBRCORG/galaxy/galaxy-central/set_metadata.sh 
/home/GLBRCORG/galaxy/database/files 
/home/GLBRCORG/galaxy/database/job_working_directory/001/1985 . 
/home/GLBRCORG/galaxy/galaxy-central/universe_wsgi.ini 
/home/GLBRCORG/galaxy/database/tmp/tmpGe1JZJ 
/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/galaxy.json /home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_in_HistoryDatasetAssociation_3161_are5Bg,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_kwds_HistoryDatasetAssociation_3161_p73Yus,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_out_HistoryDatasetAssociation_3161_tLqep6,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_results_HistoryDatasetAssociation_3161_3QSW5X,,/home/GLBRCORG/galaxy/database/job_working_directory/001/1985/metadata_override_HistoryDatasetAssociation_3161_JUFvmk 


galaxy.jobs.runners.condor INFO 2013-05-07 15:02:58,960 (1985) queued as 15
galaxy.jobs DEBUG 2013-05-07 15:02:59,110 (1985) Persisting job 
destination (destination id: condor)
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:02:59,536 (1985/15) job 
is now running
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:07:16,966 (1985/15) job 
is now running
galaxy.jobs.runners.condor DEBUG 2013-05-07 15:07:17,279 (1985/15) job 
has completed
galaxy.jobs.runners DEBUG 2013-05-07 15:07:17,417 (1985/15) Unable to 
cleanup /home/GLBRCORG/galaxy/database/condor/galaxy_1985.ec: [Errno 2] 
No such file or directory: 
'/home/GLBRCORG/galaxy/database/condor/galaxy_1985.ec'

galaxy.jobs DEBUG 2013-05-07 15:07:17,560 setting dataset state to ERROR
galaxy.jobs DEBUG 2013-05-07 15:07:17,961 job 1985 ended
galaxy.datatypes.metadata DEBUG 2013-05-07 15:07:17,961 Cleaning up 
external metadata files


I've done a watch on the condor job directory, and as far as I can tell galaxy_1985.ec never gets created.  From a cursory look at lib/galaxy/jobs/runners/__init__.py and condor.py, it looks like the cleanup is happening in the AsynchronousJobState.cleanup() method, which iterates over the cleanup_file_attributes list.  I naively tried to override cleanup_file_attributes in CondorJobState to exclude 'exit_code_file', to no avail.
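
For reference, the attempted override amounted to something like the following inside CondorJobState (a simplified sketch from memory, not a verbatim copy of condor.py; it assumes cleanup_file_attributes is a plain list of attribute names on the job-state object):

# Drop the exit-code file from the attributes that
# AsynchronousJobState.cleanup() iterates over.
self.cleanup_file_attributes = [ attr for attr in self.cleanup_file_attributes
                                 if attr != 'exit_code_file' ]

As noted, this did not change the outcome.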


I'm hoping somebody can spot where the hiccup is here.  Another question on my mind is whether a failure to clean up cluster files should set the dataset state to ERROR.  An inspection of the output file from my job leads me to believe it finished just fine, and indicating failure to the user because Galaxy couldn't clean up a 1-byte exit-code file seems a little extreme to me.


Thanks!

--
Branden Timm
bt...@energy.wisc.edu


Re: [galaxy-dev] Error installing migrated tool deps

2013-05-08 Thread Branden Timm

Confirmed working, thanks!

-Branden

On 5/7/2013 1:25 PM, Dave Bouvier wrote:

Branden,

Thank you for reporting this issue; I've committed a fix in 9662:6c462a5a566d.  You should be able to re-run your tool migration after updating to that revision.


   --Dave B.





Re: [galaxy-dev] Error cleaning up Condor jobs

2013-05-08 Thread Branden Timm
Nate, thanks for the tip.  I'm adding some debugging info around that block now to inspect what is going on.

One thing I just remembered (it's been a while since I debugged Galaxy tools) - does Galaxy still treat ANY stderr output as an indication of job failure?  There are two warnings in the stderr for the job:


WARNING:galaxy.datatypes.registry:Error loading datatype with extension 'blastxml': 'module' object has no attribute 'BlastXml'
WARNING:galaxy.datatypes.registry:Error appending sniffer for datatype 'galaxy.datatypes.xml:BlastXml' to sniff_order: 'module' object has no attribute 'BlastXml'

--
Branden Timm
bt...@energy.wisc.edu


On 5/8/2013 10:17 AM, Nate Coraor wrote:


Hi Branden,

The ec file is optional, and the message that it's unable to be cleaned up is a red herring in this case.  The state is being set to ERROR, I suspect, because the check of its outputs on line 894 of lib/galaxy/jobs/__init__.py is failing:

  894     if ( self.check_tool_output( stdout, stderr, tool_exit_code, job ) ):

You might need to add some debugging to see where exactly this error 
determination is coming from.
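
For example, something along these lines just above that check can show which of the inputs is triggering the error path (illustrative only; 'log' is assumed to be the module-level logger already defined in that file, and job.id is just the job's database id):

log.debug( '(%s) tool_exit_code=%r, stdout=%r, stderr=%r',
           job.id, tool_exit_code, stdout, stderr )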

--nate



[galaxy-dev] Error installing migrated tool deps

2013-05-07 Thread Branden Timm
I recently upgraded to the latest galaxy-central, and was advised on first run that two tools in tool_conf.xml had been removed from the distribution but could be installed from the tool shed.  I ran the script that it generated; however, the script fails with the following messages:


No handlers could be found for logger galaxy.tools.parameters.dynamic_options
/home/GLBRCORG/galaxy/galaxy-central/eggs/pysam-0.4.2_kanwei_b10f6e722e9a-py2.6-linux-x86_64-ucs4.egg/pysam/__init__.py:1: RuntimeWarning: __builtin__.file size changed, may indicate binary incompatibility
  from csamtools import *
Repositories will be installed into configured tool_path location ../shed_tools
Skipping automatic install of repository ' bowtie_wrappers ' because it has already been installed in location ../shed_tools/toolshed.g2.bx.psu.edu/repos/devteam/bowtie_wrappers/0c7e4eadfb3c

Traceback (most recent call last):
  File "./scripts/migrate_tools/migrate_tools.py", line 21, in <module>
    app = MigrateToolsApplication( sys.argv[ 1 ] )
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/tool_shed/galaxy_install/migrate/common.py", line 76, in __init__
    install_dependencies=install_dependencies )
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/tool_shed/galaxy_install/install_manager.py", line 75, in __init__
    self.install_repository( repository_elem, install_dependencies )
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/tool_shed/galaxy_install/install_manager.py", line 319, in install_repository
    install_dependencies=install_dependencies )
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/tool_shed/galaxy_install/install_manager.py", line 203, in handle_repository_contents
    persist=True )
  File "/home/GLBRCORG/galaxy/galaxy-central/lib/tool_shed/util/metadata_util.py", line 594, in generate_metadata_for_changeset_revision
    tool, valid, error_message = tool_util.load_tool_from_config( app, app.security.encode_id( repository.id ), full_path )
AttributeError: 'MigrateToolsApplication' object has no attribute 'security'

Any help would be greatly appreciated.  Thanks!

--
Branden Timm
bt...@energy.wisc.edu



Re: [galaxy-dev] Version Error and Python 2.7

2011-05-03 Thread Branden Timm
Sorry to dig up this old thread, but how do you remove the check that prevents Galaxy from running on 2.7?  One of my developers just updated his workstation to Ubuntu 11.04 (Python 2.7), which is preventing him from running Galaxy.  I'm not sure whether it would be better to remove this check or to just install 2.6 alongside 2.7.


--
Branden Timm
bt...@glbrc.wisc.edu

On 2/2/2011 12:07 PM, Peter Cock wrote:

On Wed, Feb 2, 2011 at 6:02 PM, John David Osborne ozb...@uab.edu wrote:

I have (or rather had) a functional galaxy installation on a CentOS box running python 2.4. I did a mercurial update for the first time and broke my installation. It looks like some eggs are getting out of date, and when I run scramble.py I get an error (see below). I'm guessing this is because scramble expects python 2.5 or above for the import (import in 2.4 I don't believe takes arguments).

Sounds like a bug.


My question is does galaxy really still support 2.4 and should I move to
python 2.7 (the current stable version) because the wiki just mentions
versions 2.4 to 2.6 as being supported.

There is currently a check to prevent Galaxy running on Python 2.7,
but if you remove that at the very least you'll see lots of deprecation
warnings. So we decided to go with Python 2.6 here. See also:
https://bitbucket.org/galaxy/galaxy-central/issue/386/python-27-support
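
For orientation, the kind of guard being discussed generally looks something like this (an illustrative sketch only, not Galaxy's actual startup check):

import sys

# Refuse to start outside the supported interpreter range (2.4-2.6 at the time).
if sys.version_info[:2] < (2, 4) or sys.version_info[:2] >= (2, 7):
    sys.stderr.write( 'ERROR: Python 2.4, 2.5 or 2.6 is required\n' )
    sys.exit( 1 )

Removing or relaxing a guard like this lets the process start, but it does not make the deprecation warnings (or any real 2.7 incompatibilities) go away.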

I personally have no objection to Galaxy dropping support for Python 2.4 (in a timely manner, to give existing users time to plan); it would give a bit more flexibility when writing tools for Galaxy.

Peter



Re: [galaxy-dev] bwa failure preparing job

2011-04-26 Thread Branden Timm
Does anybody have any idea why I would be getting this error before the 
tool runs?


--
Branden Timm
Great Lakes Bioenergy Research Center
bt...@wisc.edu


Re: [galaxy-dev] bwa failure preparing job

2011-04-26 Thread Branden Timm

Vipin, thanks for the tip.  I was not aware of data tables at all.

I checked bwa_wrapper.xml and it is still using the from_file attribute 
for the reference genome parameter, not from_data_table.  It would 
appear then that BWA is not using data tables?  Additionally, I have now 
noticed that our bowtie runs are failing as well with the error:


AssertionError: Requested 'path' column missing from column def

I looked at bowtie_wrapper.xml, and it too seems to still be using 
from_file instead of from_data_table for the reference genome 
drop-down.  There is a line there using data tables, but it is commented 
out:


<!--<options from_data_table="bowtie_indexes"/>-->

I'm really confused as to what is going on here, but it seems like when I updated recently (for the first time since January, probably) it broke all of my location files, and I'm not sure how to fix them.  I'm also confused because, even using data tables, the format of my .loc files shouldn't need to change: both approaches use four tab-separated columns.
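
A quick way to sanity-check a .loc file is something like this (illustrative; the path is an example and should point at wherever the file actually lives):

# Confirm every non-comment line has exactly four tab-separated columns,
# i.e. the whitespace really is tabs rather than spaces.
for n, line in enumerate( open( 'tool-data/bwa_index.loc' ), 1 ):
    if line.strip() and not line.startswith( '#' ):
        fields = line.rstrip( '\n' ).split( '\t' )
        if len( fields ) != 4:
            print 'line %d: expected 4 tab-separated columns, got %d' % ( n, len( fields ) )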


As always, any help is greatly appreciated.

--
Branden Timm
bt...@glbrc.wisc.edu



On 4/26/2011 12:29 PM, Vipin TS wrote:

Hi Branden,

There is wiki documentation here:

https://bitbucket.org/galaxy/galaxy-central/wiki/DataTables

Hope this will help you experiment a bit.

regards, Vipin


[galaxy-dev] bwa failure preparing job

2011-04-19 Thread Branden Timm

Hi All,
  I'm having issues running BWA for Illumina with the latest version of 
Galaxy (5433:c1aeb2f33b4a).


It seems that the error is a python list error while preparing the job:

Traceback (most recent call last):
  File "/home/galaxy/galaxy-central/lib/galaxy/jobs/runners/local.py", line 58, in run_job
    job_wrapper.prepare()
  File "/home/galaxy/galaxy-central/lib/galaxy/jobs/__init__.py", line 371, in prepare
    self.command_line = self.tool.build_command_line( param_dict )
  File "/home/galaxy/galaxy-central/lib/galaxy/tools/__init__.py", line 1575, in build_command_line
    command_line = fill_template( self.command, context=param_dict )
  File "/home/galaxy/galaxy-central/lib/galaxy/util/template.py", line 9, in fill_template
    return str( Template( source=template_text, searchList=[context] ) )
  File "/home/galaxy/galaxy-central/eggs/Cheetah-2.2.2-py2.6-linux-x86_64-ucs4.egg/Cheetah/Template.py", line 1004, in __str__
    return getattr(self, mainMethName)()
  File "DynamicallyCompiledCheetahTemplate.py", line 106, in respond
IndexError: list index out of range

I checked the bwa_index.loc file for errors; it seems that the line for the reference genome I'm trying to map against is correct (all whitespace is tab characters):

synpcc7002  synpcc7002  Synechococcus  /home/galaxy/galaxy-central/bwa_indices/SYNPCC7002

I'm not sure what the next troubleshooting step is; any ideas?

--
Branden Timm
bt...@glbrc.wisc.edu

[galaxy-dev] listing the same tool twice in tool_conf.xml

2011-02-21 Thread Branden Timm

Hi All,
  My group is trying to create some custom sections in the Tools pane, and we have run into an issue.  We have a custom section, within which there are three different labels and several tools under each label.  We are trying to list the same tool under two different labels within the same section.  However, the second occurrence of the tool is not displayed.  Is it possible to list the same tool twice under the same section?


--
Branden Timm
bt...@glbrc.wisc.edu


Re: [galaxy-dev] bowtie hanging after execution

2011-01-13 Thread Branden Timm

Hi,
  Updating to the latest revision has solved the bowtie hanging 
problem.  Thanks very much for your help!


-Branden

On 1/13/2011 10:52 AM, Kelly Vincent wrote:

Hi,

We are running Bowtie 0.12.7 currently. I've updated the wiki to 
reflect the current version of the tools we're using (thanks for 
letting us know it was out of date!).


Regards,
Kelly


On Jan 13, 2011, at 11:39 AM, Branden Timm wrote:

Thanks for the tip, Kelly - I tested this on my workstation at 
revision 4640 and am seeing the same behavior.  The output SAM file 
is 1.9G.  I will pull the latest changes from the repository and 
re-test.


By the way, is 0.12.1 still the preferred version of bowtie for 
HEAD?  The NGSLocalSetup page indicates so, but seems a bit 
outdated.  I am running 0.12.1 currently.


--
Branden Timm
bt...@glbrc.wisc.edu

On 1/13/2011 12:30 AM, Kelly Vincent wrote:

Branden,

Which revision of Galaxy are you using? What this sounds like is 
that it is taking a long time to set the metadata on the resulting 
SAM file, i.e., after the Bowtie job has run. Prior to 
4698:48a81f235356 (12/1/10), all the lines in a large SAM file would 
be read to determine how many lines there were--this could take a 
very long time. But that changeset made it so that it gave up if the 
file was too large and did not set the number of lines. However, in 
4842:7933d9312c38 (1/12/11), this was changed so that if the file is 
too large, it generates a rough estimate of the number of lines.
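
Concretely, the estimation idea is along these lines (an illustrative sketch, not the code from that changeset): sample the start of the file and extrapolate from the average line length.

import os

def estimate_line_count( path, sample_bytes=1024 * 1024 ):
    # Read a fixed-size sample from the start of the file and scale the
    # newline count up by the ratio of total size to sample size.
    size = os.path.getsize( path )
    sample = open( path, 'rb' ).read( sample_bytes )
    if not sample:
        return 0
    newlines = sample.count( '\n' ) or 1
    avg_line_len = len( sample ) / float( newlines )
    return int( size / avg_line_len )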


Regards,
Kelly




[galaxy-dev] bowtie hanging after execution

2011-01-12 Thread Branden Timm

Hi All,
  I am seeing a strange problem with Galaxy and bowtie.  Here is the 
scenario:


1) I run bowtie for Illumina data (bowtie 0.12.5 Linux x86_64) on a 
fastqsanger input generated by FASTQ-Groomer
2) On the system, I see the bowtie_wrapper and bowtie subprocesses 
start, with bowtie distributing across the four cores in the system
3) The bowtie and bowtie_wrapper processes stop a few minutes later, but the history item still shows that it is running.  This goes on for about 20 minutes.  Paster.log shows constant POST history_item_updates activity every three seconds, and the Galaxy server process itself hogs 100% of one of the system's cores.


I've tried this both on our production galaxy site (RHEL 5.5, Python 
2.4.3) and my local workstation (RHEL 6, Python 2.6.5).


As part of my troubleshooting, I've extracted the bowtie_wrapper command 
from paster.log and run it on the command line.  The tool completes 
successfully in a few minutes, which confirms that the Galaxy server 
process seems to be the culprit in this situation, not bowtie_wrapper.


Any help would be greatly appreciated.  Cheers

--
Branden Timm
University of Wisconsin
Great Lakes Bioenergy Research Center
bt...@glbrc.wisc.edu
