Hey Jens,
I have tried a few different things and I have been unable to
replicate the behavior locally.
Is this tool specific or configuration specific - i.e. do you see
this behavior only with a specific tool, or does, say, the Concatenate
Datasets tool experience it as well?
If it is tool specific, you (or the underlying application) may be
deleting the metadata_results_XXX files in the working directory as part
of the job. If you are convinced the tool is not deleting these files
but the problem is still tool specific, can you pass along the tool you
are using (or, better, a minimal version of the tool that produces this
behavior)?
If it is configuration specific and you can get it to happen with
many different tools, can you try to reproduce it against a clean copy
of galaxy-dist or galaxy-central and pass along the exact updates
(universe_wsgi.ini, job_conf.xml, etc.) that you made to configure
Galaxy so that it produces this error?
A couple more things - the code hasn't actually entered the body of
that if statement in jobs/__init__.py; it is still performing the first
check, external_metadata_set_successfully, when the error occurs - so I
don't think it is a problem with the retry_metadata_internally
configuration option.
Additionally, you are using the local job runner, so Galaxy will
always retry the metadata internally - the local job runner doesn't try
to embed the metadata calculation into the job the way the cluster job
runners do (so retry_metadata_internally doesn't really matter with the
local job runner... right now anyway). If, on the other hand, you want
the metadata calculation to be embedded in the local job runner's job
the way the cluster job runners do it (so that retry_metadata_internally
does in fact matter), an option was added to the local job runner last
release that I realized I hadn't documented - you can add
<param id="embed_metadata_in_job">True</param> to the local job
destination in job_conf.xml.
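For example, the destinations section of job_conf.xml might look like
this (just a sketch - the destination id "local" and the surrounding
structure are illustrative; only the embed_metadata_in_job param is the
option described above):

<!-- Sketch of a local destination with the new option enabled. -->
<destinations default="local">
    <destination id="local" runner="local">
        <param id="embed_metadata_in_job">True</param>
    </destination>
</destinations>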
More information about this new option here:
https://bitbucket.org/galaxy/galaxy-central/commits/75c63b579ccdd63e0558dd9aefce7786677dbacd
-John
On Wed, Jun 25, 2014 at 7:41 AM, Preussner, Jens
jens.preuss...@mpi-bn.mpg.de wrote:
Dear all,
I noticed a strange behaviour in our local Galaxy installation. First of
all, my universe_wsgi.ini contains “retry_metadata_internally = False” and
“cleanup_job = always”. The tool simply writes its output into the
job_working_directory and we move it via “mv static_filename.txt $output”
in the command tag. This works fine.
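For illustration, a command tag of that shape would look roughly like
this (a sketch only - the script and file names are placeholders, not
our actual tool):

<!-- Sketch: the tool writes static_filename.txt into the job working
     directory and the command tag then moves it to the output dataset. -->
<command>
    our_tool.sh $input ;
    mv static_filename.txt $output
</command>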
After a fresh restart of the Galaxy server, executing the tool works
fine and there are no errors.
When executing the same tool a second time, Galaxy reports a “tool error”
stating that it was unable to finish the job. Nevertheless, the output
files are all correct (but the datasets are marked red, i.e. failed).
The error report states:
Traceback (most recent call last):
  File "/home/galaxy/galaxy-dist/lib/galaxy/jobs/runners/local.py", line 129, in queue_job
    job_wrapper.finish( stdout, stderr, exit_code )
  File "/home/galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py", line 997, in finish
    if ( not self.external_output_metadata.external_metadata_set_successfully( dataset, self.sa_session ) and self.app.config.retry_metadata_internally ):
  File "/home/galaxy/galaxy-dist/lib/galaxy/datatypes/metadata.py", line 731, in external_metadata_set_successfully
    rval, rstring = json.load( open( metadata_files.filename_results_code ) )
IOError: [Errno 2] No such file or directory:
u'/home/galaxy/galaxy-dist/database/job_working_directory/000/59/metadata_results_HistoryDatasetAssociation_281_oHFjx0'
And in the logfile you can find multiple entries like this:
galaxy.datatypes.metadata DEBUG 2014-06-25 14:29:35,466 Failed to cleanup
external metadata file (filename_results_code) for
HistoryDatasetAssociation_281: [Errno 2] No such file or directory:
'/home/galaxy/galaxy-dist/database/job_working_directory/000/59/metadata_results_HistoryDatasetAssociation_281_oHFjx0'
The if-statement in /home/galaxy/galaxy-dist/lib/galaxy/jobs/__init__.py,
line 997 should evaluate to False, since
self.app.config.retry_metadata_internally is set to False in
universe_wsgi.ini - but it seems that it doesn't in this case?
Has anyone experienced such behavior? Any suggestions on how to proceed
and solve the issue?
Many thanks!
Jens