Sorry for the late response. This error generally means that Galaxy has
written a job script and made it executable, but the file system and
operating system don't yet agree that the file is ready for execution.
I believe this can be caused, for instance, by NFS caching of file
system permissions, and it sometimes happens with Docker's overlay file
system as well. I don't yet have a good grasp on how to work around it
in every case. Galaxy tries to verify that the script is ready before
submitting the job, sleeping and retrying until the condition passes;
if it never does, this exception gets thrown. On some file systems the
check may fail even though the job would still run, so the check can be
disabled by setting check_job_script_integrity to False in Galaxy's
config file.
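
For reference, the check is roughly of this shape (a minimal sketch of
the idea, not Galaxy's actual code; the retry count and sleep interval
here are just illustrative):

    import os
    import time

    def script_integrity_ok(path, count=35, sleep=0.25):
        """Poll until a freshly written job script looks executable
        to the local OS, to ride out NFS/overlayfs attribute caching."""
        for _ in range(count):
            if os.path.exists(path) and os.access(path, os.X_OK):
                return True
            time.sleep(sleep)  # give the file system time to catch up
        return False

If you do decide to disable it, the setting looks like this in
galaxy.ini on releases of this era (newer releases use galaxy.yml, where
it would be "check_job_script_integrity: false"):

    check_job_script_integrity = False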

-John

On Mon, Nov 6, 2017 at 10:20 PM, John Letaw <le...@ohsu.edu> wrote:
> Hi all,
>
> I’m installing via GalaxyKickStart…
>
> I’m getting the following error:
>
> galaxy.jobs.runners ERROR 2017-11-06 19:14:05,263 (19) Failure preparing job
> Traceback (most recent call last):
>   File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/__init__.py", line 175, in prepare_job
>     modify_command_for_container=modify_command_for_container
>   File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/__init__.py", line 209, in build_command_line
>     container=container
>   File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/command_factory.py", line 84, in build_command
>     externalized_commands = __externalize_commands(job_wrapper, external_command_shell, commands_builder, remote_command_params)
>   File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/command_factory.py", line 143, in __externalize_commands
>     write_script(local_container_script, script_contents, config)
>   File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/util/job_script/__init__.py", line 112, in write_script
>     _handle_script_integrity(path, config)
>   File "/home/exacloud/lustre1/galaxydev/galaxyuser/lib/galaxy/jobs/runners/util/job_script/__init__.py", line 147, in _handle_script_integrity
>     raise Exception("Failed to write job script, could not verify job script integrity.")
> Exception: Failed to write job script, could not verify job script integrity.
>
> galaxy.model.metadata DEBUG 2017-11-06 19:14:05,541 Cleaning up external metadata files
> galaxy.model.metadata DEBUG 2017-11-06 19:14:05,576 Failed to cleanup MetadataTempFile temp files from /home/exacloud/lustre1/galaxydev/galaxyuser/database/jobs/000/19/metadata_out_HistoryDatasetAssociation_16_I8bhLX: No JSON object could be decoded
>
> I would like to further understand what it means to fail to verify the
> integrity of a job script. Does this just mean there is a permissions
> error? That ownership doesn’t match up?
>
> Thanks,
> John
___________________________________________________________
Please keep all replies on the list by using "reply all"
in your mail client.  To manage your subscriptions to this
and other Galaxy lists, please use the interface at:
  https://lists.galaxyproject.org/

To search Galaxy mailing lists use the unified search at:
  http://galaxyproject.org/search/
