Dave,
I upgraded Galaxy to the Sep 7 release. The KeyError is gone as
expected.
But the deferred_job_queue error persists. We are running Galaxy on a
cluster (SGE) here. Is it possible that Manage Local Data needs to be
configured for the cluster environment?
Regards,
Derrick
On Mon, Sep 10,
Derrick,
I have not actually tested the local data manager on CloudBioLinux, but
that sounds like something worth looking into.
As for the rsync server mentioned in another email, the data manager
checks our rsync server first, then tries generating indexes if the
rsync server doesn't have
Hi Dave,
Thanks for the clarification. Our testing shows that the local data manager
works in vanilla Galaxy (simply hg clone, then run.sh). It fails on
our clustered instance, which has been configured to use multiple web
threads, one job manager, and two job handlers.
Local data manager
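[Editor's note: the clustered setup described above (multiple web threads, one job manager, two job handlers) would look roughly like this in Galaxy's main config file; the server names and ports are illustrative, not taken from the thread:]

```ini
; Sketch of a multi-process setup in universe_wsgi.ini
; (section names web0/manager/handler0/handler1 are hypothetical)
[server:web0]
use = egg:Paste#http
port = 8080
threadpool_workers = 10

[server:manager]
use = egg:Paste#http
port = 8079

[server:handler0]
use = egg:Paste#http
port = 8090

[server:handler1]
use = egg:Paste#http
port = 8091

[app:main]
; route job management to the manager and jobs to the two handlers
job_manager = manager
job_handlers = handler0,handler1
```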
Derrick,
The deferred_job_queue instance in the job manager object is only
instantiated when the enable_beta_job_managers variable is set to True.
Is it possible that your previous setting of that variable was somehow
reverted?
The KeyError you mentioned in your previous message should be
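[Editor's note: for reference, the setting Dave describes lives in Galaxy's main config file; a minimal sketch, assuming the file is universe_wsgi.ini:]

```ini
; In universe_wsgi.ini, under [app:main].
; The default is False, so deferred_job_queue is not created
; unless this is explicitly enabled.
[app:main]
enable_beta_job_managers = True
```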
Hi Dave,
enable_beta_job_managers is set to True in my universe ini file.
Unless it can be overridden somewhere else?
My Galaxy hasn't been patched to the Sep release, but I will give it a try.
I also downloaded the complete pre-built indexes from Galaxy's
recently released rsync
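[Editor's note: pulling pre-built indexes over rsync might look like the commands below; the server name and module path are assumptions, so verify them against the Galaxy wiki before running:]

```shell
# List what the Galaxy data rsync server offers
# (server name datacache.g2.bx.psu.edu is an assumption)
rsync rsync://datacache.g2.bx.psu.edu/indexes/

# Mirror a subset, e.g. hg19, into a local directory
rsync -avzP rsync://datacache.g2.bx.psu.edu/indexes/hg19/ /data/galaxy/indexes/hg19/
```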
Hi guys,
I found that the problematic loc files were created by CloudBioLinux and
point to the reference indexes downloaded by the CloudBioLinux script. I
suspect that Manage Local Data cannot recognize the file structure created
by CloudBioLinux?
Anyhow, I removed all the loc files from
Hi guys,
I enabled the tool with enable_beta_job_managers = True, and the server
runs fine. But after I clicked on Manage Local Data, it gave the following
error:
URL: http://pwbc.garvan.unsw.edu.au/galaxy_dev/data_admin/manage_data