Hi,
We are running Galaxy on the cluster with 'real user' job submission. It all
works fine, but at the end of the run we found something unexpected:
all the .dat files in the file_path folder are owned by the user running
Galaxy, but all the folders (e.g. dataset_25808_files) are still
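(Not from the original thread, but for anyone debugging the same thing: a small sketch, assuming a file_path directory you pass in yourself, that lists entries whose owner differs from the user running the check.)

```python
# Hypothetical helper: list entries under Galaxy's file_path whose owner
# differs from the user running this script (i.e. the Galaxy user).
import os


def owner_mismatches(file_path):
    """Return entry names in file_path not owned by the current user."""
    me = os.getuid()
    return [name for name in sorted(os.listdir(file_path))
            if os.stat(os.path.join(file_path, name)).st_uid != me]
```

Running it as the Galaxy user should return an empty list if ownership is consistent; any dataset_*_files directories left owned by the 'real user' would show up in the result.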
I only see HTTP messages. I haven't seen any other error; now I just
see the upload job never ending.
(I'll be away next week so we might have to pick up on this when I'm
back, if you don't hear from me)
10.101.10.34 - - [16/Nov/2012:14:36:46 -0400] POST /root/history_item_updates HTTP/1.1 200
I'm not getting the red dataset anymore after I changed the new_file_path and
restarted the Galaxy service.
Now when I upload, it just says 'Dataset is uploading' forever.
It seems like it's not trying to launch a job, right?
-Greg
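(An assumption on my part, not something suggested in the thread: a stuck upload after changing new_file_path is consistent with the new directory not existing or not being writable by the Galaxy user. A quick check could be sketched as:)

```python
# Hypothetical check: verify that the directory configured as new_file_path
# exists and is writable/traversable by the user running Galaxy.
import os


def is_usable_tmp_dir(path):
    """True if path is an existing directory the current user can write to."""
    return os.path.isdir(path) and os.access(path, os.W_OK | os.X_OK)
```

If this returns False for the configured path, upload jobs would have nowhere to stage their temporary files.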
On Fri, Nov 16, 2012 at 2:45 PM, Nate Coraor n...@bx.psu.edu wrote:
Dear all,
I have mapped some Illumina reads to a reference using Bowtie.
I am trying to use SAMtools to convert SAM to BAM on our local Galaxy, but it
always fails:
Traceback (most recent call last):
File "/export/galaxy/galaxy-central/lib/galaxy/jobs/runners/local.py", line 155, in run_job
On Sun, Nov 18, 2012 at 1:13 PM, Christophe Antoniewski
droso...@gmail.com wrote:
but the second script's output is empty. I suspect that the second script is
launched before the output of the first script is available.
I'm pretty sure we do not currently support multiple command tags.
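(Since only a single command tag is honored, the usual workaround is to chain both scripts in one command, e.g. with `&&`, so the second starts only after the first finishes. The same ordering guarantee, sketched in Python with hypothetical commands:)

```python
# Run two steps strictly in sequence: the second starts only after the
# first has exited successfully, so its output file is fully written.
import subprocess


def run_pipeline(first_cmd, second_cmd):
    subprocess.run(first_cmd, check=True)   # blocks until step 1 completes
    subprocess.run(second_cmd, check=True)  # step 2 sees step 1's output
```

`check=True` makes the first failure raise immediately, so the second step never runs against a missing or partial output.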
Scooter,
The ENCODE VM, which is available on Amazon Web Services and as a
downloadable VM[1], uses Galaxy's CloudMan component for managing
storage and compute resources. However, the Galaxy analysis interface
was not used. All of the code for the analysis performed in the ENCODE
integrated paper is
I want to add that this issue doesn't show up if I turn off
'use_tasked_jobs' for job splitting. I realize this feature is marked
as not ready for production, so I will stay away from it for the time
being.
Thanks,
Carlos
On Fri, Nov 16, 2012 at 5:13 PM, Carlos Borroto
carlos.borr...@gmail.com
I would like to recall the previous email. The error does happen even
with 'use_tasked_jobs' set to false. It looks like I have some issues
with my local Torque install that I need to resolve.
Thanks and sorry,
Carlos
On Mon, Nov 19, 2012 at 11:23 AM, Carlos Borroto
carlos.borr...@gmail.com
On Fri, Nov 16, 2012 at 2:35 PM, Joshua Orvis jor...@gmail.com wrote:
If I removed that line, how is it still part of the call? Is Galaxy caching
the tool XML files?
Did you restart Galaxy? Tool configuration files are read at startup.
You can also force a tool to be reloaded in the admin
Hello,
I'm writing because I've been trying for the past few days to configure
Galaxy to use Apache-based LDAP authentication, but have reached a point
where I'm basically stuck. The system is a virtual machine running:
- CentOS 5.8
- Apache 2.2.3
I'm trying to configure a Galaxy instance at
Thank you for the clarification. It would be an interesting project for a
Galaxy hackathon to move the ENCODE tools/workflows over to run in Galaxy.
On 11/19/12 11:21 AM, James Taylor ja...@jamestaylor.org wrote:
Scooter,
The ENCODE VM which is available on Amazon Web Services and as a
downloadable
On Nov 19, 2012, at 8:01 AM, Nicholas Tucker wrote:
Dear all,
I have mapped some Illumina reads to a reference using Bowtie.
I am trying to use SAMtools to convert SAM to BAM on our local Galaxy, but
it always fails:
Traceback (most recent call last):
File
On Nov 16, 2012, at 2:50 PM, greg wrote:
I'm not getting the red dataset anymore after I changed the new_file_path and
restarted the Galaxy service.
Now when I upload, it just says 'Dataset is uploading' forever.
It seems like it's not trying to launch a job, right?
That seems likely. I'd try
Hi devs!
After updating my local instance to changeset
19cbbaf566216cb46ecc6a6d17e0f1e0ab52978e, my tool, which submits a workflow via
the API and sets force_history_refresh to True, no longer refreshes the history.
The job is submitted via DRMAA to SGE. Unfortunately the job seems to run