This looks like Galaxy asking for part of a BAM file using a byte-range
request, but the server hosting the BAM file is not handling it. It is
probably a configuration error on that server, or perhaps in a proxy?
Peter
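For anyone debugging this kind of failure: the request in question is an ordinary HTTP GET carrying a `Range` header, and a server that supports byte ranges answers with status 206 and a matching `Content-Range` header. A minimal sketch of the header handling (the helper names and byte offsets here are illustrative, not anything from Galaxy's code):

```python
def make_range_header(start, end):
    """Build a Range header value for bytes start..end inclusive,
    e.g. make_range_header(0, 65535) -> 'bytes=0-65535'."""
    return "bytes=%d-%d" % (start, end)

def parse_range_header(value):
    """Parse 'bytes=START-END' back into (start, end) integers."""
    unit, _, spec = value.partition("=")
    if unit != "bytes":
        raise ValueError("unsupported range unit: %r" % unit)
    start, _, end = spec.partition("-")
    return int(start), int(end)

def range_honored(status, headers, start, end):
    """Return True if a response actually honored a bytes=start-end
    Range request: status 206 plus a matching Content-Range header.
    A server that ignores ranges typically returns 200 and the whole file."""
    if status != 206:
        return False
    content_range = headers.get("Content-Range", "")
    return content_range.startswith("bytes %d-%d/" % (start, end))
```

Checking the response with `range_honored` (e.g. against the headers returned by `urllib.request`) quickly distinguishes a server that serves partial content from one that falls back to sending the whole BAM file.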
On Mon, Mar 10, 2014 at 11:09 PM, Pete Schmitt <peter.r.schm...@dartmouth
Hi all,
We have a local tool whose role is to transfer (i.e. copy) a dataset file to a
directory on our NFS. This is extremely convenient, as it can be included within
workflows and therefore saves the time of clicking the download button (we also
have configurable renaming/compression as part of it).
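The core of such a tool can be a very small script. A minimal sketch, assuming the source dataset path, destination directory, optional new name, and a compression flag are passed in as tool parameters (the function name and parameters are invented for illustration, not the poster's actual tool):

```python
import gzip
import os
import shutil

def export_dataset(src, dest_dir, new_name=None, compress=False):
    """Copy a dataset file into dest_dir, optionally renaming it and
    gzip-compressing it on the way. Returns the path written."""
    name = new_name or os.path.basename(src)
    if compress and not name.endswith(".gz"):
        name += ".gz"
    dest = os.path.join(dest_dir, name)
    if compress:
        # Stream-compress rather than loading the whole file into memory.
        with open(src, "rb") as fin, gzip.open(dest, "wb") as fout:
            shutil.copyfileobj(fin, fout)
    else:
        shutil.copy(src, dest)
    return dest
```

Wrapped in a Galaxy tool definition, this lets a workflow's final step deposit results straight onto the NFS share instead of requiring a manual download.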
Hi all,
Some of our split BLAST jobs using the parallelism feature have
been failing (apparently for a while, so this is not a recent problem
from changes in the BLAST wrappers) as follows:
The Galaxy framework encountered the following error while attempting
to run the tool:
Traceback (most re
Are you using Apache? This might help:
http://lists.bx.psu.edu/pipermail/galaxy-user/2012-November/005508.html
https://wiki.galaxyproject.org/Admin/Config/ApacheProxy
Peter
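If Apache is proxying Galaxy, one commonly suggested arrangement (this is an assumption about what the linked threads describe, not a confirmed diagnosis of this case) is to let Apache serve dataset files directly via mod_xsendfile, which handles byte-range requests properly. The relevant pieces look roughly like this, with a placeholder path:

```apache
# Apache virtual host fragment; requires mod_xsendfile to be installed.
# /galaxy-dist/database/files is a placeholder for your file_path setting.
XSendFile on
XSendFilePath /galaxy-dist/database/files
```

This is paired with `apache_xsendfile = True` in Galaxy's `universe_wsgi.ini`, so Galaxy hands file delivery off to Apache via the X-Sendfile header.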
On Tue, Mar 11, 2014 at 1:16 PM, Pete Schmitt
wrote:
>
> Where would the configuration be in the galaxy server that would
Hello,
It's difficult to determine the cause of the problem in your environment based
on the details you've provided. If you have defined your tool_dependency_dir
configuration setting in your universe_wsgi.ini to be ../tool_dependencies,
then the tool shed installation process will install too
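If a relative `tool_dependency_dir` is the culprit (the advice above is truncated, so this is an assumption), the usual safeguard is to configure an absolute path so that it cannot resolve differently depending on the working directory:

```ini
; universe_wsgi.ini, [app:main] section -- the path is a placeholder
tool_dependency_dir = /opt/galaxy/tool_dependencies
```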
Dear developers,
I'm trying to launch an MPI version of a test tool.
To do so, I have a test XML called sleep.xml, but all it does is write
the hostname of the machine to an output file.
(command: /usr/lib64/openmpi/bin/mpirun hostname > $output)
When I launch this tool as a local job (we use
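For reference, a tool like the one described could look something like the sketch below (this is not the poster's actual sleep.xml; the hard-coded mpirun path just matches the command quoted above, and note the `>` redirect must be XML-escaped inside the command tag):

```xml
<tool id="mpi_hostname_test" name="MPI hostname test" version="0.0.1">
    <command>/usr/lib64/openmpi/bin/mpirun hostname &gt; $output</command>
    <outputs>
        <data name="output" format="txt" label="Hostnames"/>
    </outputs>
    <help>Writes the hostname(s) of the execution machine(s) to a text file.</help>
</tool>
```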
Hi Peter,
I think you need to have:
Sorry, can't test it right now.
Best,
Bjoern
On 10.03.2014 at 16:28, Peter Cock wrote:
Hi all,
An eagle eyed user has just spotted a bug in our BLAST wrappers
and/or the Galaxy framework itself.
1.
On Tue, Mar 11, 2014 at 2:58 PM, Björn Grüning
wrote:
> Hi Peter,
>
> I think you need to have:
> [quoted XML snippet stripped by the list archive]
> Sorry, can't test it right now.
That seems to work - thanks Björn :)
This seems to have exposed a bug in the framew
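For context, splitting in the BLAST wrappers is driven by a `<parallelism>` tag in the tool XML, along these lines (the attribute values here are illustrative and not necessarily what Björn suggested in the stripped snippet above):

```xml
<parallelism method="multi" split_inputs="query"
             split_mode="to_size" split_size="1000"
             merge_outputs="output1"/>
```

This tells Galaxy to split the query input into chunks of up to 1000 sequences, run them as separate jobs, and merge the named output afterwards.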
On Tue, Mar 11, 2014 at 11:44 AM, Peter Cock wrote:
> Hi all,
>
> Some of our split BLAST jobs using the parallelism feature have
> been failing (apparently for a while, so this is not a recent problem
> from changes in the BLAST wrappers) as follows:
>
>
> The Galaxy framework encountered the fo
Hi,
I have upgraded my local Galaxy to the latest stable version and it works fine,
but I get an error when I try to import workflows that were created in an older
version of Galaxy. Here is the error I keep getting:
Internal Server Error
Galaxy was unable to successfully complete your request
URL:
Hi All,
I've been able to submit jobs to the cluster through galaxy, it works great.
But when the job is in queue to run (it is gray in the galaxy history pane) and
I cancel the job, it still remains in queue on the cluster. Why does this
happen? How can I delete the jobs in queue as well? I tri
On Tue, Mar 11, 2014 at 5:57 PM, Ravi Alla wrote:
> Hi All,
> I've been able to submit jobs to the cluster through galaxy, it works great.
> But when the job is in queue to run (it is gray in the galaxy history pane)
> and I cancel the job, it still remains in queue on the cluster. Why does
> this
Hi All,
I have installed the recent version of Galaxy and am starting multiple web and
job handlers (six each) on a CentOS 5.1 machine.
It is working almost perfectly.
1. The first problem is that sometimes jobs never start and stay in the grey
state. After I kill them and start again, they work fine.
Hi Peter,
No, I am not using that option. It is currently set to false in my
universe_wsgi.ini file. It says it is a new feature and not recommended, so I
didn't mess with it.
Thanks
Ravi
On Mar 11, 2014, at 11:27 AM, Peter Cock wrote:
> Hi Ravi,
>
> Could you reply to the list?
>
> And actual
Dear Galaxy Developers,
I'm running Galaxy locally on Ubuntu and trying to run a workflow on multiple
datasets (separately). Occasionally when I try to run the workflow on an input
dataset I get an "Unable to finish job" error in some steps; most of the time
the problem is solved when I run the workflow ag
Hello all,
I would like to get galaxy to submit jobs to slurm using the drmaa.
Galaxy exists on its own machine, and slurm is run on a cluster.
I am at a bit of a loss as to how to do this. I have Galaxy installed
and working fine, and slurm works fine on its own. I saw a blog post
detailing
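As a starting point (a sketch assuming slurm-drmaa is installed on the Galaxy machine and can reach the cluster's controller; the destination id, partition, and time limit are placeholders), the relevant job_conf.xml pieces look roughly like:

```xml
<?xml version="1.0"?>
<job_conf>
    <plugins>
        <plugin id="drmaa" type="runner"
                load="galaxy.jobs.runners.drmaa:DRMAAJobRunner"/>
    </plugins>
    <destinations default="slurm_cluster">
        <destination id="slurm_cluster" runner="drmaa">
            <param id="nativeSpecification">--partition=main --time=24:00:00</param>
        </destination>
    </destinations>
</job_conf>
```

The DRMAA runner finds the Slurm library via the `DRMAA_LIBRARY_PATH` environment variable, which should point at the `libdrmaa.so` built by slurm-drmaa.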
Hi Sam,
Using static/scripts/galaxy.menu.js, I can successfully add/remove/change labels
in my local Galaxy, but I don't know how to change their positions, or the
Galaxy interface itself. Are there any details or guides that could help?
Thank you very much.
Regards, Shenwiyn
--
Hi List,
I just pulled and merged from galaxy-dist (I think it's dc067a95261d), updated
my database, and migrated my tools. The issues I am able to see are minor, but
I have a user who can no longer access a particular history. It apparently just
hangs. If they delete the history, Galaxy respon
Jillian,
We've connected our Galaxy instance to a slurm cluster via slurm-drmaa
[1], and use the DRMAAJobRunner plugin configured according to the
Galaxy admin wiki [2]. It works pretty well, but it should be noted
that the nativeSpecification options slurm-drmaa makes available to
you (that is, o
We've had the same problem since updating from the April 2013 stable
to Feb 2014 stable. Our jobs are going off to a slurm cluster via the
SlurmJobRunner plugin (though this was happening with the
DRMAAJobRunner plugin too, if I remember right). Removing pending
datasets occasionally removes the en