Dear Community,
The Galaxy Committers team is pleased to announce the release of Galaxy 19.09.
The release announcement for developers and admins can be found at
https://docs.galaxyproject.org/en/master/releases/19.09_announce.html
and user-facing release notes are at
Hi Pietro,
Thanks for trying that runner!
Some background here regarding the volume directory issue:
https://github.com/galaxyproject/galaxy/pull/3946
https://github.com/galaxyproject/galaxy/pull/3946/files/2099c09f6ab5a8f5951d01cd6fa67681618e2dda#r112457231
It looks to be a known and
Does your tool_conf.xml file have monitor="true" on the toolbox tag
(e.g. -
https://github.com/galaxyproject/galaxy/blob/dev/config/tool_conf.xml.sample#L2)?
I guess the tool reload option will only work in a single process
setup - but the monitoring with watchdog should be sufficient to not
need
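For reference, the top of that sample file looks something like this
(section contents trimmed to the sample's upload tool):

    <?xml version='1.0' encoding='utf-8'?>
    <toolbox monitor="true">
      <section id="getext" name="Get Data">
        <tool file="data_source/upload.xml" />
      </section>
    </toolbox>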
Sorry for the late response. This generally means that Galaxy thinks a
job script has been written and should be executable, but the file
system and operating system don't think the file is ready for
execution yet. This could be caused, for instance, by NFS caching of
file system permissions. I
On Thu, Nov 16, 2017 at 12:01 PM, Matthias Bernt wrote:
> Hi all,
>
> I tried to install velvetoptimizer (for the assembly tutorial in the GTN).
>
> It lists velvetoptimiser as requirement in the main xml and the
> tool_dependencies.xml (listing the installation steps). Now there
Well that is no good - I take it you have outputs_to_working_directory
set to True in your galaxy.ini? That option doesn't get a lot of
testing and it seems like it is interfering with
retry_metadata_internally = True. What version of Galaxy are you
running? Another workaround would be to try to
Thanks for the report,
I suspect the breaking change would be this -
https://github.com/galaxyproject/galaxy/pull/4563. I added a bunch of
tests for these API changes that I thought would ensure backward
compatibility but perhaps there is some sort of breakage here. I
opened a PR in response to
Unfortunately Galaxy has no native support for scheduled jobs or
workflow executions like this. Pretty much all computation in Galaxy
is kicked off by user action or external interaction via the API. So
if you want to place the actual computation inside of Galaxy as a
workflow or a tool you can -
Something like this is possible with some caveats. It is possible to
detect memory and walltime errors - not based on regexes in tools, but
by the job runner. So the SLURM runner implements detection of
out-of-memory errors and, I think, timeouts - I don't think most of
the other runners do.
On Tue, Aug 8, 2017 at 8:01 AM, Matthias Bernt wrote:
> Dear dev-list,
>
> here an answer to my own question.
>
> The problem was that the tool "Group data by a column and perform aggregate
> operation on other columns." returned an error:
>
> Traceback (most recent call last):
There is an open issue for these paths not working with vanilla Galaxy
tools https://github.com/galaxyproject/galaxy/issues/1676.
-John
On Wed, Aug 9, 2017 at 10:03 AM, Peter Cock wrote:
> I'm puzzled now too, cross reference
>
>
The history state being 'ok' isn't a great metric for determining if
the workflow is complete. The history state essentially only tells you
if there are datasets of certain states in the history. At the start
of the workflow - the invocation may be backgrounded and getting ready
to run so there
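If you are driving this with bioblend, polling the invocation itself is
a sounder check - a quick sketch (URL, key, and ids are placeholders):

    from bioblend.galaxy import GalaxyInstance

    gi = GalaxyInstance("https://usegalaxy.example.org", key="...")
    # "scheduled" means every step has been scheduled - individual jobs
    # may still be queued or running, so check the step jobs if needed.
    invocation = gi.workflows.show_invocation(workflow_id, invocation_id)
    print(invocation["state"])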
> hi,
>
> i have question when submitting jobs as real user
>
> defaults = "$galaxy_root:default_ro,$tool_directory:default_ro"
>
> singularity gives a WARNING since the home dir is already mounted
> "WARNING: Not mounting requested bind point (already mounted in
Sorry for the late response - but earlier this week Björn Grüning and I
added various support to Galaxy's development branch for Singularity.
The following pull request added Singularity support in job running (
https://github.com/galaxyproject/galaxy/pull/4175) - here job destinations
may
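A destination enabling it would look roughly like this - the image name
is just an example and I haven't tested this exact snippet:

    <destination id="singularity_local" runner="local">
      <param id="singularity_enabled">true</param>
      <!-- fallback image for tools without an explicit container -->
      <param id="singularity_default_container_id">busybox:ubuntu-14.04</param>
    </destination>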
(Once more with the list cc'ed - sorry for the duplicate message.)
The IDs are encoded before they are shared out via the API so that raw
integers aren't exposed. The ID the API returns is generally the value
in the id column for that table.
Eric Rasche kindly contributed a script to the
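If you want to encode or decode these yourself, something along these
lines has worked for me - the module path may differ between releases,
and id_secret must match the value in your galaxy.ini:

    from galaxy.web.security import SecurityHelper

    helper = SecurityHelper(id_secret="USING THE DEFAULT IS NOT SECURE!")
    raw_id = helper.decode_id("f2db41e1fa331b3e")  # encoded -> integer
    encoded = helper.encode_id(raw_id)             # integer -> encoded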
I've never seen anything like this - have you modified Galaxy's
requirements.txt file or are you using the one that ships with Galaxy?
On Fri, May 12, 2017 at 11:20 AM, Yip, Miu ki wrote:
> Hi all,
>
> When attempting to start up Galaxy, via the run.sh script, we’re getting the
>
Sorry for the delayed response on this - I just wanted to confirm that
yes Galaxy does this for uploaded files (if tools produce these line
endings we would not touch those). I believe this can be disabled by
clicking on the settings icon for a particular upload row in the
upload form and
Do you have ``nginx_x_accel_redirect_base = /_x_accel_redirect`` set in
your Galaxy ini file?
Do you know what user nginx runs as?
If yes, do you know if it has access to /home/galaxy/Software/galaxy/
database/files/000/dataset_140.dat?
I'd su to become that nginx user and just make sure it can
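For reference, the nginx side is usually just a small internal
location - an untested sketch:

    location /_x_accel_redirect/ {
        internal;
        alias /;
    }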
Did you figure this out?
It looks like your jobs are not detecting Galaxy's virtualenv and so
they are getting a dependency (in this case mercurial) from Galaxy's
root packages.
I'd review this document:
https://docs.galaxyproject.org/en/master/admin/framework_dependencies.html#galaxy-job-handlers
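One way to force jobs onto Galaxy's virtualenv is sourcing its activate
script in the job destination - a sketch with made-up paths:

    <destination id="cluster" runner="drmaa">
      <!-- source Galaxy's virtualenv so jobs don't fall back to
           system packages -->
      <env file="/home/galaxy/galaxy/.venv/bin/activate" />
    </destination>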
I've never seen anything like this before - I am sorry. Hopefully
someone else will have a more concrete idea.
Does that Conda executable work on the command line?
I'd try removing the whole Conda folder and rebuilding it. Is it
possible the install was corrupted at some point - maybe during
Thanks for your interest in this topic. The collection operations
exist the way they do as tools distributed with the core framework
because they can't be expressed as normal tools and they utilize
abstractions that I don't consider public at this time (or really have
any confidence in making
Hello,
Thanks for working on this - Pulsar still hasn't quite caught up with
Galaxy in terms of support for Conda but we are getting there. So I
noticed two things that should help today when I was trying to
recreate your problem - the first is that Conda support in Pulsar
requires this PR
I perhaps don't understand the question. As long as the "datasets" are
Galaxy datasets and they appear in the command line specified by the
"command" block of your tool - they should be transmitted. When you say
a "dataset list of files" - do you mean (1) a Galaxy collection, (2) a
set of files
So the PR that broke your tools is probably here:
https://github.com/galaxyproject/galaxy/pull/3364/files. That pull
request removed Galaxy from
the Python path of Galaxy tools - this gives tools a much cleaner
environment and prevents certain conflicts between Conda and Galaxy.
As part of that
I recently reworked the dependency resolver documentation and made
this explicit. I don't think the previous revision of that document
mentioned this important caveat at all. Thanks for Galaxy-ing!
Thanks,
-John
On Tue, Jan 24, 2017 at 7:01 AM, Peter Briggs
wrote:
At first glance I think the problem is that when you use the
``interpreter`` tag Galaxy has really broken logic for modifying the
command line to use that. ln works just like cp and mv. Those too
would be broken for this tool because of the interpreter attribute I
think. For this reason
I don't know what the particular issue is - I do know that after a lot
of trial and error we were able to get Postgres + SSL working at MSI
before I left my previous job there - so there is hope.
I will say however, upgrading even to one newer Galaxy - namely 16.01
might solve this issue. We
Glad this almost worked - I'm not sure what the problem is. I'd open
the file /cluster/galaxy/pulsar/pulsar/managers/util/drmaa/__init__.py
- and add some logging right before this line - (return
self.session.jobStatus(str(external_job_id))).
log.info("Fetching job status for %s" % external_job_id)
Can you clarify this? Do you want access to the job id during the job
execution or do you want to see it after the job is complete? It is
currently available after the job is complete using the "i" (view
details) button. If you want access to the job id as part of the tool
execution that is
I think this is caused because Galaxy needs to have samtools available
on its PATH. Is this an older version of Galaxy? I feel like newer
versions maybe make this error more clear.
Hope this helps - hopefully we will solve this problem very soon by
switching to pysam for these operations so an
It looks like your library conditional is wrapped in a section called
"basicOptions". For this reason, when it appears in your command
block you need to use $basicOptions.library.type, for instance, instead
of $library.type. Hope this helps and thanks for using Galaxy!
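For instance, the nesting probably looks something like this (names
other than yours are made up):

    <section name="basicOptions" title="Basic Options">
      <conditional name="library">
        <param name="type" type="select" label="Library type">
          <option value="single">single</option>
          <option value="paired">paired</option>
        </param>
        <when value="single" />
        <when value="paired" />
      </conditional>
    </section>

and then in the command block: $basicOptions.library.type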
-John
On Wed, Oct 26, 2016 at 11:39
In general we discourage this - but there is some conversation about
this here - at least as it pertains to the instance URL and API key:
http://dev.list.galaxyproject.org/find-UUID-of-current-history-in-tool-XML-wrapper-td4667113.html
The workflow is more tricky - if you are sure the job is
Hello Zipho,
Per your request I have reviewed the linked issues and created a
list of relatively easy things that I think can be done to improve
the current situation here:
https://github.com/galaxyproject/galaxy/issues/2980#issuecomment-250777589.
There are a lot of bigger things that need to
Any follow up on this? I have never seen this - but it would be worth
specifying a full absolute path for conda_prefix in galaxy.ini and see
if that fixes it. If it does - I can open a PR that makes sure this
variable is always an absolute path.
On Thu, Sep 1, 2016 at 11:37 PM, Léo Biscassi
This usually means there is an error of some kind - perhaps a missing
tool. Sorry it is not more transparent - I'd check the Galaxy logs for
an error message if possible or open the JavaScript console of your
web browser
to a bunch of things that are opaque to a separate
> script. Would it be a useful goal to have a richer galaxy-tool interface that
> could make all information available to the tool wrapper visible to my Python
> script? One way to do that would just be to bundle everything up in JSON
Yup - thanks for the bug report. I have used your rna example to build
a minimal-ish example to fit into Galaxy's test tools framework here
https://github.com/galaxyproject/galaxy/pull/2795. I also tested in
16.01 and it worked - so this clearly broke in 16.04. I'll see if I
can track it down and
Just to provide a little more detail on the collection point... Daniel
Blankenberg presented a talk at the Galaxy Community Conference on
metagenomics that talked a lot about "large" collection handling in
Galaxy - https://gcc16.sched.org/event/5Y0M/metagenomics-with-galaxy.
I think his rule of
Thanks for the questions - I have tried to revise the planemo docs to
be more explicit about what collection identifiers are and where they
come from
(https://github.com/galaxyproject/planemo/commit/a811e652f23d31682f862f858dc792c1ef5a99ce).
- It needs to be a tab not spaces between the fields in that loc file
- can you confirm that it is a tab? (Many editors will implicitly
replace tabs with spaces.)
- You need to restart Galaxy after updating that file - can you
confirm it has been restarted?
-John
On Fri, Jul 22, 2016 at 11:34
I can't confirm it is going to work without negative repercussions - I
can only say that I cannot think of any non-obvious problems that
would result from doing this. It seems like it is going to be the
right thing to do in your case (but no promises :)).
If it works, it would be super awesome if
This information is not readily accessible during tool command line
generation. You can access some extra data with the $__user__ object -
it is the underlying model object. Are you hacking the session
information with extras or do you want stuff already associated with
the Galaxy session such as
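For instance, something like this in the command block works for
attributes that live on the model (email here is an assumption about
what you are after):

    <command><![CDATA[
        echo '${__user__.email}' > '$output'
    ]]></command>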
I think newer versions of Galaxy give a better error message - I am
pretty sure the problem is you are missing samtools. It needs to be
available on the PATH for Galaxy if you use bam data.
-John
On Fri, Jun 3, 2016 at 1:00 PM, Anthony Underwood
wrote:
>
>
> Jobs
It is probably worth looking at this Galaxy issue - this may have
little to do with Docker, or the Docker server might have a different
time configured than a local one. History updating is broken for many
clients in 16.04 and I am unsure if the intention is to fix it or
not.
Resending because I forgot to reply-all:
Older versions of Planemo should just respect an 'export
TMPDIR="/new/tmp"' before calling planemo. I think recently we
introduced a workaround for some conda problems
(https://github.com/galaxyproject/planemo/pull/460) that causes this
not to be
I don't think Galaxy and Planemo can co-exist in the same conda
environment - because planemo requires galaxy-lib
(https://github.com/galaxyproject/galaxy-lib) which cannot be
installed in Galaxy's environment.
I don't understand the intricacies of what you are trying to do or why
it worked in
Thanks for the report and the workaround. I have opened a Pull Request
to make this truncation the default behavior in Galaxy 16.04.
https://github.com/galaxyproject/galaxy/pull/2265
-John
On Thu, Apr 28, 2016 at 5:43 AM, Tiziano Flati wrote:
> Solved.
>
> For your
Did you ever figure this out? I cannot think of anything that would
cause this - if the job conf is working with Galaxy it should work
with planemo. Are you sure Galaxy was actually using the configuration
and not just the local runner?
Is it possible you've modified run.sh to modify the
This mapping is not automatic - you need to write a small Python
method to take these parameters specified by the user and map them to
your cluster parameters. These methods are called dynamic job
destinations and described on the wiki at:
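A minimal rule looks something like this sketch - the runner and the
memory mapping are made up, and the function lives in
lib/galaxy/jobs/rules/:

    from galaxy.jobs import JobDestination

    # referenced from job_conf.xml with:
    #   <destination id="dynamic_mem" runner="dynamic">
    #     <param id="type">python</param>
    #     <param id="function">dynamic_mem_rule</param>
    #   </destination>
    def dynamic_mem_rule(tool_id):
        # route big assembly jobs to a larger allocation (hypothetical)
        mem = 32 if "velvet" in tool_id else 4
        return JobDestination(
            runner="slurm",
            params={"nativeSpecification": "--mem=%dG" % mem},
        )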
Yeah - your discoveries are exactly right. 16.04 has added the ability
to configure a bunch of extra things on a per-job-destination basis
but this isn't included yet - probably should be though. Pulsar would
be a way to go - hopefully soon Pulsar will be included with Galaxy
directly and it will
What kind of database backend are you using (postgres, mysql, or the
default sqlite)?
If you are running many jobs - I would definitely encourage using postgres.
-John
On Sun, Mar 13, 2016 at 2:48 PM, Zuzanna K. Filutowska
wrote:
> Dear All,
>
> I am running newest
If you are running jobs as "the real user" with drmaa (using the chown
script) - you will probably want to set up a separate job destination
using the local runner for upload jobs. The upload tool uses files
that are created outside of Galaxy and these aren't modeled well.
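Concretely, something like this in job_conf.xml (destination ids here
are made up):

    <destinations default="real_user_drmaa">
      <destination id="local_dest" runner="local" />
      <destination id="real_user_drmaa" runner="drmaa" />
    </destinations>
    <tools>
      <!-- run uploads locally, not as the real user -->
      <tool id="upload1" destination="local_dest" />
    </tools>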
Does this help?
-John
There is no option to enable this automatically. There is no ability
to modify this via the API (though I created an issue here
https://github.com/galaxyproject/galaxy/issues/1842).
I guess what I would do to implement this is to refactor places where
trans.user.stored_workflow_menu_entries is
Hans is correct prior to Galaxy 16.01. The 16.01 release of Galaxy
added a monitor attribute to config/tool_conf.xml.sample (e.g.
<toolbox monitor="true">) that causes Galaxy to watch that file for
changes and reload the toolbox on modifications. No need to add this
to tool shed tool confs (and it would probably break
Forgot to cc the mailing list.
-John
On Thu, Feb 4, 2016 at 8:11 PM, John Chilton <jmchil...@gmail.com> wrote:
> Are you sure about the --file when calling Rscript?
>
> Here is the usage for my local version which does not expect an
> argument named --file:
>
> Usage: /pa
Nate has a branch of slurm-drmaa that allows specifying a --clusters
argument in the native specification; this can be used to target
multiple hosts.
More information can be found here:
https://github.com/natefoo/slurm-drmaa
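A destination using it might look like this (the cluster name and time
limit are made up):

    <destination id="secondary_cluster" runner="slurm">
      <param id="nativeSpecification">--clusters=cluster2 --time=24:00:00</param>
    </destination>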
Here is how Nate uses it to configure usegalaxy.org:
stalled this as root in
> /root/torque-6.0.0.1-1449528029_21cc3d8
>
> Thanks!
>
> On Wed, Jan 20, 2016 at 12:27 PM, John Chilton <jmchil...@gmail.com> wrote:
>>
>> Even if you just have two servers, I would strongly recommend you
>> setup a cluster
On Mon, Jan 25, 2016 at 3:26 PM, Peter Cock wrote:
> On Mon, Jan 25, 2016 at 11:33 AM, Peter Cock
> wrote:
>> Hello all,
>>
>> We're currently looking at changing our Galaxy setup to link user accounts
>> with Linux user accounts for better
Peter -
Nate and I are in agreement that the goal is to eliminate that
directory, so I don't want to put effort into automating its creation.
https://github.com/galaxyproject/galaxy/issues/1576
If you wish to open a PR that does this though, I would be happy to
merge this. The best place to
On Mon, Jan 25, 2016 at 3:44 PM, Peter Cock <p.j.a.c...@googlemail.com> wrote:
> Thanks John,
>
> On Mon, Jan 25, 2016 at 3:29 PM, John Chilton <jmchil...@gmail.com> wrote:
>> The script generated to call Galaxy is here:
>>
>> https://github.com/galaxyproje
list:list (as well as list:list:paired, list:list:list, etc...) can be
created via the API or using tools. As an example of the second - if
you had a tool that took in a dataset and split it into a list and
then mapped a list over that tool - Galaxy would produce a list:list
output from the
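On the API side, the payload for creating one nests new_collection
elements - roughly like this sketch (ids are placeholders):

    POST /api/histories/<history_id>/contents
    {
      "type": "dataset_collection",
      "collection_type": "list:list",
      "name": "nested example",
      "element_identifiers": [
        {"name": "outer1", "src": "new_collection",
         "collection_type": "list",
         "element_identifiers": [
           {"name": "inner1", "src": "hda", "id": "<encoded dataset id>"}
         ]}
      ]
    }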
Even if you just have two servers, I would strongly recommend you
set up a cluster distributed resource manager (DRM) like SLURM, PBS, or
Condor and ensure there is a shared file system between Galaxy
and the node running the jobs. You wouldn't even need to use the CLI
job runner - you could
I've been swamped with release related things but my intention is to
dig deeply into this. This is a very serious bug. A workaround is
just to disable beta workflow scheduling for now:
switch
force_beta_workflow_scheduled_min_steps=1
force_beta_workflow_scheduled_for_collections=True
to
We have heard no other reports of this and I have not replicated it
locally - is it possible this was a client-side caching error or
something that went away?
If not, can you retry updating to the latest commit of release_15.10 -
there have been a good number of bug fixes though I don't
I would review the following threads - they have hacks for doing this
and required warnings:
http://dev.list.galaxyproject.org/find-UUID-of-current-history-in-tool-XML-wrapper-td4667113.html
http://dev.list.galaxyproject.org/Possible-to-pass-hostName-to-a-tool-td4667108.html
-John
On Sun, Dec
dynamic_proxy_manage_proxy=True is a terrible hack - I would set it to
False and use supervisord if you have any inclination to do so. Like
sqlite or the local job runner, it is just an attempt to make sure
things work out of the box but it isn't that robust.
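For the record, the supervisord stanza I have seen for this looks
roughly like the following - the paths, port, and user are all
assumptions, so double check against the proxy documentation:

    [program:galaxy_nodejs_proxy]
    directory = /home/galaxy/galaxy
    command = node lib/galaxy/web/proxy/js/lib/main.js --sessions database/session_map.sqlite --ip 0.0.0.0 --port 8800
    autostart = true
    autorestart = unexpected
    user = galaxy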
-John
On Sat, Dec 19, 2015 at 1:20
Hey Brad,
What version of Galaxy are you on? Parameterized macros are pretty
new and will not work on older Galaxies.
-John
On Fri, Dec 4, 2015 at 3:36 AM, Langhorst, Brad wrote:
> Hi:
> I just installed hisat2 and the data manager from the toolshed ….
>
> Some macros seem
Despite sounding very similar, visualizations and display applications
are different things - display applications for the most part send or
expose Galaxy data to outside sources, while visualizations run on
the Galaxy server and the client side.
It seems like you are trying to configure a
Yeah - those wrappers are not going to work with 15.05. Parameterized
XML macros were added in 15.07 it looks like
(http://galaxy.readthedocs.org/en/master/releases/15.07_announce.html).
-John
On Thu, Dec 10, 2015 at 10:28 PM, Langhorst, Brad wrote:
> Hi:
>
> I’m at
Someone should create a tools-iuc issue for this; I will certainly
participate also. I don't know anything about RADSeq (or really much
about any kind of Seq) but I suspect I can find a way to help if there is a
TODO or ideas list that is tracked on github.
Thanks all,
-John
On Mon, Nov 23, 2015 at
s John, I found indeed the step order ids in the result from
> export_workflow_json. Helps a lot and now I won’t need to use soon-deprecated
> stuff.
>
> cheers,
> —
> Jorrit Boekel
> Proteomics systems developer
> BILS / Lehtiö lab
> Scilifelab Stockholm, Sweden
>
>
via pip, hmm...
-John
On Fri, Nov 20, 2015 at 12:58 AM, Tiago Antao <t...@popgen.net> wrote:
> On Thu, 19 Nov 2015 20:35:42 +0000
> John Chilton <jmchil...@gmail.com> wrote:
>
>> The latest development release of Galaxy which planemo targets by
>> default require
> NoseTestDiff-0.1-py2.7.egg
>
> Christian
>
> From: Peter Cock [p.j.a.c...@googlemail.com]
> Sent: Friday, November 20, 2015 12:41 PM
> To: Christian Brenninkmeijer
> Cc: John Chilton; galaxy-dev@lists.galaxyproject.org
> Subject: Re: [galaxy-dev] U
On Tue, Nov 17, 2015 at 8:21 PM, Tiago Antao <t...@popgen.net> wrote:
> On Tue, 17 Nov 2015 14:48:03 +0000
> John Chilton <jmchil...@gmail.com> wrote:
>
>> Thanks for the bug report. Somehow Galaxy isn't installing the
>> development wheels into the transient Galaxy's
The workflow API is the only place where we expose unencoded IDs and
we really shouldn't be doing it. I would instead focus on adapting to
using step_ids - they really should be more stable and usable. Order
index has lots of advantages
- You can build a request for a given workflow and apply it
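For example, with bioblend you can key inputs by step order index - a
sketch (URL, key, and ids are placeholders):

    from bioblend.galaxy import GalaxyInstance

    gi = GalaxyInstance("https://usegalaxy.example.org", key="...")
    # "0" is the order index of the workflow's input step
    gi.workflows.invoke_workflow(
        workflow_id,
        inputs={"0": {"src": "hda", "id": dataset_id}},
        history_id=history_id,
    )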
Thanks for the bug report. Somehow Galaxy isn't installing the
development wheels into the transient Galaxy's virtualenv. I've wiped
out my planemo caches and I can't reproduce this locally.
Can you send me the green log messages at the beginning of the test
command as well as the few lines after
installed tools and their deps?
>
> --
> David Trudgian Ph.D.
> Computational Scientist, BioHPC
> UT Southwestern Medical Center
> Dallas, TX 75390-9039
> Tel: (214) 648-4833
>
> Please contact biohpc-help@utsouthwestern with general BioHPC inquiries.
>
> -----Original M
d, but no success.
>> Code:
>>
>>> ... directory="output" />
>> It puts the samples into the collection correctly, but doesn't set a data
>> type.
>>
>> Weird enough: In t
on building up workflows from YAML can be found at
https://github.com/galaxyproject/galaxy/pull/1096 and
https://github.com/galaxyproject/bioblend/pull/143.
-John
On Tue, Nov 17, 2015 at 7:24 PM, Eric Rasche <e...@tamu.edu> wrote:
>
> On 11/17/2015 01:18 PM, John Chilton wrote:
>> The
Slowly trying to catch up on e-mail after a lot of travel in November.
I answered a variant of this to Damion directly; the most relevant
snippet was:
"
I would not symbolic link the
files though. I would just take the original collection and pipe it
into the next tool and add a dummy input to
> How does one obtain that, or convert the clear text version to an encrypted
> id?
>
> Thanks again,
>
> Bob
>
> - Forwarded Message -
> From: John Chilton <jmchil...@gmail.com>
> Cc: galaxy-dev@lists.galaxyproject.org
> Sent: Mon, 14 Sep 2015 13:44:01 - (UTC)
Albuquerque
<marcoalbuquerque@gmail.com> wrote:
> So in other words, there is no release that I can update my cloudman
> instance to?
>
> Marco
>
> On Sun, Oct 11, 2015 at 3:49 PM, John Chilton <jmchil...@gmail.com> wrote:
>>
>> Yeah - release_15.07 i
I'd keep working on pbs_python or switch to drmaa, which frequently
works for the same distribution. There is also a CLI runner that just
executes qsub and qstat commands directly on the localhost or can also
ssh into a remote host and execute these commands.
90501d2d9.
>
> Is there maybe a different revision I should be using?
>
> Marco
>
> On Sun, Oct 11, 2015 at 11:03 AM, John Chilton <jmchil...@gmail.com> wrote:
>>
>> This sounds like it corresponds to this issue
>> https://github.com/galaxyproject/galaxy/issues/776. Is
Depending on how you set things up - either Galaxy, Nginx, or Apache
are creating a file for the upload that is incoming - in the above
case I imagine it is this file -
/home/galaxy/wkdir/galaxy/database/tmp/tmpp6j83l. This file is outside
of Galaxy's data model for tools and jobs - it is just a
Thanks for the bug report. This has been fixed by Dan with PR 798 for
version 15.07 if you deploy Galaxy via Git.
https://github.com/galaxyproject/galaxy/pull/798
-John
On Mon, Jun 1, 2015 at 9:17 AM, Anmol Hemrom wrote:
> Hi,
>
> I was trying to enforce not to allow empty
Hello All,
This is just a reminder that the Galaxy Tools and Collections Remote
Hackathon will be tomorrow Thursday the 17th and Friday the 18th of
September. More details can be found at:
https://github.com/galaxyproject/tools-iuc/issues/239
This Github issue includes a big list of potential
>
> I like that plan - is the XSD at the point where we can declare
> it in Galaxy Tool definition XML files & validate them (via
> planemo or otherwise)?
>
> Peter
>
> On Wed, Sep 9, 2015 at 7:03 PM, John Chilton <jmchil...@gmail.com> wrote:
>>
Thanks,
> Bob
> - Original Message -
> From: John Chilton <jmchil...@gmail.com>
> To: rbrown1...@comcast.net
> Cc: galaxy-dev@lists.galaxyproject.org
> Sent: Wed, 09 Sep 2015 15:32:39 - (UTC)
> Subject: Re: [galaxy-dev] Opening Galaxy with a specified History Nam
Just reran this tool and it did indeed give me multiple outputs. I
think the problem is the files appear but the history needs to be
refreshed to see them. There is a little refresh icon at the top of
the history panel that is needed. This is a known limitation of the
multiple file output support
Thanks again for implementing this Christian and for outlining how to
maybe use it for testing. I'll try to add some support and
documentation to planemo for how to use it -
https://github.com/galaxyproject/planemo/issues/290 - it is not my
favorite long term 100% solution to this problem - but it
Updated the wiki with this information.
https://wiki.galaxyproject.org/Admin/Tools/ToolConfigSyntax#A.3Cversion_command.3E_tag_set.
We've got to get away from using the wiki page and start documenting this
stuff in the XSD (https://github.com/JeanFred/Galaxy-XSD) directly and
build the documentation
Thanks for the report, this is a sort of known issue
(https://trello.com/c/WodW2sLb,
https://wiki.galaxyproject.org/Develop/GSOC/2015Ideas#Easier_or_More_Robust_History_Imports_and_Exports).
I think at some point someone on the team will need to take some time
to work on history import and export, it
Not really sure what is happening here - does your galaxy.ini file have
the following sections:
[filter:proxy-prefix]
use = egg:PasteDeploy#prefix
prefix = /galaxy
[app:main]
filter-with = proxy-prefix
cookie_path = /galaxy
If yes, it might be best to share nginx configuration and your galaxy
Ryan,
Not sure what the problem is. I'm not sure the Galaxy that is
distributed with Biolinux is really setup to be used with PBS/Torque.
If you have pbs/torque installed and running properly and if you can
submit job scripts via the qsub command as whatever user Galaxy runs
under - the
So it wasn't in 15.01 - it was before that but the local job runner
was changed to set metadata as part of the job (I think probably
sometime in 2014). When it was setting metadata externally it might
have used slightly different dependency resolution strategies that
would result in samtools being
I don't think there is a way given the implementation. The name of the
association is repeatname_0|input1 but the | is treated as a special
symbol so it won't be used to get the parameter.
I have opened a PR to allow this:
https://github.com/galaxyproject/galaxy/pull/662
The syntax in the
On Fri, Aug 7, 2015 at 8:01 PM, Keith Suderman suder...@cs.vassar.edu wrote:
Greetings,
I started pulling Galaxy code from the dev branch a few months ago to take
advantage of the (then just emerging) dataset collections feature. However,
it is not clear to me from the latest release notes
My understanding is a bit limited but I believe the file is used to
track tool shed installed tools that used to belong in the Galaxy
distribution.
I don't think it has much use for a new Galaxy instance and no - I
don't believe there is a way from the command-line to sort of
synchronize this file
Are you running jobs on the head node or just Galaxy? If this is a
consistent problem and you are running jobs on the head node I would
disable that.
As to resuming just the failed jobs - this is not currently possible
but ideally it should be.
https://trello.com/c/lxVJy7fs
-John
On Mon, Jul