Dear Community,
The Galaxy Committers team is pleased to announce the release of Galaxy 19.09.
The release announcement for developers and admins can be found at
https://docs.galaxyproject.org/en/master/releases/19.09_announce.html
and user facing release notes are at
https://docs.galaxyproject.o
Hi Pietro,
Thanks for trying that runner!
Some background here regarding the volume directory issue:
https://github.com/galaxyproject/galaxy/pull/3946
https://github.com/galaxyproject/galaxy/pull/3946/files/2099c09f6ab5a8f5951d01cd6fa67681618e2dda#r112457231
It looks to be a known and documented
Does your tool_conf.xml file have monitor="true" on the toolbox tag
(e.g. -
https://github.com/galaxyproject/galaxy/blob/dev/config/tool_conf.xml.sample#L2)?
I guess the tool reload option will only work in a single process
setup - but the monitoring with watchdog should be sufficient to not
need
Sorry for the late response. This generally means that Galaxy thinks a
job script has been written and should be executable, but the file
system and operating system don't think the file is ready for
execution yet. This could be caused, for instance, by NFS caching of
file system permissions, I think.
On Thu, Nov 16, 2017 at 12:01 PM, Matthias Bernt wrote:
> Hi all,
>
> I tried to install velvetoptimizer (for the assembly tutorial in the GTN).
>
> It lists velvetoptimiser as requirement in the main xml and the
> tool_dependencies.xml (listing the installation steps). Now there seems to
> be a c
Well that is no good - I take it you have outputs_to_working_directory
set to True in your galaxy.ini? That option doesn't get a lot of
testing and it seems like it is interfering with
retry_metadata_internally = True. What version of Galaxy are you
running? Another workaround would be to try to f
Thanks for the report,
I suspect the breaking change would be this -
https://github.com/galaxyproject/galaxy/pull/4563. I added a bunch of
tests for these API changes that I thought would ensure backward
compatibility but perhaps there is some sort of breakage here. I
opened a PR in response to this.
Unfortunately Galaxy has no native support for scheduled jobs or
workflow executions like this. Pretty much all computation in Galaxy
is kicked off by user action or external interaction via the API. So
if you want to place the actual computation inside of Galaxy as a
workflow or a tool you can - t
Something like this is possible with some caveats. It is possible to
detect memory and walltime errors - not via regexes in the tools,
though, but in the job runner. So the SLURM runner implements
detection of out-of-memory errors and timeouts, I think - I don't
think most of the other runners do.
On Tue, Aug 8, 2017 at 8:01 AM, Matthias Bernt wrote:
> Dear dev-list,
>
> here an answer to my own question.
>
> The problem was that the tool "Group data by a column and perform aggregate
> operation on other columns." returned an error:
>
> Traceback (most recent call last): File
> "/gpfs1/data
There is an open issue for these paths not working with vanilla Galaxy
tools https://github.com/galaxyproject/galaxy/issues/1676.
-John
On Wed, Aug 9, 2017 at 10:03 AM, Peter Cock wrote:
> I'm puzzled now too, cross reference
>
> https://github.com/galaxy-iuc/standards/pull/46
>
> and the origin
The history state being 'ok' isn't a great metric for determining if
the workflow is complete. The history state essentially only tells you
if there are datasets of certain states in the history. At the start
of the workflow - the invocation may be backgrounded and getting ready
to run so there may
>
> hi,
>
> i have question when submitting jobs as real user
>
> defaults = "$galaxy_root:default_ro,$tool_directory:default_ro"
>
> singularity gives a WARNING since the home dir is already mounted
> "WARNING: Not mounting requested bind point (already mounted in
We've done a lot of work in Galaxy dev on this problem over the last
few years - I'm not sure how much concrete progress we have made.
Nate started it and I did some work at the end of last year. Just to
summarize my most recent work on this - in
https://github.com/galaxyproject/galaxy/pull/3291/c
Sorry for the late response - but earlier this week Björn Grüning and I
added various bits of Singularity support to Galaxy's development branch.
The following pull request added Singularity support in job running (
https://github.com/galaxyproject/galaxy/pull/4175) - here job destinations
may describ
(Once more with the list cc'ed - sorry for the duplicate message.)
The IDs are encoded before they are shared out via the API so that raw
integers aren't exposed. The ID the API returns is generally the value
in the id column for that table.
Eric Rasche kindly contributed a script to the developm
I've never seen anything like this - have you modified Galaxy's
requirements.txt file or are you using the one that ships with Galaxy?
On Fri, May 12, 2017 at 11:20 AM, Yip, Miu ki wrote:
> Hi all,
>
> When attempting to start up Galaxy, via the run.sh script, we’re getting the
> following error
Sorry for the delayed response on this - I just wanted to confirm that
yes Galaxy does this for uploaded files (if tools produce these line
endings we would not touch those). I believe this can be disabled by
clicking on the settings icon for a particular upload row in the
upload form and uncheckin
I'm sorry for the late response to this - I don't have any great advice
regarding debugging these tool_dependencies.xml issues. I think those
virtualenv tags are particularly brittle and I recommend switching to a
more manual approach to Python dependencies if you are going to continue
using tool_d
Do you have ``nginx_x_accel_redirect_base = /_x_accel_redirect`` set in
your Galaxy ini file?
Do you know what user nginx runs as?
If yes, do you know if it has access to /home/galaxy/Software/galaxy/
database/files/000/dataset_140.dat?
I'd su to become that nginx user and just make sure it can
Did you figure this out?
It looks like your jobs are not detecting Galaxy's virtualenv and so
they are getting a dependency (in this case mercurial) from Galaxy
root packages.
I'd review this document:
https://docs.galaxyproject.org/en/master/admin/framework_dependencies.html#galaxy-job-handlers
I've never seen anything like this before - I am sorry. Hopefully
someone else will have a more concrete idea.
Does that Conda executable work on the command line?
I'd try removing the whole Conda folder and rebuilding it. Is it
possible the install was corrupted at some point - maybe during
inst
Thanks for your interest in this topic. The collection operations
exist the way they do - as tools distributed with the core framework -
because they can't be expressed as normal tools and they utilize
abstractions that I don't consider public at this time (or really have
any confidence in making public).
Hello,
Thanks for working on this - Pulsar still hasn't quite caught up with
Galaxy in terms of support for Conda but we are getting there. So I
noticed two things today, while trying to recreate your problem, that
should help - the first is that Conda support in Pulsar
requires this PR (https:
I perhaps don't understand the question. As long as the "datasets" are
Galaxy datasets and they appear in the command line specified by the
"command" block of your tool - they should be transmitted. When you say
a "dataset list of files" - do you mean (1) a Galaxy collection, (2) a
set of files select
Sorry for the delay in responding - are you sure this is due to a
change in Galaxy and not a configuration difference between your
development and production server?
This line:
[Thu Aug 18 15:33:48 2016] [error] [client 192.168.29.12]
(13)Permission denied: xsendfile: cannot open file:
/softs/bio
So the PR that broke your tools is probably here:
https://github.com/galaxyproject/galaxy/pull/3364/files. That pull
request removed Galaxy from
the Python path of Galaxy tools - this gives tools a much cleaner
environment and prevents certain conflicts between Conda and Galaxy.
As part of that PR
I recently reworked the dependency resolver documentation and made
this explicit. I don't think the previous revision of that document
mentioned this important caveat at all. Thanks for Galaxy-ing!
Thanks,
-John
On Tue, Jan 24, 2017 at 7:01 AM, Peter Briggs
wrote:
> Hello Bjoern
>
> Yes it looks
At first glance I think the problem is that when you use the
``interpreter`` tag Galaxy has really broken logic for modifying the
command line to use that. ln works just like cp and mv. Those too
would be broken for this tool because of the interpreter attribute I
think. For this reason ``interprete
I don't know what the particular issue is - I do know that after a lot
of trial and error we were able to get Postgres + SSL working at MSI
before I left my previous job there - so there is hope.
I will say however, upgrading even to one newer Galaxy - namely 16.01
might solve this issue. We re-di
Glad this almost worked - I'm not sure what the problem is. I'd open
the file /cluster/galaxy/pulsar/pulsar/managers/util/drmaa/__init__.py
- and add some logging right before this line (return
self.session.jobStatus(str(external_job_id))):
log.info("Fetching job status for %s" % external_job_id)
Do you have access to the Galaxy logs? There is likely an error message
printed in them related to this - that would be helpful in diagnosing the
problem.
-John
On Fri, Oct 21, 2016 at 6:45 AM, Rachel Bell
wrote:
> Hi,
>
>
> I am looking to share a workflow I've created in Local Galaxy with an
Can you clarify this? Do you want access to the job id during the job
execution or do you want to see it after the job is complete? It is
currently available after the job is complete using the "i" (view
details) button. If you want access to the job id as part of the tool
execution, that is possible.
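For the "after the job is complete" case, a small bioblend sketch
(ids here are placeholders) - the dataset's provenance record includes
the id of the job that created it:

    from bioblend.galaxy import GalaxyInstance

    # Placeholders - substitute your instance, key, and ids.
    gi = GalaxyInstance(url="https://galaxy.example.org", key="YOUR_API_KEY")
    prov = gi.histories.show_dataset_provenance("HISTORY_ID", "DATASET_ID")
    print(prov["job_id"])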
I think this happens because Galaxy needs to have samtools available
on its PATH. Is this an older version of Galaxy? I feel like newer
versions maybe make this error more clear.
Hope this helps - hopefully we will very quickly solve this problem
by switching to pysam for these operations so an
It looks like your library conditional is wrapped in a section called
"basicOptions". For this reason when it appears in your
block you need to use $basicOptions.library.type for instance instead
of $library.type. Hope this helps and thanks for using Galaxy!
-John
On Wed, Oct 26, 2016 at 11:39 A
In general we discourage this - but there is some conversation about
this here - at least as it pertains to the instance URL and API key:
http://dev.list.galaxyproject.org/find-UUID-of-current-history-in-tool-XML-wrapper-td4667113.html
The workflow is more tricky - if you are sure the job is goin
Hello Zipho,
Per your request I have reviewed the linked issues and created a
list of relatively easy things that I think can be done to improve
the current situation here:
https://github.com/galaxyproject/galaxy/issues/2980#issuecomment-250777589.
There are a lot of bigger things that need to b
Hope you made some progress on this - my guess based on the stack
trace is that you have some subtle difference related to a parameter
named ``uncol`` in some tool that is contained within both instances.
On Fri, Jun 10, 2016 at 2:14 PM, Tony Schreiner
wrote:
> We have 2 galaxy instances running
Any follow up on this? I have never seen this - but it would be worth
specifying a full absolute path for conda_prefix in galaxy.ini and
seeing if that fixes it. If it does - I can open a PR that makes sure this
variable is always an absolute path.
On Thu, Sep 1, 2016 at 11:37 PM, Léo Biscassi wrote
Is this something inside the application or external to it? If it is
external to the application - I'd probably just target the database
directly (Postgres or MySQL). The internals of Galaxy use sqlalchemy
for the most part (http://www.sqlalchemy.org/). I've done a lot with
the database but I've ne
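If you do target the database directly, a sketch with plain
sqlalchemy (the connection string is a placeholder; job is one of
Galaxy's core tables):

    from sqlalchemy import create_engine, text

    # Placeholder connection string - point this at Galaxy's database.
    engine = create_engine("postgresql://galaxy:secret@localhost/galaxy")
    with engine.connect() as conn:
        # Count Galaxy jobs per state straight from the job table.
        for state, count in conn.execute(
            text("SELECT state, count(*) FROM job GROUP BY state")
        ):
            print(state, count)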
This usually means there is an error of some kind - perhaps a missing
tool. Sorry it is not more transparent - I'd check the Galaxy logs for
an error message if possible or open the JavaScript console of your
web browser
(https://developers.google.com/web/tools/chrome-devtools/debug/console/?hl=en
There is currently no UI to do this unfortunately - there is API support
on the backend for this though. I'd encourage opening a Github issue
(https://github.com/galaxyproject/galaxy/issues/new) to request this
be added to the tool form.
Thanks,
-John
On Wed, Aug 31, 2016 at 8:34 AM, Katherine Beaul
nch of things that are opaque to a separate
> script. Would it be a useful goal to have a richer galaxy-tool interface that
> could make all information available to the tool wrapper visible to my Python
> script? One way to do that would just be to bundle everything up in JSON and
>
Yup - thanks for the bug report. I have used your rna example to build
a minimal-ish example to fit into Galaxy's test tools framework here
https://github.com/galaxyproject/galaxy/pull/2795. I also tested in
16.01 and it worked - so this clearly broke in 16.04. I'll see if I
can track it down and w
Just to provide a little more detail on the collection point... Daniel
Blankenberg presented a talk at the Galaxy Community Conference on
metagenomics that talked a lot about "large" collection handling in
Galaxy - https://gcc16.sched.org/event/5Y0M/metagenomics-with-galaxy.
I think his rule of thu
Thanks for the questions - I have tried to revise the planemo docs to
be more explicit about what collection identifiers are and where they
come from
(https://github.com/galaxyproject/planemo/commit/a811e652f23d31682f862f858dc792c1ef5a99ce).
http://planemo.readthedocs.io/en/latest/writing_advanced
- It needs to be a tab, not spaces, between the fields in that loc file
- can you confirm that it is a tab? (Many editors will implicitly
replace tabs with spaces.)
- You need to restart Galaxy after updating that file - can you
confirm it has been restarted?
-John
On Fri, Jul 22, 2016 at 11:34 AM,
I can't confirm it is going to work without negative repercussions - I
can only say that I cannot think of any non-obvious problems that
would result from doing this. It seems like it is going to be the
right thing to do in your case (but no promises :)).
If it works, it would be super awesome if
I believe shipping a mapping file with Galaxy (& galaxy-lib) is a good
way to go - there is an existing github issue on this:
https://github.com/galaxyproject/galaxy/issues/1927
Hopefully this could be made generic enough to be useful with other
resolvers such as environment modules and future re
This information is not readily accessible during tool command line
generation. You can access some extra data with the $__user__ object -
it is the underlying model object. Are you hacking the session
information with extras or do you want stuff already associated with
the Galaxy session such as c
I think newer versions of Galaxy give a better error message - I am
pretty sure the problem is you are missing samtools. It needs to be
available on the PATH for Galaxy if you use bam data.
-John
On Fri, Jun 3, 2016 at 1:00 PM, Anthony Underwood
wrote:
>
>
> Jobs that I run on a local Galaxy ins
It seems likely you are correct: either a catch for this hasn't been
implemented in Galaxy or, more likely, this is a limitation of the
DRMAA library for SGE. Ideally the job state would be coming back as FAILED
or something like that:
https://github.com/galaxyproject/galaxy/blob/dev/lib/galaxy/jobs/run
It is probably worth looking at this Galaxy issue - this may have
little to do with Docker, or the Docker server might have a different
time configured than a local one? History updating is broken for many
clients in 16.04 and I am unsure whether the intention is to fix it or
not.
https://github.com/ga
This is one of the reasons (though not the only one) I generally
discourage putting datatypes in the tool shed - Galaxy isn't very
clear about what interface it provides to datatypes and it would seem
very difficult from my perspective to maintain that interface and grow
Galaxy at the same time.
This
Resending because I forgot to reply-all:
Older versions of Planemo should just respect an 'export
TMPDIR="/new/tmp"' before calling planemo. I think recently we
introduced a workaround for some conda problems
(https://github.com/galaxyproject/planemo/pull/460) that causes this
not to be respected
I don't think Galaxy and Planemo can co-exist in the same conda
environment - because planemo requires galaxy-lib
(https://github.com/galaxyproject/galaxy-lib) which cannot be
installed in Galaxy's environment.
I don't understand the intricacies of what you are trying to do or why
it worked in
Thanks for the report and the workaround. I have opened a Pull Request
to make this truncation the default behavior in Galaxy 16.04.
https://github.com/galaxyproject/galaxy/pull/2265
-John
On Thu, Apr 28, 2016 at 5:43 AM, Tiziano Flati wrote:
> Solved.
>
> For your information: the problem was
Sorry for the lack of a response on this - I think this fell through
the cracks because no one has worked directly on the FTP import
backend for a long time. I did have need for a configuration value to
prevent Galaxy from deleting files, as you requested, and implemented
that today.
https://github.
Did you ever figure this out? I cannot think of anything that would
cause this - if the job conf is working with Galaxy it should work
with planemo. Are you sure Galaxy was actually using the configuration
and not just the local runner?
Is it possible you've modified run.sh to modify the environme
This mapping is not automatic - you need to write a small Python
method to take these parameters specified by the user and map them to
your cluster parameters. These methods are called dynamic job
destinations and described on the wiki at:
https://wiki.galaxyproject.org/Admin/Config/Jobs#Dynamic_D
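As a rough illustration (the function name, parameter name, and
destination details here are all made up - the real rule goes in a
file under lib/galaxy/jobs/rules/ and is referenced from your job
configuration):

    from galaxy.jobs import JobDestination

    def route_by_requested_memory(app, tool, job):
        # Hypothetical rule: read a user-supplied tool parameter named
        # "memory" and translate it into a SLURM native specification.
        # Assumes a "slurm" runner is defined in the job configuration.
        params = job.get_param_values(app)
        mem_gb = int(params.get("memory", 4))
        return JobDestination(
            id="slurm_dynamic",
            runner="slurm",
            params={"nativeSpecification": "--mem=%dG" % mem_gb},
        )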
Yeah - your discoveries are exactly right. 16.04 has added the ability
to configure a bunch of extra things on a per-job-destination basis
but this isn't included yet - probably should be though. Pulsar would
be a way to go - hopefully soon Pulsar will be included with Galaxy
directly and it will b
What kind of database backend are you using (postgres, mysql, or the
default sqlite)?
If you are running many jobs - I would definitely encourage using postgres.
-John
On Sun, Mar 13, 2016 at 2:48 PM, Zuzanna K. Filutowska
wrote:
> Dear All,
>
> I am running newest version of Galaxy with PBS and
If you are running jobs as "the real user" with drmaa (using the chown
script) - you will probably want to setup a separate job destination
using the local runner for upload jobs. The upload tool uses files
that are created outside of Galaxy and these aren't modeled well.
Does this help?
-John
There is no option to enable this automatically. There is no ability
to modify this via the API (though I created an issue here
https://github.com/galaxyproject/galaxy/issues/1842).
I guess what I would do to implement this is to refactor places where
trans.user.stored_workflow_menu_entries is re
Can you share your nginx config? Perhaps with a gist or something.
-John
On Mon, Feb 29, 2016 at 12:35 PM, Preussner, Jens
wrote:
> Hi all,
>
>
>
> I set up the data upload using nginx as described in the wiki
> (https://wiki.galaxyproject.org/Admin/Config/nginxProxy, using the
> subdirectory /g
I have fixed the path to the filter directory on the wiki, have you
copied lib/galaxy/tools/toolbox/filters/examples.py.sample to
lib/galaxy/tools/toolbox/filters/examples.py?
-John
On Fri, Feb 19, 2016 at 4:22 PM, SAPET, Frederic
wrote:
> Hello
>
> I'm trying to set up this feature :
> https://
Hans is correct prior to Galaxy 16.01. The 16.01 release of Galaxy
added a monitor attribute to config/tool_conf.xml.sample (e.g.
<toolbox monitor="true">) that causes Galaxy to watch that file for
changes and reload the toolbox on modifications. No need to add this
to tool shed tool confs (and it would probably break the
Given there have been no objections, the 16.01 release notes
contained a deprecation note, and no one has complained - I just wanted
to follow up on this thread with the announcement that 16.01 will be
the last release of Galaxy to support Python 2.6. Likewise the next
releases of pulsar and plan
Peter -
My plans for pre-GCC workflow work are sort of outlined in this issue:
https://github.com/galaxyproject/planemo/issues/408 (I want an
abstract for GCC and BOSC like "Planemo – A Scientific Workflow SDK").
I've been doing most of my work out of this branch
https://github.com/galaxyproject/
Forgot to cc the mailing list.
-John
On Thu, Feb 4, 2016 at 8:11 PM, John Chilton wrote:
> Are you sure about the --file when calling Rscript?
>
> Here is the usage for my local version which does not expect an
> argument named --file:
>
> Usage: /path/to/Rscript [--optio
Yes - this is available under api/tools/<tool_id> if you pass the
query parameter io_details=True.
(e.g. https://usegalaxy.org/api/tools/cat1?io_details=True)
I would recommend using bioblend if possible; in this case the
operation is available as the ``show_tool`` method, which also exposes
the io_details option.
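Something along these lines (URL and key are placeholders):

    from bioblend.galaxy import GalaxyInstance

    gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")
    tool = gi.tools.show_tool("cat1", io_details=True)
    print(tool["inputs"])   # parameter definitions
    print(tool["outputs"])  # declared outputs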
; /root/torque-6.0.0.1-1449528029_21cc3d8
>
> Thanks!
>
> On Wed, Jan 20, 2016 at 12:27 PM, John Chilton wrote:
>>
>> Even if you just have two servers, I would strongly recommend you
>> setup a cluster distributed resource manager (DRM) like SLURM, PBS, or
>> Co
Nate has a branch of slurm-drmaa that allows specifying a --clusters
argument in the native specification; this can be used to target
multiple hosts.
More information can be found here:
https://github.com/natefoo/slurm-drmaa
Here is how Nate uses it to configure usegalaxy.org:
https://github.com
Peter,
We would like to replace all the mako with JS; if I was going to put
a bunch of effort into admin pages I'd start by reworking what was
there to use JavaScript and the API. That is me however - I have lots
of time to put into Galaxy fundamentals and refactoring. This is more
work and I am
On Mon, Jan 25, 2016 at 3:44 PM, Peter Cock wrote:
> Thanks John,
>
> On Mon, Jan 25, 2016 at 3:29 PM, John Chilton wrote:
>> The script generated to call Galaxy is here:
>>
>> https://github.com/galaxyproject/galaxy/blob/dev/lib/galaxy/datatypes/metadata.py#L838
>
Peter -
Nate and I are in agreement that the goal is to eliminate that
directory, so I don't want to put effort into automating its creation.
https://github.com/galaxyproject/galaxy/issues/1576
If you wish to open a PR that does this though, I would be happy to
merge this. The best place to ensu
values for depending on the job
destination.
You are going to want a smaller hack that can be backported just to
run the local job runner when that option is configured huh?
-John
On Mon, Jan 25, 2016 at 3:38 PM, Peter Cock wrote:
> On Mon, Jan 25, 2016 at 3:33 PM, John Chilton wrote:
>> On Mon
On Mon, Jan 25, 2016 at 3:26 PM, Peter Cock wrote:
> On Mon, Jan 25, 2016 at 11:33 AM, Peter Cock
> wrote:
>> Hello all,
>>
>> We're currently looking at changing our Galaxy setup to link user accounts
>> with Linux user accounts for better cluster integration (running jobs as the
>> actual user
The script generated to call Galaxy is here:
https://github.com/galaxyproject/galaxy/blob/dev/lib/galaxy/datatypes/metadata.py#L838
The job template stuff that sets up the environment the job runs in is here:
https://github.com/galaxyproject/galaxy/blob/dev/lib/galaxy/jobs/runners/util/job_scr
list:list (as well as list:list:paired, list:list:list, etc...) can be
created via the API or using tools. As an example of the second - if
you had a tool that took in a dataset and split it into a list and
then mapped a list over that tool - Galaxy would produce a list:list
output from the individ
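As a sketch of the first route - creating a list:list directly
through the API with bioblend (URL, key, and ids are placeholders):

    from bioblend.galaxy import GalaxyInstance
    from bioblend.galaxy.dataset_collections import (
        CollectionDescription,
        CollectionElement,
        HistoryDatasetElement,
    )

    gi = GalaxyInstance(url="https://galaxy.example.org", key="YOUR_API_KEY")
    description = CollectionDescription(
        name="nested example",
        type="list:list",
        elements=[
            CollectionElement(
                name="outer1",
                type="list",
                elements=[
                    HistoryDatasetElement(name="inner1", id="DATASET_ID_1"),
                    HistoryDatasetElement(name="inner2", id="DATASET_ID_2"),
                ],
            ),
        ],
    )
    gi.histories.create_dataset_collection("HISTORY_ID", description)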
Even if you just have two servers, I would strongly recommend you
set up a cluster distributed resource manager (DRM) like SLURM, PBS, or
Condor and ensure there is a shared file system between Galaxy
and the node running the jobs. You wouldn't even need to use the CLI
job runner - you could j
Peter - thanks for sharing this.
ansible-galaxy-extras and ansible-galaxy were just created by
different people at different times and I don't think a lot of thought
went into rationalizing variable names across projects. Indeed even
within ansible-galaxy-extras the variable names aren't very
consistent.
So job metrics are completely different from what you described here.
Job metrics are enabled by default and can be configured fairly
directly by modifying config/job_metrics_conf.xml. New plugins are
relatively easy to write and can be added as python files to
lib/galaxy/jobs/metrics/instrumenters
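For reference, a rough sketch of what such a plugin can look like -
the method names follow the existing instrumenters (e.g. hostname.py),
but treat the details as illustrative rather than definitive; enabling
it would mean adding a matching tag to job_metrics_conf.xml:

    import os

    from galaxy.jobs.metrics import formatting
    from galaxy.jobs.metrics.instrumenters import InstrumentPlugin


    class UptimePlugin(InstrumentPlugin):
        """Illustrative plugin recording the node's uptime for each job."""

        plugin_type = "uptime"
        formatter = formatting.JobMetricFormatter()

        def pre_execute_instrument(self, job_directory):
            # Shell snippet injected before the tool's command runs.
            return "uptime > '%s'" % self.__path(job_directory)

        def job_properties(self, job_id, job_directory):
            # Read the captured value back when the job finishes.
            with open(self.__path(job_directory)) as f:
                return {"uptime": f.read().strip()}

        def __path(self, job_directory):
            return os.path.join(job_directory, "__instrument_uptime")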
I've been swamped with release related things but my intention is to
dig deeply into this. This is a very serious bug. A workaround is
just to disable beta workflow scheduling for now:
switch
force_beta_workflow_scheduled_min_steps=1
force_beta_workflow_scheduled_for_collections=True
to
force_
We have heard no other reports of this and I have not replicated it
locally - is it possible this was like a client-side caching error or
something that went away?
If not, can you retry updating to the latest commit of release_15.10 -
there have been a good number of bug fixes though I don't recall
I would review the following threads - they have hacks for doing this
and the required warnings:
http://dev.list.galaxyproject.org/find-UUID-of-current-history-in-tool-XML-wrapper-td4667113.html
http://dev.list.galaxyproject.org/Possible-to-pass-hostName-to-a-tool-td4667108.html
-John
On Sun, Dec 27
dynamic_proxy_manage_proxy=True is a terrible hack, I would set it to
False and use supervisord if you have any inclination to do so at all. Like
sqlite or the local job runner, it is just an attempt to make sure
things work out of the box but it isn't that robust.
-John
On Sat, Dec 19, 2015 at 1:20 AM
retrying jobs that fail due to transient reasons.
-John
On Mon, Aug 3, 2015 at 11:23 PM, Alexander Vowinkel
wrote:
> Cluster has workers, jobs running on main node is disabled.
>
> 2015-08-03 14:44 GMT-05:00 John Chilton :
>>
>> Are you running jobs on the head node or just
Indeed, Galaxy does need that for sure. Eric has created an issue and
I think Dan has some work in progress.
https://github.com/galaxyproject/galaxy/issues/1188
-John
On Thu, Dec 10, 2015 at 10:37 PM, Langhorst, Brad wrote:
> Hi John:
>
> I wonder if galaxy tools need a minimum galaxy version f
Yeah - those wrappers are not going to work with 15.05. Parameterized
XML macros were added in 15.07 it looks like
(http://galaxy.readthedocs.org/en/master/releases/15.07_announce.html).
-John
On Thu, Dec 10, 2015 at 10:28 PM, Langhorst, Brad wrote:
> Hi:
>
> I’m at release_15.05
>
> Some of thi
Despite sounding very similar, visualizations and display applications
are different things - display applications for the most part send or
expose Galaxy data to outside sources, while visualizations run on the
Galaxy server and the client side.
It seems like you are trying to configure a visualization
Hey Brad,
What version of Galaxy are you on? Parameterized macros are pretty
new and will not work on older Galaxies.
-John
On Fri, Dec 4, 2015 at 3:36 AM, Langhorst, Brad wrote:
> Hi:
> I just installed hisat2 and the data manager from the toolshed ….
>
> Some macros seem not to be expanded
Someone should create a tools-iuc issue for this, I will certainly
participate also. I don't know anything about RADSeq (or really much about
any kind of Seq) but I suspect I can find a way to help if there is a
TODO or ideas list that is tracked on github.
Thanks all,
-John
On Mon, Nov 23, 2015 at 5:
0100
>
> But fails at or BEFORE
>
> commit b19e71ec465c7145840acf684f8f09eeebb99b5a
> Author: Nicola Soranzo
> Date: Mon Nov 16 14:57:53 2015 +
>
> Christian
>
> From: Peter Cock [p.j.a.c...@googlemail.com]
> Sent: Friday, N
__
> From: Peter Cock [p.j.a.c...@googlemail.com]
> Sent: Friday, November 20, 2015 12:41 PM
> To: Christian Brenninkmeijer
> Cc: John Chilton; galaxy-dev@lists.galaxyproject.org
> Subject: Re: [galaxy-dev] Unable to up Planemo against latest dev
>
> Whi
lling via pip, hmm...
-John
On Fri, Nov 20, 2015 at 12:58 AM, Tiago Antao wrote:
> On Thu, 19 Nov 2015 20:35:42 +0000
> John Chilton wrote:
>
>> The latest development release of Galaxy which planemo targets by
>> default requires virtualenv to be available. Can you verify t
rder ids in the result from
> export_workflow_json. Helps a lot and now I won’t need to use soon-deprecated
> stuff.
>
> cheers,
> —
> Jorrit Boekel
> Proteomics systems developer
> BILS / Lehtiö lab
> Scilifelab Stockholm, Sweden
>
>
>
>> On 19 Nov 2015,
Tue, Nov 17, 2015 at 8:21 PM, Tiago Antao wrote:
> On Tue, 17 Nov 2015 14:48:03 +
> John Chilton wrote:
>
>> Thanks for the bug report. Somehow Galaxy isn't installing the
>> development wheels into the transient Galaxy's virtualenv, I've wiped
>> o
The workflow API is the only place where we expose unencoded IDs and
we really shouldn't be doing it. I would instead focus on adapting to
using step_ids - they really should be more stable and usable. Order
index has lots of advantages
- You can build a request for a given workflow and apply it t
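For example, with a reasonably recent bioblend an invocation request
can be keyed by order index (ids are placeholders):

    from bioblend.galaxy import GalaxyInstance

    gi = GalaxyInstance(url="https://galaxy.example.org", key="YOUR_API_KEY")
    gi.workflows.invoke_workflow(
        "WORKFLOW_ID",
        inputs={"0": {"src": "hda", "id": "DATASET_ID"}},
        inputs_by="step_index",
        history_id="HISTORY_ID",
    )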
and their deps?
>
> --
> David Trudgian Ph.D.
> Computational Scientist, BioHPC
> UT Southwestern Medical Center
> Dallas, TX 75390-9039
> Tel: (214) 648-4833
>
> Please contact biohpc-help@utsouthwestern with general BioHPC inquries.
>
> -Original Message-
>