Darren,
There isn't a configuration option for controlling this, but the quick fix for
your own use while developing tools would be to comment out the lines in the
cleanup method of the JobWrapper that handle removal of the directory.
The change should be near line 641 of
On Jan 31, 2011, at 11:34 AM, Peter Cock wrote:
I may have found a bug though - I have several workflows using my SignalP
wrapper, which has a select parameter for model type. This has been marked
as to be set at runtime in the workflow, but the workflow step using
this tool is
being shown
Moving this to the galaxy-dev mailing list since it's about a local Galaxy
installation.
The error here is likely a misconfigured tool_data_table_conf.xml. The format
of this file has changed, see the tool_data_table_conf.xml.sample for an
example of the new structure. Let me know if your
I should have held onto that email a second longer. Also see the bowtie .loc
files referenced in your tool_data_table_conf.xml, likely in
tool-data/bowtie_indices.loc (and the corresponding .sample) to see that they
match the new format.
-Dannon
On Feb 23, 2011, at 2:38 PM, Dannon Baker
Darren,
While this is not currently possible, I'm finishing up a first pass
on a workflow API that will allow this sort of interaction and hope to have an
early version available by the end of this week. I can update you when that
has been committed.
-Dannon
On Feb 24, 2011, at
On Mar 16, 2011, at 5:43 PM, Darren Brown wrote:
However, when I run a workflow from the command line:
python /mnt/galaxy/galaxy_dist/scripts/api/workflow_execute.py
api-key url/api/workflows 38247d270c7cb1bb
'hist_id=38247d270c7cb1bb' '1=hda=30fc17ce78176bfb'
My hunch is that the step id
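For reference, those positional arguments end up in a JSON payload POSTed to /api/workflows. Here's a sketch of the translation the script performs (based on the 2011-era scripts/api/workflow_execute.py; key names may differ in other revisions):

```python
def build_payload(workflow_id, history, *step_args):
    """Assemble the JSON body POSTed to /api/workflows.

    step_args entries look like "1=hda=30fc17ce78176bfb", i.e.
    <step id>=<source type>=<encoded dataset id>.
    """
    payload = {"workflow_id": workflow_id, "history": history, "ds_map": {}}
    for arg in step_args:
        step_id, src, dataset_id = arg.split("=")
        payload["ds_map"][step_id] = {"src": src, "id": dataset_id}
    return payload

payload = build_payload("38247d270c7cb1bb", "hist_id=38247d270c7cb1bb",
                        "1=hda=30fc17ce78176bfb")
```

The truncated hunch above is likely about the ds_map keys: they need to be actual workflow step ids rather than ordinal positions.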
Martin,
There were a few issues with postgres 9 that were fixed in changeset 5074, what
revision is your galaxy instance running? If you are running something newer
than that, could you explain more explicitly the steps to reproduce the server
error?
Also, I've moved the thread to galaxy-dev
I'd be happy to review the scan results, feel free to send them to me. I'll
share it with the rest of the team as well.
Thanks!
-Dannon
On Mar 19, 2011, at 11:37 AM, Paul, Rohit (NIH/NCI) [C] wrote:
We recently ran a Nessus vulnerability scan against our server that hosts a
local
Hi Kostas,
Workflows are saved in the database. If you're looking for an external
representation you can generate one by going to 'Download or Export' in an
individual workflow's menu in the main workflows list. The download option
there gives a JSON representation of the workflow that can
Juan,
What version of freebayes are you running? The freebayes configuration in the
galaxy repository was written with the .4 series in mind, and it appears that
the options have changed with the .6 series. We'll update the tool config, but
in the meanwhile you could probably get it working
BGZF ERROR: unable to open file 1.0
Could not open input BAM files
Juan
On Apr 1, 2011, at 1:53 PM, Dannon Baker wrote:
Juan,
What version of freebayes are you running? The freebayes configuration in
the galaxy repository was written with the .4 series in mind
Vasu,
No, a paid service is not at all required. jjw14's solution in that thread
predates native FTP transfer to galaxy, and that sort of intermediate paid host
is not necessary.
Find detailed instructions here:
https://bitbucket.org/galaxy/galaxy-central/wiki/UploadViaFTP
-Dannon
On Apr
To be clear, you were able to connect to the galaxy server using WinSCP in
plain FTP mode, and had no errors uploading the files?
-Dannon
On Apr 19, 2011, at 9:37 AM, vasu punj wrote:
Dannon,
I used WinSCP to upload the files but don't see them in the Galaxy history of
uploaded files or even
From: vasu punj pu...@yahoo.com
Subject: Re: [galaxy-dev] FTP upload of data
To: Dannon Baker dannonba...@me.com
Date: Tuesday, April 19, 2011, 8:37 AM
Taka,
Would you mind sharing the exact command you used to call
workflow_execute.py?
It does sound like you're doing something very similar to what
scripts/api/example_watch_folder.py does. That script also uploads from
the file system to a data library and subsequently executes a workflow
Dave,
What revision of galaxy are you running (hg tip)? We've made a few
changes quite recently to running workflows, though I haven't seen any
errors like this yet.
Also, what's different about the two workflows (history destinations,
multiple-inputs, rename actions, etc.)?
-Dannon
On
Taka,
Great, I'm glad the workflow api is now working for you. I'm not sure what you
mean with regards to work I'm doing for decrypting hda to actual id, but I will
say that a History/Dataset API is something many people have asked about and I
imagine that it will get done sooner rather than
Leandro,
I see what you mean, I misunderstood your original goal. There currently isn't
a way to execute single tools in this fashion.
It isn't exactly straightforward, but you could construct a workflow that
consisted of two steps-- an Input Dataset step, and whatever tool you wanted to
Without seeing an error message I'll guess that issue is that there is no C
interpreter. Your program is a compiled executable.
Try the following and let me know (including any error messages) if it doesn't
work:
<command>shortest_path $infile1 $output</command>
-Dannon
On Apr 22, 2011,
After logging out and then using the back button, what is it that you're able
to do or that you're worried about? Any action on pages you click 'back' to
get to should now present a You must be logged in to Galaxy... error message.
To your question about disabling it, back button
You can certainly adjust the two values as needed for your particular
environment. Also check in your universe_wsgi.ini for:
set_metadata_externally = True
Something else you might want to consider, and what we do to handle the load on
the server at http://usegalaxy.org, is starting more
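The message is cut off by the archive, but the approach referred to is running multiple Galaxy server processes behind a proxy. A rough sketch of the universe_wsgi.ini sections involved (section names and ports here are illustrative; see the Galaxy scaling documentation for the authoritative layout):

```ini
; one process per [server:...] section; a proxy balances across the web ones
[server:web0]
use = egg:Paste#http
port = 8080

[server:web1]
use = egg:Paste#http
port = 8081

[server:manager]
use = egg:Paste#http
port = 8079
```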
Not currently, though the API is being continually extended with new features
as the need arises.
-Dannon
On Apr 27, 2011, at 9:40 AM, Leandro Hermida wrote:
Hi Dannon,
Thanks for your replies and advice, this leads me to a question... is it
possible to execute a tool job via the Galaxy
Reema,
Apologies for the slow response. This recent reply (see below) to another
message on the mailing list from Ross deals with java tools in galaxy and might
be of use to you. See also the tool integration tutorial here:
https://bitbucket.org/galaxy/galaxy-central/wiki/AddToolTutorial
Rohit,
Apologies for the slow response. Yes, currently email notification is only
available as a step action in workflows, as you state below.
Regarding what you want to do for your project, there isn't currently a way to
share a workflow server-wide in a fashion that doesn't require an
Thanks for reporting this Assaf, I should have a fix committed shortly.
-Dannon
On Apr 28, 2011, at 3:43 PM, Assaf Gordon wrote:
Hi,
There's a small bug with the hide dataset button in the workflow editor -
once any dataset is marked as output (by clicking on the star icon),
there's no
Frederick,
The API is distributed along with galaxy. To enable it, set the following in
your universe_wsgi.ini:
enable_api = True
Once that's enabled, each user can create an API key using the API Keys
option in the user menu of the main masthead. For additional information, see
As to how to figure out the id for copying, there isn't an exposed method for
doing this. If you're willing to do a little work to decode it, you can do the
following in python.
For the encoded id (the b701da857886499b portion that you see in a dataset
url
Dayananda,
The tool data directory should have been created upon running galaxy the first
time. Does galaxy user have write access to this filesystem? The first thing
to check would be to inspect the galaxy directory and see what's actually
there. Does '
Shaun,
Currently, no, though this is an interesting idea. I've created an enhancement
request in bitbucket that you can follow here:
https://bitbucket.org/galaxy/galaxy-central/issue/613/conditional-and-where-tags-multiple
Thanks!
-Dannon
On Jul 5, 2011, at 12:03 PM, SHAUN WEBB wrote:
Chan,
When you go directly to the URL specified for importing (the for_direct_import
one in the error message from your screenshot) what do you see? Is the
workflow listed as being accessible, when you go to Download or Export?
-Dannon
On Jul 18, 2011, at 2:45 AM, chan fook mun wrote:
Hi
Not currently, though they should be. I'll add this shortly.
-Dannon
On Jul 25, 2011, at 6:22 PM, Duddy, John wrote:
I am doing an integration with Galaxy, and part of what I need to do is
trigger workflows. To do that, I need to list them.
I can do this if the user owns the workflows,
Chaolin,
You guessed correctly as to why we implemented this: getting exact line counts
on very large files is a time-consuming process. You can still get an exact
line count using the Line/Word/Character count tool in the Text Manipulation
section.
If you're interested in the way it
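The explanation is cut off above; the general technique (a sketch of the idea, not Galaxy's exact code) is to read only the first chunk of the file and extrapolate a total from the average line length:

```python
import os
import tempfile

def estimate_line_count(path, sample_bytes=65536):
    """Estimate total lines from the average line length in the first chunk."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        chunk = f.read(sample_bytes)
    if size <= len(chunk):               # small file: the count is exact
        return chunk.count(b"\n")
    lines_in_chunk = chunk.count(b"\n") or 1
    avg_line_len = len(chunk) / lines_in_chunk
    return int(size / avg_line_len)

# demo on a small generated file (exact for files smaller than the sample)
with tempfile.NamedTemporaryFile(delete=False, suffix=".txt") as tmp:
    tmp.write(b"col1\tcol2\n" * 1000)
demo_count = estimate_line_count(tmp.name)
os.unlink(tmp.name)
```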
This should not lock. The job(s) for the workflow will be queued until they're
able to execute, but the call will return.
-Dannon
On Aug 2, 2011, at 4:17 PM, Duddy, John wrote:
I’d like to have an external program that registers a file by absolute path
(link, not upload) in a data library,
Holger,
There isn't currently a galaxy-wide configuration for the from header address.
Most places use frm = 'galaxy-noreply@%s' % host, where host is the fqdn of the
box, as you've seen. I took a quick look, and it does look like everything
that sends mail uses util.send_mail, so you could
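As described, the senders all build a per-call from address before handing off to util.send_mail. A sketch of the conventional pattern (the frm line is quoted from the message above; the fqdn lookup is an assumption):

```python
import socket

def noreply_from_header():
    """Build the conventional noreply sender used by most Galaxy deployments."""
    host = socket.getfqdn()              # fqdn of the box, as noted above
    frm = 'galaxy-noreply@%s' % host
    return frm
```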
The 'step' in question is the actual workflow step id, since ordering of steps
in a workflow is flexible and might be changed without realizing it by moving
steps around in the editor. The easiest way to retrieve this identifier is to
use the API and view the workflow in question.
Here's an
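The example is cut off by the archive; the idea is to GET /api/workflows/<encoded id> and read the step identifiers out of the response. A sketch (the response shape here is an assumption based on the 2011-era API, which listed input steps keyed by step id; verify with a GET against your own instance):

```python
import json
from urllib.request import urlopen

def workflow_input_step_ids(base_url, workflow_id, api_key):
    """GET a workflow's details and return the ids of its input steps."""
    url = "%s/api/workflows/%s?key=%s" % (base_url, workflow_id, api_key)
    with urlopen(url) as resp:
        return sorted(json.load(resp)["inputs"].keys())

# The same extraction against an illustrative response body (shape assumed):
sample = {"id": "38247d270c7cb1bb",
          "inputs": {"119": {"label": "Input Dataset"}}}
step_ids = sorted(sample["inputs"].keys())
```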
I haven't had a chance to do anything on this yet, but I'll see if I can work
something out in the near future.
-Dannon
On Sep 7, 2011, at 9:34 PM, Glen Beane wrote:
On Sep 7, 2011, at 8:10 PM, Edward Kirton wrote:
i'm resurrecting this thread to see if there's any more support for the
The problem was caused by an unimplemented method in the TaskWrapper. I've
fixed it in changeset 6026:3f926d934d98.
-Dannon
On Sep 20, 2011, at 6:58 PM, Chorny, Ilya wrote:
Hi Nate,
We are having an issue that when use_tasked_jobs = True, the
job_wrapper.user = None in drmaa.py. Do
Ann,
Unless I misunderstand what you're asking for, this should already be the case.
For instance, if a workflow step takes in a file of 'tabular' type, only tabular
files (and subtypes, including interval, etc.) should be presented in the
dropdown list. Types such as plain text, binary files,
Reference genomes aren't distributed along with the galaxy source due to size
and other factors, but we do have a walkthrough for setting it all up on the
wiki at http://wiki.g2.bx.psu.edu/Admin/NGS%20Local%20Setup, see the Setting up
References section.
Thanks!
Dannon
On Oct 25, 2011, at
Oren,
Not yet. Right now, the only path to accomplish this would be to upload via
the API to a library, and then copy the contents to a history. This is a
feature that definitely needs to be implemented, we simply haven't had a chance
yet. If you're interested in writing it, I'm sure we'd
The easiest way is for you to issue a bitbucket pull request.
http://confluence.atlassian.com/display/BITBUCKET/Forking+a+Bitbucket+Repository#ForkingaBitbucketRepository-Step-by-StepExampleNowthePullRequest
Thanks for the contribution!
-Dannon
On Nov 1, 2011, at 5:16 AM, Hanfei Sun wrote:
On 10/31/2011 6:19 PM, Dannon Baker wrote:
Oren,
Not yet. Right now, the only path to accomplish this would be to upload via
the API to a library, and then copy the contents to a history. This is a
feature that definitely needs to be implemented, we simply haven't had a
chance yet
it
and try to run it from the published workflows page I get that error.
Ilya
-Original Message-
From: Dannon Baker [mailto:dannonba...@me.com]
Sent: Monday, November 07, 2011 9:00 AM
To: Chorny, Ilya
Cc: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy-dev] Error when clicking
Yan,
1. Right now, we host and you can use usegalaxy.org ( main.g2.bx.psu.edu ),
and test.bx.psu.edu. Main will be the most reliable server and what we
recommend you use, but test has a few beta tools that aren't available on main
yet. That said, test.g2.bx.psu.edu is very much for testing
Yep, you're absolutely right. Looking at it, the intent was for slug_set to be
a flag that indicates if *any* slug was set, so we know to flush (only once, we
don't want to do so inside the loop). I've fixed this in changeset
6258:6ec2d7f4a64d.
Thanks!
-Dannon
On Nov 10, 2011, at 2:43 PM,
Option 4: local installation, where is the official download source (link and
installation)?
Option 5: Cloud installation, do you have any more detailed information
regarding supporting hardware and software?
Best Wishes,
Yan
On Mon, Nov 7, 2011 at 8:26 PM, Dannon Baker dannonba
, 2011, at 4:13 PM, Yan Luo wrote:
Hi, Dannon,
Thanks for your response. I just tried test.bx.psu.edu, but it was not
available. Could you please double check the address?
Looking forward to hearing from you.
Thanks,
Yan
On Tue, Nov 15, 2011 at 3:53 PM, Dannon Baker dannonba...@me.com
Could you send me the traceback and perhaps a copy of the misbehaving tool
config files? I'll be happy to take a look.
Thanks!
Dannon
On Nov 21, 2011, at 9:53 AM, Steven Platt wrote:
A colleague of mine posted this on the Users list ... 4 days later and
no replies! I'm hoping that one of
Hi Craig,
Thanks for your interest in the galaxy API. For the parameters you're
uncertain about:
The 'workflow_id' is indeed the encoded workflow id. You can get this by
encoding it yourself, or doing a GET on /api/workflows for a list of *all*
workflows and their encoded ids (see example
Graham,
How are the output files being handled in split_var_length_barcodes_wrapper.py?
See http://wiki.g2.bx.psu.edu/Admin/Tools/Multiple%20Output%20Files for a
reference-- you're going to want to follow the bottom example there involving
$__new_file_path__ and the naming convention
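For context, the convention on that wiki page has the wrapper script write extra datasets into $__new_file_path__ with names Galaxy can parse back into datasets after the job runs. A sketch of the filename scheme (from memory of that page; treat the exact pattern as an assumption and confirm against the wiki):

```python
import os

def extra_output_name(output_id, designation, visibility, ext):
    """Filename pattern (primary_<id>_<designation>_<visibility>_<ext>) that
    Galaxy scans $__new_file_path__ for; confirm details against the wiki page."""
    return "primary_%s_%s_%s_%s" % (output_id, designation, visibility, ext)

# e.g. a wrapper writing a second FASTQ output might create:
name = extra_output_name(1234, "split1", "visible", "fastq")
path = os.path.join("/path/to/new_file_path", name)  # $__new_file_path__ assumed
```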
Toqa,
Just to make sure I've understood your question: the problem is that a tool
that you're trying to wrap doesn't provide a way to specify a particular output
filename? Take a look at the from_work_dir attribute of a data element.
Or are you asking how to define outputs in general? For
Richard,
You're correct in that currently the workflow API affords no method for runtime
modification of tool parameters, other than inputs. Depending on your needs,
it might be feasible to have a few static workflows that you reuse often via
the workflow API. If that isn't the case, and you
and I am interested in this one only
..._CNTGS_DIST0_EM20.txt
Is there any other method to overcome such a problem?
I have a presentation tomorrow. Hope things will work fine.
Thank you,
From: Dannon Baker [dannonba...@me.com]
Sent: Monday, December 05, 2011
The only thing that sticks out to me is the 'export' format listed in inputs.
What are the children datatypes of export, or how is that set up? The filter
logic automatically includes all children datatypes of the specified formats.
-Dannon
On Dec 7, 2011, at 5:10 AM, Louise-Amélie Schmitt
On 05/12/2011 13:37, Dannon Baker dannonba...@me.com wrote:
Graham,
How are the output files being handled in
split_var_length_barcodes_wrapper.py? See
http://wiki.g2.bx.psu.edu/Admin/Tools/Multiple%20Output%20Files for a
reference-- you're
Previously, you did, as robots.txt was being served exclusively at
/static/robots.txt by default. I've just committed a fix in 6441:ad9a6d8afded
that resolves this.
-Dannon
On Dec 13, 2011, at 8:07 AM, SHAUN WEBB wrote:
Thanks. Do I need to have Galaxy running via apache for this to take
Roger,
The repeat construct creates an array of the various inputs over which you
can iterate within the command template of your tool config. See the
concatenate datasets wrapper (tools/filters/catWrapper.xml) for a full example,
but here are the relevant sections inline:
<repeat>
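The rest of the inline example is stripped and truncated by the archive; from memory, the relevant sections of that wrapper look roughly like the following (check tools/filters/catWrapper.xml in your checkout for the authoritative version):

```xml
<command>cat $input1
#for $q in $queries
 ${q.input2}
#end for
> $out_file1
</command>
<inputs>
  <param name="input1" type="data" label="Concatenate Dataset"/>
  <repeat name="queries" title="Dataset">
    <param name="input2" type="data" label="Select"/>
  </repeat>
</inputs>
```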
Thon,
This is a problem with the default client_max_body_size option in nginx being
set far too small in the nginx.conf on the cloud AMI. It'll be fixed with our
next AMI update, but you could also SSH in to your instance, edit the
nginx.conf to change the client_max_body_size to something
In recent versions, the setting in the Galaxy config to keep job files is:
cleanup_job = never
I discovered the problem. My pbs queue has a wall time restriction of 3600
seconds.
Is there a way to configure Galaxy to keep the job files for only failed
jobs? I'd like to keep
Christophe,
This is an issue that was resolved in changeset 6368:52de9815a7c4. The
changeset hasn't made it into galaxy-dist yet, which I assume you're using, but
that should happen very soon. You could wait for the update, or if you'd like,
you can pull the changes directly from
Carlos,
The method you describe below for mapping inputs is precisely the intended
approach. One reason for the long identifier being used instead of a simple
step number is that step number (ordering as you would see in the usual run
workflow dialog) can change without it being obvious to
Leon,
I'm not seeing this behavior in any of my instances. I assume they work
correctly through the standard galaxy interface, but is there anything special
about the workflows in question? Is there perhaps a tool with no inputs in use?
Thanks!
-Dannon
On Jan 6, 2012, at 11:59 AM, Leon
with this work around.
Cheers,
Leon
On Mon, Jan 9, 2012 at 3:18 PM, Dannon Baker dannonba...@me.com wrote:
Leon,
I'm not seeing this behavior in any of my instances. I assume they work
correctly through the standard galaxy interface, but is there anything
special about the workflows
Hi Carlos,
The enhancement request you linked would cover exactly what you want to do, but
unfortunately I don't have any updates other than that it's definitely still on
the wish list.
-Dannon
On Jan 11, 2012, at 11:02 AM, Carlos Borroto wrote:
Hi,
I would like to use the Rename Dataset
That should be ./run.sh --daemon, not --start-daemon. The error is just that
--start-daemon is an unknown option.
-Dannon
On Jan 17, 2012, at 9:25 AM, Ryan Golhar wrote:
So I just tried restarting Galaxy and it downloaded a bunch of new eggs then
errored out. My run.sh script didn't
Leandro,
Thanks for reporting this, I'm able to reproduce it and will let you know when
I have a fix.
-Dannon
On Jan 20, 2012, at 11:07 AM, Leandro Hermida wrote:
Hi,
There seems to be a weird bug with the Input dataset workflow
control feature, hard to explain clearly but I'll try my
parallelization feature but now it doesn't seem to be the case which is very
good.
thanks,
Leandro
On Wed, Jan 25, 2012 at 2:55 PM, Dannon Baker dannonba...@me.com wrote:
Leandro,
Thanks for reporting this, I'm able to reproduce it and will let you know
when I have a fix.
-Dannon
Pieter,
Take a look at templates/root/history_common.mako, specifically the
render_dataset and render_download_links methods, that should get you started
on adding custom buttons. The other buttons you see listed on that page are
conditionally visible based on settings in configuration like
Hi Dhivya,
Have you updated this instance or made any other changes recently? Do you see
any errors in the logs?
-Dannon
On Jan 31, 2012, at 3:03 PM, dhivya arasappan wrote:
Hi,
We have our own galaxy instance and I'm trying to add datasets to a data
library. It has always worked
Hi Cory,
The new call to sanitize_html was introduced to more effectively prevent
malicious content and possible XSS attacks, though I can't think off the
top of my head why we couldn't allow style content. I'll see what I can
do about relaxing the filter a little.
Thanks!
-Dannon
On
the best path forward is probably relaxing the filter
a bit, the initial pass was somewhat draconian. Would relaxing the
filter to allow style content to pass through work for your needs?
-Dannon
Thanks!
Cory
On 2012-02-01, at 12:01 PM, Dannon Baker wrote:
Hi Cory,
The new call
http://test.g2.bx.psu.edu/u/cjfields/w/test
maybe it's tool-specific?
chris
On Feb 3, 2012, at 1:10 AM, Dannon Baker wrote:
Can you share a workflow that's failing for you on test with me? I just
tried a simple workflow with nothing but an input and FastQC and ran with
multiple inputs
Hi Dave,
Yes, Galaxy's standard run-workflow dialog has a feature where you can select
multiple datasets as input for a single "Input Dataset" step. To do this, click
the icon referenced by the tooltip in the screenshot below to select multiple
files. All parameters remain static between executions
installation for
one method of doing this.
-Dannon
On Feb 6, 2012, at 4:53 PM, Dave Lin wrote:
Thank you Dannon. That is helpful.
What if I need to specify multiple inputs per run (i.e. .csfasta + .qual
file)?
-Dave
On Mon, Feb 6, 2012 at 1:27 PM, Dannon Baker dannonba...@me.com wrote:
Hi
Hi Charlie,
Our main galaxy server is actually running CentOS 5.10. The galaxy cloud
instances are built on Ubuntu 10.04 LTS. That said, as far as I know, any *nix
with python 2.5 or above should work fine.
The following two links have lots of information regarding setting up your own
results in a new
history option is checked?
This new feature is indeed very useful (thanks a million for it) but the
numbered suffixes make it hard to track what new history belongs to which
dataset.
Thanks,
L-A
On 06/02/2012 23:00, Dannon Baker wrote:
This method only works
What revision is your Galaxy instance at? I'm not seeing this behavior on tip
with a simple test, it may have been something we've fixed in a more recent
revision.
-Dannon
On Feb 10, 2012, at 11:14 AM, Petit III, Robert A. wrote:
Hi there,
I've run into an issue on my local galaxy
Hi Paul,
Thanks for this suggestion, it would definitely make sense to create a
reference to the actual workflow inputs in the new history. I'll see what I
can do!
-Dannon
On Feb 14, 2012, at 11:05 AM, Paul Gordon wrote:
Hi all,
I have noticed that when I run a workflow with output to a
Hi Jorrit,
There was a permissions issue with the new tool snapshot (snap-b28be9d5) created
yesterday. I've fixed it, and it should work the next time you start an instance.
Thanks!
-Dannon
On Feb 14, 2012, at 10:37 AM, Jorrit Boekel wrote:
Dear list,
I am trying to deploy Galaxy as an
Sure, I'm happy to help, though in the future it's probably best to mail
galaxy-dev (cc'd for ticket tracking) directly as there are several of us that
may be able to answer and get a response out sooner.
That said, your problem is likely due to a permissions issue with the new tool
I hadn't had a chance to follow up again yet, but in your workflow, are you
using an Input Dataset step, or do you have the workflow beginning directly
with a tool?
-Dannon
On Feb 14, 2012, at 3:26 PM, Petit III, Robert A. wrote:
Here's the latest. I have tested this on Chrome, Firefox, and
Hi,
This is not a tool from our distribution. Please send questions about local
instances, core galaxy development, or your own tool development to the
galaxy-dev@lists.bx.psu.edu mailing list and *not* this galaxy-user mailing
list.
That said, if you could provide more information about
It's definitely an experimental feature at this point, and there's no wiki, but
basic support for breaking jobs into tasks does exist. It needs a lot more
work and can go in a few different directions to make it better, but check out
the wrappers with parallelism defined, and enable
Are those four tools being used on Galaxy Main already with this basic
parallelism in place?
Main still runs these jobs in the standard non-split fashion, and as a resource
that is occasionally saturated (and thus doesn't necessarily have extra
resources to parallelize to) will probably continue
On Feb 16, 2012, at 5:15 AM, Peter Cock wrote:
On Wed, Feb 15, 2012 at 6:07 PM, Dannon Baker dannonba...@me.com wrote:
Main still runs these jobs in the standard non-split fashion, and as a
resource that is occasionally saturated (and thus doesn't necessarily have
extra resources
Very cool, I'll check it out! The addition of the JSON files is indeed very
new and was likely unfinished with respect to the base splitter.
-Dannon
On Feb 16, 2012, at 1:24 PM, Peter Cock wrote:
On Thu, Feb 16, 2012 at 4:28 PM, Peter Cock p.j.a.c...@googlemail.com wrote:
Hi Dan,
I think
Peter has it right in that we need to do this internally to ensure
functionality across a range of job runners. A side benefit is that it gives
us direct access to the tasks so that we can eventually do interesting things
with scheduling, resubmission, feedback, etc. If the overhead looks to
How did the user share the workflow with you, via link or directly with your
user? And which import option are you using? Does Clone in the workflow
context menu work?
-Dannon
On Feb 24, 2012, at 3:25 PM, Iry Witham wrote:
I have a Galaxy user who has created a workflow from a history and
Hi L-A,
This exists. See your universe_wsgi for the following lines:
# Optional list of email addresses of API users who can make calls on behalf of
# other users
#api_allow_run_as = None
And then with any api call from a user in the allow list above you can add an
extra parameter to the
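The sentence is cut off above; the extra parameter is run_as, set to the encoded id of the user to act on behalf of. A sketch of attaching it to a request's query parameters (parameter name per the 2011-era API; the values here are hypothetical):

```python
from urllib.parse import urlencode

def api_params(api_key, run_as_user=None, **extra):
    """Assemble query parameters for an API call, impersonating another
    user via run_as when the caller's key is listed in api_allow_run_as."""
    params = {"key": api_key}
    if run_as_user is not None:
        params["run_as"] = run_as_user   # encoded user id
    params.update(extra)
    return urlencode(sorted(params.items()))
```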
because it's Javascript we're inserting.
I understand the security concerns though. Any advice on a more secure way to
allow particular content? Perhaps a whitelist of allowed scripts?
On Wed, Feb 1, 2012 at 23:58, Cory Spencer cspen...@sprocket.org wrote:
On 2012-02-01, at 1:33 PM, Dannon
This is definitely something we're looking to implement in Galaxy. I started
working on a proof of concept a while back for boxing workflows up as tools,
but I have not had a chance to finish it yet.
What Thon suggests in terms of a simple copy/paste is interesting, and probably
a far simpler
John,
I'm still working on figuring out who created the AMI you're running across (it
isn't under the main Galaxy cloud account, see the '072133624695' in the name
string), but for future reference we'll always keep the recommended AMI to use
listed on usegalaxy.org/cloud. At this point, it's
This was addressed in 6788:e58a87c91bc4. The reason for the initial change
that's causing these display issues was to eliminate potential XSS
vulnerabilities. There's now a configuration option (sanitize_all_html, which
is True by default) for local instances where you can disable the extra
Yes, see the table 'job'. You may or may not find this helpful, but here's a
big database schema from our wiki:
http://wiki.g2.bx.psu.edu/DataModel?action=AttachFile&do=view&target=galaxy_schema.png
-Dannon
On Mar 9, 2012, at 8:00 AM, christin weinberg wrote:
Hi,
thanks for the answer. I
Just one extra thought on this-- If you leave your instance up all the time it
may be worth looking into having a reserved micro instance up as the front end
(cheap, or free, with your intro tier) with SGE submission disabled. Then,
enable autoscaling (max 1) of m1.large/xlarge instances.
Hi Thon,
Thanks for reporting this. I see what the problem is here at least for the
clone duplication, and I've committed a fix in 6833:e8e361707865 that will
affect all workflows going forward.
Unfortunately, there isn't a complete solution for fixing the extra tags. The
problem was that
Ed,
The top bar is defined in templates/webapps/galaxy/base_panels.mako. To create
a new masthead tab, you would have to edit that file in addition to creating
the relevant controller methods for the functionality you wish to add.
-Dannon
On Mar 20, 2012, at 8:32 PM, Edward Hills wrote:
Liisa,
I'm not able to reproduce this locally with a fresh galaxy-dist. Is there
anything unique about your workflows or configuration here? And, this might be
a long shot, but do you frequently use tags with your workflows? There was an
issue that I've fixed with this recently that would
Assuming everything is generated correctly for the html report, what you're
probably running into is Galaxy's html sanitization. In your Galaxy instance's
universe_wsgi.ini you can set the following option to disable this:
# Sanitize All HTML
# By default, all tool output served as 'text/html' will be sanitized.
sanitize_all_html = False