University of North Carolina at Chapel Hill
From: Dannon Baker [dannonba...@me.com]
Sent: Tuesday, January 29, 2013 1:48 PM
To: Waldron, Michael H
Cc: Paul Boddie; James Taylor; Greg Von Kuster; galaxy-...@bx.psu.edu Dev
Subject: Re: [galaxy-dev
What probably happened here is that, due to the file size, the browser upload
failed but this went undetected.
The good news is that you *can* tell galaxy to copy directly, or you could even
use the files exactly where they are without any copy.
What you want to do is enable this section in
Hey Simon,
You're right -- the EMBOSS version supported by the Galaxy wrappers is
currently EMBOSS 5. We do have a Tool Dependency page here:
http://wiki.galaxyproject.org/Admin/Tools/Tool%20Dependencies, but
unfortunately while it lists versions for many tools it doesn't cover them all.
Hey Andy,
Financially, it's probably best to start small cluster-wise. What I'd probably
recommend for your particular project would be using a single m1.xlarge
instance as the head node, seeing how that goes, and then adding workers as you
find it useful. Should you find that it isn't
Cloudlaunch on the main public instance is fully supported and should work fine
-- I use it regularly for launching instances without issue.
There was a brief EC2 API outage yesterday (on the Amazon end) that caused
intermittent errors to all users of the API (including cloudlaunch), but that
Geert, this is great stuff!
One small correction -- the API is enabled by default as of revision
7022:8376ad08ae41 (April 2012).
-Dannon
On Jan 16, 2013, at 1:17 AM, Geert Vandeweyer geertvandewe...@gmail.com wrote:
Hi,
I've put together some examples for activating and using the API.
While it isn't bulletproof yet (restarting jobs that fail due to spot instance
reclaims, etc), Cloudman (usegalaxy.org/cloud) does currently have support for
running worker nodes as spot instances. When you go to add nodes, just click
Use Spot Instances, and add your spot price.
For obvious
Hi Matthew,
Would you like to try out what we put together for modENCODE DCC?
Please see the README file in the docs folder at
https://github.com/modENCODE-DCC/Galaxy
Thanks,
Q
On Fri, Jan 18, 2013 at 10:17 AM, Dannon Baker dannonba...@me.com wrote:
Cloudlaunch on the main public
Thanks for pointing this out. I 'enhanced' the Trello reporting last Friday to
optionally @mention submitters as a method for both claiming a submission and
easier notifications, but it looks like something has gone awry and I'm looking
into it now.
-Dannon
On Jan 14, 2013, at 9:58 AM,
There's an issue with the MACS installation on the current cloud tools volume
(which will be fixed with the next volume update coming soon).
For existing instances, you can get MACS working correctly by executing the
following two commands (which change the default version of macs used) after
No -- we're reorganizing the storage approach the cloud uses and will not be
updating for this distribution release. Updates to the cloud deployment will
most likely resume with the next distribution.
Until then, you can always update your individual cloud instances to any
revision you'd
as an example or just the Galaxy software?
Any guess on when the next distribution will release occur?
On 12/22/12 9:04 AM, Dannon Baker dannonba...@me.com wrote:
No -- we're reorganizing the storage approach the cloud uses and will not
be updating for this distribution release. Updates to the cloud
that align with the XML mapping in Galaxy. Would go a long way to
make this easy and at the same time minimize the support requirements on
why something is not working.
On 12/22/12 9:11 AM, Dannon Baker dannonba...@me.com wrote:
The cloud automated update (through the admin UI) won't pull
My guess is that what's going on here is that you're still logged in as the
ubuntu user. `sudo su galaxy` and give it another shot.
-Dannon
On Dec 22, 2012, at 1:51 PM, Greg Von Kuster g...@bx.psu.edu wrote:
Hi Quang,
I didn't realize you were running on the cloud, which is not my area of
What specific errors are you seeing? Some tools have external dependencies
that need to be installed.
-Dannon
On Dec 18, 2012, at 11:29 AM, Tilahun Abebe tilahun.ab...@uni.edu wrote:
Hi,
We installed a local galaxy a couple of days ago following the basic
installation instruction
Check the setting 'allow_user_dataset_purge' in your universe_wsgi.ini -- this
is false by default to prevent errors, but changing that should allow users of
your instance to purge datasets permanently.
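For reference, the relevant entry is a one-line change (section placement may vary slightly between releases, so check your universe_wsgi.ini.sample):

```ini
# In universe_wsgi.ini, under [app:main]:
# allow users to purge (permanently delete) their datasets
allow_user_dataset_purge = True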
-Dannon
On Dec 14, 2012, at 12:18 PM, Fenglou Mao feng...@gmail.com wrote:
I installed
? Where could I find this file ?
Sarah
Dannon Baker wrote:
Do you see any errors in Galaxy's paster.log, or in the javascript console?
-Dannon
On Dec 11, 2012, at 8:40 AM, Sarah Maman
sarah.ma...@toulouse.inra.fr
wrote:
Hello,
For one of my tools that I have added
On Dec 10, 2012, at 6:17 PM, Fabiano Lucchese fabiano.lucch...@hds.com wrote:
I appreciate your effort to help me, but it looks like my AWS account
has some serious hidden issues going on. I completely wiped out
CloudMan/Galaxy instances from my EC2 environment as well as their volumes,
Hey Dave,
What revision are you running locally? And, just to confirm, in galaxy the
file is recognized as a 'tabular' file type?
-Dannon
On Dec 7, 2012, at 1:25 PM, Dave Walton dave.wal...@jax.org wrote:
I'm seeing some weird behavior in our local galaxy instance and am wondering
if
This should be resolved in changeset 1ac27213bafb in galaxy-central. Thanks
for pointing this out!
-Dannon
On Dec 3, 2012, at 11:37 AM, Marc Logghe marc.log...@ablynx.com wrote:
Hi,
The conf of the parameters in question looks like this:
<param name="project1" type="select" label="Project"
This isn't (at least at first) Pause/Resume as you might be expecting - where
you could manually pause a currently running job and continue it later. What
we're doing at least in the first pass is using 'Paused' as an internal state
that jobs only go into in two scenarios:
1) User quota is
Yes. If you click on the little sprocket icon in your history panel and go to
Copy Datasets, you'll be able to do this.
-Dannon
On Nov 20, 2012, at 2:52 PM, Thyssen, Gregory - ARS
gregory.thys...@ars.usda.gov wrote:
Hello
Is it possible to move a file from one history to another?
I have
This is actually possible using data libraries. What you'd want to do is
upload by filepath (described in the wiki page Brad linked, heading 'Upload
files from filesystem paths') and check the 'No' box under 'Copy data into
Galaxy'.
-Dannon
On Nov 20, 2012, at 4:13 PM, Langhorst, Brad
'5013377e0bf7'!
Is there any other way of manually getting it?
(Sorry, I'm not an expert on those new SCMs)
Sanjar.
On 11/13/2012 06:33 PM, Dannon Baker wrote:
Sanjar,
This is fixed as of 5013377e0bf7. This may not be in the next distribution,
but will be in the one after
It looks like you have conflicting blastxml entries. Edit your
datatypes_conf.xml to remove any references to blastxml (the toolshed manages
datatypes separately), restart galaxy, and you should be good to go.
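The duplicate entry usually looks something like the line below (the attributes shown are illustrative; your datatypes_conf.xml may differ slightly by revision):

```xml
<!-- remove duplicate blastxml entries like this one; the toolshed
     now manages the datatype itself -->
<datatype extension="blastxml" type="galaxy.datatypes.xml:BlastXml" mimetype="application/xml" display_in_upload="true"/>
```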
-Dannon
On Nov 13, 2012, at 9:19 AM, rolandomantil...@gmail.com wrote:
Sent from
Richard,
This is fixed as of 5013377e0bf7. This may not be in the next distribution,
but will be in the one after that. Of course, you can manually pull the change
at any time.
-Dannon
On Nov 9, 2012, at 4:20 PM, Richard Park rp...@bu.edu wrote:
Hi Guys,
I updated to the latest galaxy
Sanjar,
This is fixed as of 5013377e0bf7. This may not be in the next distribution,
but will be in the one after that. Of course, you can manually pull the change
from galaxy-central at any time.
-Dannon
On Nov 13, 2012, at 9:45 AM, Sanjarbek Hudaiberdiev hudai...@icgeb.org wrote:
I
On Nov 12, 2012, at 10:23 AM, Jorrit Boekel jorrit.boe...@scilifelab.se wrote:
I was therefore looking for fault tolerance mechanisms in the galaxy project,
which I seem to remember existed. Somehow I can't find anything about it
right now though.
I've tested a little bit, and it seems
Unfortunately the cloud instance upgrade path requires some manual intervention
here due to tool migrations. SSH in to your instance, edit
/mnt/galaxyTools/galaxy-central/datatypes_conf.xml removing any references to
BlastXML. Save the file, restart galaxy, and you should be good to go.
be deleted. Current release version of abyss
as of May 30 2012 is Abyss 1.3.4
On 11/8/12 11:00 AM, Dannon Baker dannonba...@me.com wrote:
Unfortunately the cloud instance upgrade path requires some manual
intervention here due to tool migrations. SSH in to your instance, edit
/mnt
trying to
figure out how to use the software as designed.
Thanks
Scooter
On 11/8/12 11:49 AM, Dannon Baker dannonba...@me.com wrote:
Neither abyss wrapper in the toolshed installs binaries (if one did, you'd see
a tool_dependencies.xml in the repository); that's left up to the end user.
You might
to read builds file: [Errno 2] No such file or directory:
'/mnt/galaxyTools/galaxy-central/lib/galaxy/util/../../../tool-data/shared/
ncbi/builds.txt'
On 11/8/12 12:44 PM, Dannon Baker dannonba...@me.com wrote:
You do not need to restart or add/remove worker nodes, the master's tool
and data
On 11/8/12 1:55 PM, Dannon Baker dannonba...@me.com wrote:
That path should resolve to
/mnt/galaxyTools/galaxy-central/tool-data/shared/ucsc/ (or ensembl, ncbi,
etc)
Can you tell me the output of 'hg tip', 'hg st', and 'hg diff' from the root
/mnt/galaxyTools/galaxy-central
Hi Quang,
This is indeed temporary. You can get things working in the interim by adding
BWA via the toolshed.
-Dannon
On Nov 7, 2012, at 2:12 PM, Quang Trinh quang.tr...@gmail.com wrote:
Hi dev,
I launched an instance of Galaxy on Amazon ( AMI ami-da58aab3 ) this
morning and noticed bwa
On Nov 6, 2012, at 11:25 PM, Vladimir Yamshchikov yaxi...@gmail.com wrote:
Error attempting to display contents of library (SC datasets):
(OperationalError) no such column: True u'SELECT dataset_permissions.id AS
dataset_permissions_id, dataset_permissions.create_time AS
Hi Juan,
Thanks for reporting this, it is indeed a bug. The fix below isn't quite
correct (if there is an external metadata job, we do actually want to terminate
it) but I'll take care of it.
For reporting bugs in the future, certainly feel free to message this list or
you can also file an
Galaxy Cloudman does not support Stop/Start through the AWS interface; this is
known to cause problems and should be avoided. The persistence design allows
for complete termination and restart -- the issue with your startup zone can be
worked around currently by launching through the AWS
For this instance, you'll need to restart using the old method for launching
via the console, specifying the zone 1b. Detection of the zone volumes is in
for existing clusters and specifying those for launch is on the short list of
things coming up for cloud launch.
On Oct 31, 2012, at
Hey Peter,
I must have missed this the first time through, I like the change below and can
apply it.
-Dannon
On Oct 30, 2012, at 8:26 AM, Peter Cock p.j.a.c...@googlemail.com wrote:
Hi all,
The issue I raised below about column alignment in the display of
tabular files still affects the
On Oct 29, 2012, at 4:40 PM, Oleksandr Moskalenko o...@hpc.ufl.edu wrote:
This is an interesting project. I'm glad to see more people working on
phylogenetics related wrappers and workflows. I wonder if we could get a
Phylogenetics category added to the main Galaxy Toolshed, so we could put
On Oct 24, 2012, at 3:36 AM, Joachim Jacob joachim.ja...@vib.be wrote:
PS: I would have posted on Trello, but I am not allowed to do so. I
understand that this is the way to propose enhancements.
For adding cards, anyone can use the form at http://galaxyproject.org/trello.
A new card will be
On Oct 22, 2012, at 3:52 AM, David van Enckevort david.van.enckev...@nbic.nl
wrote:
On the main galaxy page there is a link 'Report Issue' to the bitbucket issue
tracker, however since a few weeks it is not possible to view or report
issues anymore since it requires membership of the
On Oct 21, 2012, at 6:11 AM, Tom Hait sth...@gmail.com wrote:
so I tried to change the URL to: http://localhost:8080,
http://127.0.0.1:8080 ...
It also didn't work.
Any ideas about what could go wrong?
First thing I'd check would be to verify that your galaxy server is currently
running, and
The renaming input selector uses # instead of $ to allow combinations with
workflow parameters.
So, in your case, #{input} should work. There are also options (basename,
upper, lower) that you can use to format the text. So, #{input | upper} would
use the input name but ensure that it was
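To illustrate the behavior described above, here is a simplified sketch of how such an option pipeline could work. This is not Galaxy's actual implementation; the option names and the `basename` semantics are assumptions based on the description:

```python
import os

# Simplified mimic of the #{input | option} rename syntax discussed above.
# Each option transforms the input dataset's name; options can be chained
# with the pipe character.
OPTIONS = {
    "upper": str.upper,
    "lower": str.lower,
    # Assumption: basename strips the file extension.
    "basename": lambda s: os.path.splitext(s)[0],
}

def render(template_body, input_name):
    # template_body is the text inside #{...}, e.g. "input | upper".
    # The first token names the workflow input; the rest are options.
    parts = [p.strip() for p in template_body.split("|")]
    value = input_name
    for opt in parts[1:]:
        value = OPTIONS[opt](value)
    return value

print(render("input | upper", "reads.fastq"))  # READS.FASTQ
```

So `#{input | upper}` would produce the input's name uppercased, exactly as described in the reply above.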
. September 2012 4:10 PM, Dannon Baker dannonba...@me.com wrote:
The renaming input selector uses # instead of $ to allow combinations
with workflow parameters.
So, in your case, #{input} should work. There are also options
(basename, upper, lower) that you can use to format the text. So
Kourosh,
My first guess would be a misconfiguration of the security groups or the like
-- are you launching the instance using the regular AWS console?
If so, it might be worth it to try using Galaxy's Cloud Launch at
https://main.g2.bx.psu.edu/cloudlaunch which will help you format any
Hanfei,
I'd be happy to take a look at the report and share it with the rest of the
team if you'd like to send it directly to me.
Regarding SSL, this is definitely something that you can set up for your own
instance, see the documentation for configuring proxies on the wiki
The cloud admin interface's automatic update mechanism doesn't currently
support toolshed updates due to the interaction required. At this point, if
you're updating a cloud instance you'll need to do it manually. SSH in, switch
to the galaxy user, navigate to /mnt/galaxyTools/galaxy-central,
We have a wiki page describing the addition of both simple and complex
datatypes, here's the link:
http://wiki.g2.bx.psu.edu/Admin/Datatypes/Adding%20Datatypes
Let me know if this isn't sufficient, and I can try to help.
-Dannon
On Aug 28, 2012, at 3:56 PM, Alfredo Guilherme Silva Souza
Hi Makis,
This should work to get you an update-able galaxy instance going forward,
without having to manually migrate anything.
Please note that this method will clobber any existing changes to code, so if
you've modified any of the core galaxy you'll need to patch the new files. All
of
The problem here is that the public toolshed interface has been updated more
recently than the galaxy install on that cloud instance. You should be good
to go if you update galaxy (possible through the admin interface at
your_ec2_instance/cloud/admin/) to the latest version.
-Dannon
On Jul
For mail configuration, see the Mail and Notification section in your
universe_wsgi.ini -- all you need to do is specify an SMTP server that Galaxy
can use.
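A minimal sketch of that section follows (the server address is a placeholder; the username/password lines are only needed if your SMTP server requires authentication):

```ini
# In universe_wsgi.ini -- Mail and Notification section
smtp_server = smtp.example.org:587
#smtp_username = None
#smtp_password = None
```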
Regarding the invalid password, keep in mind that this would be your previously
registered password (from when you were using the sqlite
I see the bug you're running into. As a temporary solution, executing
workflows via the API with 'no_add_to_history' in the payload should work as
expected. I'll have a permanent fix out shortly.
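As a sketch of the workaround (the workflow ID, dataset ID, and history name below are placeholder values; adjust for your instance):

```python
import json

# Hypothetical payload for executing a workflow via the Galaxy API.
payload = {
    "workflow_id": "f2db41e1fa331b3e",          # placeholder workflow ID
    "history": "New workflow run",               # name for the new history
    "ds_map": {"0": {"src": "hda", "id": "a799d38679e985db"}},
    # Workaround for the bug discussed above: include this key so the
    # buggy add-to-history step is skipped.
    "no_add_to_history": True,
}
body = json.dumps(payload)
# body would be POSTed to <galaxy_url>/api/workflows?key=<api_key>
```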
-Dannon
On Jul 5, 2012, at 5:12 PM, Thon Deboer wrote:
Hi,
I am continuing to struggle
On Jul 5, 2012, at 5:33 PM, Thon Deboer wrote:
data['no_add_to_history']=True ?
Should do it.
to the tool inputs
at the highest level of the workflow, and you'll see the multiple dataset
flagging when you go to run it next time.
-Dannon
On Jul 4, 2012, at 3:19 AM, Bernd Jagla wrote:
Dannon Baker dannonbaker@... writes:
Hi Dave,
Yes, galaxy's standard run-workflow dialog has a feature
It looks like what happened is that you've updated the instance that was
originally connected to this database from galaxy-central at some point
recently. The current tip of galaxy-central is at database version 103.
The first two options that come to mind are:
1) Hook a fresh galaxy-central
to contribute my API enhancements specifically for importing,
creating, deleting workflows. I also had a delete library api call,
but I think that was also added to the galaxy code recently.
thanks,
Richard
On Tue, Apr 10, 2012 at 3:45 PM, Dannon Baker dannonba...@me.com wrote:
Richard
On Jun 14, 2012, at 11:37 AM, Peter Cock wrote:
My hunch is that all the child-jobs are created and added to the
queue and some of this is happening in parallel leading to
contention over the SQLite database. Does this sound likely?
Yep, this is exactly what's happening. It isn't just the
On Jun 14, 2012, at 12:48 PM, Peter Cock wrote:
In a separate example with 33 sub-tasks, there were two of these
inversions, while in yet another example with 33 sub-tasks there was
a trio submitted out of order. This non-deterministic behavior is a
little surprising, but in itself not an
You can probably modify /home/galaxy/.sge_request to make it work for now, but
for portability of tools I'd still recommend the requirements tag. Changes
to .sge_request will not be persisted after shutdown, so you'll have to redo it
with each new cluster.
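For portability, the requirements tag mentioned above lives in the tool's XML wrapper and looks something like this (the package name and version here are examples, not a prescription):

```xml
<!-- declared in the tool's XML wrapper; name/version are illustrative -->
<requirements>
  <requirement type="package" version="1.3.4">abyss</requirement>
</requirements>
```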
On Jun 13, 2012, at 9:25 PM, Jose
. it just does not show the
rest of the file which it normally does
Thon
On May 29, 2012, at 03:01 PM, Dannon Baker dannonba...@me.com wrote:
There isn't a universe toggle for the tabular display. The VCF datatype
inherits the pretty printing from the base tabular datatype, and if you'd
;set=variant5-variant49-variant10
On May 29, 2012, at 03:39 PM, Dannon Baker dannonba...@me.com wrote:
Hmm. Chrome on OSX looks good to me, please do send over the VCF file and I
can take a look.
Do you see any javascript errors in the browser console?
On May 29, 2012, at 6:31 PM
On a related point, I've noticed sometimes one child job from a split task
can fail, yet the rest of the child jobs continue to run on the cluster
wasting
CPU time. As soon as one child job dies (assuming there are no plans for
attempting a retry), I would like the parent task to kill all
Liisa,
Are there any errors in your paster.log or javascript console? What revision
are you running?
-Dannon
On May 3, 2012, at 2:28 PM, Liisa Koski wrote:
Hello,
I cloned a workflow on my local Galaxy installation, renamed it, made some
edits and pressed save. It has been saving now
,
misc_blurb = hda.blurb )
Would it be helpful if I submit a pull request for this? Cause I was
wondering if for changes so simple as this one, a pull request from a
third party introduces more overhead than help.
On Tue, May 1, 2012 at 3:05 PM, Dannon Baker dannonba...@me.com wrote
I'll take care of it. Thanks for reminding me about the TODO!
On May 1, 2012, at 10:03 AM, Dannon Baker dannonba...@me.com wrote:
On May 1, 2012, at 9:51 AM, Peter Cock wrote:
I'm a little confused about tasks.py vs drmaa.py but that TODO
comment looks pertinent. Is that the problem here
Sure, good idea. I'll tie it in.
-Dannon
On May 1, 2012, at 3:03 PM, Carlos Borroto wrote:
Hi,
Recently Full Path display was added as an option. I was wondering
if this information could also be available when accessing a dataset
information through the API.
Thanks,
Carlos
Hi Sarah,
You should be able to do this with the -r option of the clone command, so: `hg
clone -r b258de1e6cea https://bitbucket.org/galaxy/galaxy-dist`.
-Dannon
On Apr 25, 2012, at 3:08 AM, Sarah Maman wrote:
Hello,
I would like to get source code with Mercurial of this tarballs :
Hi Dave,
The problem here is that the galaxy update failed to merge a change to run.sh
because of minor customizations it has. We'll have a long term fix out for
this in the next week, but for now what you can do is ssh in to your instance
and update run.sh yourself prior to restarting
In changeset 7013:dae7eefe2f71 I added the full file path to the dataset View
Details page. Galaxy administrators will always see this, and if you set
expose_dataset_path to True in your universe_wsgi.ini, users will see it as
well. Hopefully that's what you're looking for, but let me know if
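The setting itself is a single line (shown here as a sketch of the universe_wsgi.ini entry):

```ini
# In universe_wsgi.ini: show the on-disk file path on the dataset
# View Details page for all users (admins always see it)
expose_dataset_path = True
```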
Josh,
Check out the cleanup_job setting in universe_wsgi.ini(and included below). It
sounds like 'cleanup_job = onsuccess' is exactly what you're looking for.
-Dannon
# Clean up various bits of jobs left on the filesystem after completion. These
# bits include the job working directory,
Hi Frank,
This should be resolved as of changeset 7057:08fbfeaaf3e1. Thanks!
-Dannon
On Mar 27, 2012, at 4:37 AM, Frank Sørensen wrote:
Hi Dannon,
Unfortunately our server is (still) behind a firewall, so I can't share
anything, but I hope you can reproduce the error from the following
Hi Danny,
For moving workflows from one instance to another you'll want to click on
Download or Export in the workflow context menu and use the URL for
Importing to Another Galaxy. It looks something like this: (note the
for_direct_import, that's how you'll know you have the right link
On Apr 18, 2012, at 8:44 AM, Frank Sørensen wrote:
- Should I pull from galaxy-central to get this update (7057:08fbfeaaf3e1)?
- If so, could you please tell me how I do that?
Sure, you can pull from galaxy-central by issuing a manual pull with source
from your galaxy-dist directory:
hg
On Mar 26, 2012, at 12:18 PM, Shantanu Pavgi wrote:
{{{
TimeoutError: QueuePool limit of size 40 overflow 50 reached, connection
timed out, timeout 30
}}}
These limits are not reached for regular (non-workflow) galaxy jobs. Any help
on optimum values for these settings or performance
Mo,
I just tested one of my amazon keys with PuTTYgen and the default settings and
it worked. Can you verify the contents of the .pem file you're using for me?
On windows I'd open it in http://notepad-plus-plus.org/ or a similar utility to
see it correctly.
The file should look something
Robert,
It sounds like you're experiencing two separate problems to me. Let's
isolate the problem to just the FastQC tool install first, and move on from
there. There is a FastQC tool in your toolbar, and execution results in the
error you describe in the previous email of 'Fastq failed.
On Apr 10, 2012, at 2:42 PM, Daniel Patrick Sullivan wrote:
Is this something that somebody could possibly comment on? Should I
just try to install a more recent version of python? Thank you so
much for your help and guidance.
Hi Dan,
This past October we did officially deprecate
On Mon, Dec 5, 2011 at 6:32 PM, Dannon Baker dannonba...@me.com wrote:
Richard,
You're correct in that currently the workflow API affords no method for
runtime modification of tool parameters, other than inputs. Depending on
your needs, it might be feasible to have a few static
Could you share an offending workflow with me? Or is it any workflow? I've
not seen this behavior before, but it definitely shouldn't be random. So in
your situation the same workflow run multiple times will not consistently hide
the same datasets?
-Dannon
On Mar 23, 2012, at 6:05 AM,
Assuming everything is generated correctly for the html report, what you're
probably running into is Galaxy's html sanitization. In your Galaxy instance's
universe_wsgi.ini you can set the following option (to false) to disable this:
# Sanitize All HTML
# By default, all tool output served as
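The setting being described is a one-line change (hedged sketch; only disable it if you trust all tool authors on your instance):

```ini
# In universe_wsgi.ini: disable HTML sanitization of tool output
sanitize_all_html = False
```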
It's in the [app:main] section. See the universe_wsgi.ini.sample file for the
exact position -- this file will always be up-to-date with the initial
configuration options that might have shown up as you've updated your galaxy
instance.
-Dannon
On Mar 22, 2012, at 7:50 PM, Jose Navas wrote:
Liisa,
I'm not able to reproduce this locally with a fresh galaxy-dist. Is there
anything unique about your workflows or configuration here? And, this might be
a long shot, but do you frequently use tags with your workflows? There was an
issue that I've fixed with this recently that would
Hi Thon,
Thanks for reporting this. I see what the problem is here at least for the
clone duplication, and I've committed a fix in 6833:e8e361707865 that will
affect all workflows going forward.
Unfortunately, there isn't a complete solution for fixing the extra tags. The
problem was that
Ed,
The top bar is defined in templates/webapps/galaxy/base_panels.mako. To create
a new masthead tab, you would have to edit that file in addition to creating
the relevant controller methods for the functionality you wish to add.
-Dannon
On Mar 20, 2012, at 8:32 PM, Edward Hills wrote:
Just one extra thought on this -- if you leave your instance up all the time it
may be worth looking into having a reserved micro instance up as the front end
(cheap, or free, with your intro tier) with SGE submission disabled. Then,
enable autoscaling (max 1) of m1.large/xlarge instances.
Yes, see the table 'job'. You may or may not find this helpful, but here's a
big database schema from our wiki:
http://wiki.g2.bx.psu.edu/DataModel?action=AttachFile&do=view&target=galaxy_schema.png
-Dannon
On Mar 9, 2012, at 8:00 AM, christin weinberg wrote:
Hi,
thanks for the answer. I
This was addressed in 6788:e58a87c91bc4. The reason for the initial change
that's causing these display issues was to eliminate potential XSS
vulnerabilities. There's now a configuration option (sanitize_all_html, which
is True by default) for local instances where you can disable the extra
This is definitely something we're looking to implement in Galaxy. I started
working on a proof of concept a while back for boxing workflows up as tools,
but I have not had a chance to finish it yet.
What Thon suggests in terms of a simple copy/paste is interesting, and probably
a far simpler
John,
I'm still working on figuring out who created the AMI you're running across (it
isn't under the main Galaxy cloud account, see the '072133624695' in the name
string), but for future reference we'll always keep the recommended AMI to use
listed on usegalaxy.org/cloud. At this point, it's
because it's Javascript we're inserting.
I understand the security concerns though. Any advice on a more secure way to
allow particular content? Perhaps a whitelist of allowed scripts?
On Wed, Feb 1, 2012 at 23:58, Cory Spencer cspen...@sprocket.org wrote:
On 2012-02-01, at 1:33 PM, Dannon
Hi L-A,
This exists. See your universe_wsgi for the following lines:
# Optional list of email addresses of API users who can make calls on behalf of
# other users
#api_allow_run_as = None
And then with any api call from a user in the allow list above you can add an
extra parameter to the
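Sketching that (the `run_as` parameter name and the user ID below are assumptions to verify against your Galaxy version), an admin-keyed API call impersonating another user would add one extra key to the payload:

```python
import json

def as_user(payload, user_id):
    """Return a copy of an API payload that asks Galaxy to run the call
    on behalf of another user. Requires the admin's API key to belong to
    an address listed in api_allow_run_as. Parameter name is assumed."""
    impersonated = dict(payload)
    impersonated["run_as"] = user_id  # hypothetical encoded user ID
    return impersonated

body = json.dumps(as_user({"name": "my history"}, "1cd8e2f6b131e891"))
```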
How did the user share the workflow with you, via link or directly with your
user? And which import option are you using? Does Clone in the workflow
context menu work?
-Dannon
On Feb 24, 2012, at 3:25 PM, Iry Witham wrote:
I have a Galaxy user who has created a workflow from a history and
Peter has it right in that we need to do this internally to ensure
functionality across a range of job runners. A side benefit is that it gives
us direct access to the tasks so that we can eventually do interesting things
with scheduling, resubmission, feedback, etc. If the overhead looks to
On Feb 16, 2012, at 5:15 AM, Peter Cock wrote:
On Wed, Feb 15, 2012 at 6:07 PM, Dannon Baker dannonba...@me.com wrote:
Main still runs these jobs in the standard non-split fashion, and as a
resource that is occasionally saturated (and thus doesn't necessarily have
extra resources
Very cool, I'll check it out! The addition of the JSON files is indeed very
new and was likely unfinished with respect to the base splitter.
-Dannon
On Feb 16, 2012, at 1:24 PM, Peter Cock wrote:
On Thu, Feb 16, 2012 at 4:28 PM, Peter Cock p.j.a.c...@googlemail.com wrote:
Hi Dan,
I think
Hi,
This is not a tool from our distribution. Please send questions about local
instances, core galaxy development, or your own tool development to the
galaxy-dev@lists.bx.psu.edu mailing list and *not* this galaxy-user mailing
list.
That said, if you could provide more information about
It's definitely an experimental feature at this point, and there's no wiki, but
basic support for breaking jobs into tasks does exist. It needs a lot more
work and can go in a few different directions to make it better, but check out
the wrappers with parallelism defined, and enable
Are those four tools being used on Galaxy Main already with this basic
parallelism in place? Main still runs these jobs in the standard non-split
fashion, and as a resource that is occasionally saturated (and thus doesn't
necessarily have extra resources to parallelize to) will probably continue
Hi Paul,
Thanks for this suggestion, it would definitely make sense to create a
reference to the actual workflow inputs in the new history. I'll see what I
can do!
-Dannon
On Feb 14, 2012, at 11:05 AM, Paul Gordon wrote:
Hi all,
I have noticed that when I run a workflow with output to a