Hello Dmitry,
This use case isn't really addressed by Galaxy currently.
There is not a generic way of grabbing the user's credentials
like this or producing URLs from inside the tool. This depends on a
lot of different things - how Galaxy's proxy is configured, etc... so
in many (all?) cases
Hi Amit,
You may have solved this already, but if not, there are (at least) two
good options, with the second being better if your data is already on the
same computer that Galaxy is running on. A file of 2 GB or over will never
load through the browser - the same rule applies to both a local and the publ
Hi Jennifer,
Active and deleted histories can be permanently deleted from the
History pane via "Options -> Saved Histories", then at the top of the middle
panel click on "Advanced Search", then click on "status: all". Check the
box for the histories to be discarded and then click on the butt
I need to clear space from my usage quota in my Galaxy Main user
account, but I deleted 2 histories rather than permanently deleting them.
How can I access them again so that I may permanently delete them to clear
the space?
Thank you,
Jennifer
Hi Sally,
I can point you to a few resources that explain what we are working on,
what we plan to work on, and how the architecture is put together. The
distinction between all possible development directions and what is
prioritized to fit the larger project goals will hopefully be clearer after a
rev
Hi Joshua,
If you are still having issues, you'll want to contact the folks running
that specific instance for support. They'll know the state of the
server. See this link, under the section "User Support" for their google
group contact:
https://wiki.galaxyproject.org/PublicGalaxyServers#Cist
It looks like this is already the case, but only documented in the code
(lib/galaxy/webapps/galaxy/api/workflows.py):
def _update_step_parameters(step, param_map):
    """
    Update ``step`` parameters based on the user-provided ``param_map``
    dict.

    ``param_map`` should be structured as follo
The current method for supplying parameters to tools using the REST API for
workflows is fairly broken. The structure is as follows:
"parameters":{
"tool_1": {
"param":"param_name",
"value":"param value"
},
"tool_2": {
"param":"param_name",
"value":"par
If the eggs are fetched as it runs the tests, you may not need to do it up
front explicitly. It's worth noting, though, that this is how Galaxy
normally runs under run.sh (checking for and fetching eggs up front --
except when using the --stop-daemon argument).
On Mon, May 5, 2014 at 11:15 AM, Peter
On Mon, May 5, 2014 at 3:47 PM, Dannon Baker wrote:
> On Mon, May 5, 2014 at 10:32 AM, Peter Cock
> wrote:
>>
>> However, while it pre-fetched PyYAML-3.10-py2.7-linux-x86_64-ucs4.egg
>> the old error persists. Is there a case-sensitivity issue here (PyYAML
>> versus pyyaml)?
>
> Yep, I've adjuste
On Mon, May 5, 2014 at 10:32 AM, Peter Cock wrote:
> However, while it pre-fetched PyYAML-3.10-py2.7-linux-x86_64-ucs4.egg
> the old error persists. Is there a case-sensitivity issue here (PyYAML
> versus pyyaml)?
>
Yep, I've adjusted that now and my guess is it'll work next time.
Let me know if
On Mon, May 5, 2014 at 3:30 PM, Geert Vandeweyer
wrote:
> Hi,
>
> I'm having issues uploading bam files to our galaxy server. It fails for
> both FTP and browser upload. Upload.py is executed on the local job runner.
>
> The error is:
>
> Traceback (most recent call last):
> File "/galaxy/galaxy
Change made:
https://github.com/peterjc/pico_galaxy/commit/19d24c71b5c24cb908907f25e5d992627165736c
https://travis-ci.org/peterjc/pico_galaxy/builds/24455410
That worked in one sense:
$ python scripts/fetch_eggs.py
Fetched http://eggs.galaxyproject.org/Mako/Mako-0.4.1-py2.7.egg
Fetched
http://eg
Hi,
I'm having issues uploading bam files to our galaxy server. It fails for
both FTP and browser upload. Upload.py is executed on the local job runner.
The error is:
Traceback (most recent call last):
File "/galaxy/galaxy-dist/tools/data_source/upload.py", line 390, in
__main__()
Fi
Can you add a `python scripts/fetch_eggs.py` to your travis config just
after the stop-daemon? That should parse eggs.ini and fetch all of the
eggs.
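In a .travis.yml that mirrors this, the fetch step would slot in roughly like the following sketch (the ordering is the suggestion above; the phase name and paths assume a galaxy-dist checkout as the working directory and may differ in your setup):

```yaml
# Sketch only - ordering per the suggestion above; paths assume a
# galaxy-dist checkout as the working directory.
install:
  - ./run.sh --stop-daemon         # set up config files without starting Galaxy
  - python scripts/fetch_eggs.py   # parse eggs.ini and pre-fetch all eggs
```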
On Mon, May 5, 2014 at 9:51 AM, Peter Cock wrote:
> On Mon, May 5, 2014 at 2:30 PM, Dannon Baker
> wrote:
> > Hey Peter,
> >
> > Just looking at
Are you using usegalaxy.org?
This generally indicates a temporary error with one of our cluster nodes (if
using usegalaxy.org), or a misconfiguration of the job management system if
you're seeing this on a local cluster. If you're using usegalaxy.org I'd
recommend simply retrying the upload.
-Dan
On Mon, May 5, 2014 at 2:30 PM, Dannon Baker wrote:
> Hey Peter,
>
> Just looking at the travis job, it looks like there's not a fetch_eggs step
> at the start, but rather that it's fetching as they're require()'d. Is that
> correct?
Yes, https://github.com/peterjc/pico_galaxy/blob/master/.travi
Hey Weiyan,
You can't (currently) specify disk destinations per user, but you can
certainly use multiple disks in a pool. For one example, see the sample
object store configuration here (and also in your galaxy distribution):
https://bitbucket.org/galaxy/galaxy-central/src/92519a9bfa32a42ce47a63f
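For illustration, a distributed object store pooling two disks looks roughly like this sketch (the ids, weights, and paths are hypothetical; the sample configuration linked above is the authoritative reference):

```xml
<?xml version="1.0"?>
<!-- Hypothetical sketch: pool two disks, weighting disk1 twice as heavily
     when placing new datasets. Adjust ids, weights, and paths to your mounts. -->
<object_store type="distributed">
    <backends>
        <backend id="disk1" type="disk" weight="2">
            <files_dir path="/mnt/disk1/galaxy/files"/>
        </backend>
        <backend id="disk2" type="disk" weight="1">
            <files_dir path="/mnt/disk2/galaxy/files"/>
        </backend>
    </backends>
</object_store>
```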
Hello!
I tried to upload a file to Galaxy but received the error "Job output
not returned from cluster". This is the first time I've gotten it; file
upload has always worked in the past.
What can I do?
Thanks for help, Gerwin
--
Gerwin Heller, Ph.D.
Medical University of Vienna
Department of Medicine I
Galaxy developers:
We can set file_path in universe_wsgi.ini to specify where saved Galaxy
results go. But is it possible to specify a file_path for each registered
Galaxy user, so that multiple hard disks can be used effectively?
Thank you very much for any help.
Regards
Hey Peter,
Just looking at the travis job, it looks like there's not a fetch_eggs step
at the start, but rather that it's fetching as they're require()'d. Is
that correct? In any event, I'm changing the order of those requires
in 13302:92519a9bfa32, which may resolve your issue. They work fine
Hi all,
Recently my TravisCI tests have started failing during the tool
functional tests due to what looks like a missing dependency:
e.g. https://travis-ci.org/peterjc/pico_galaxy/jobs/2682
...
galaxy.eggs DEBUG 2014-05-05 11:26:47,387 Fetched
http://eggs.galaxyproject.org/boto/boto-2.27.0-p