Bitbucket will only show the first 7 characters of a changeset hash when browsing through commits.
Try this:
https://bitbucket.org/galaxy/galaxy-central/changeset/5013377e0bf7a656ea593098f1d1b38f3d6928c6
-Scott
- Original Message -
Hi,
Can you please send the entire link to access this changeset? I
Make sure that you are pulling from galaxy-central and not galaxy-dist.
I believe something like:
hg pull -r rev# -u http://bitbucket.org/galaxy/galaxy-central
should work. The problem may be that you are pointing to galaxy-dist by
default, which will not know about that revision (yet).
-Scott
It should have been fixed yesterday - if a job is running locally and the user
cancels it, then the
job will be stopped immediately.
Thanks, Juan!
-Scott
- Original Message -
Ok, thanks for the response!
On 6 November 2012 12:50, Dannon Baker dannonba...@me.com wrote:
Hi
That sounds perfectly reasonable. I'll make the change.
Thanks, Juan!
-Scott
- Original Message -
Hi galaxy devs,
We are in the final stages of deploying a new Galaxy instance. We
downloaded the galaxy-dist version approximately a month ago. We realized
that, when launching a job and
Well, lib/galaxy/datatypes/registry.py hasn't changed in over two months. Out
of curiosity,
I tried downloading a fresh copy of galaxy-central (without any initial
configuration or database,
either) and didn't encounter the same problem. I even tried messing with that
line of code and
Hi Tom-
Are you still having this problem? If so, then could you try your request again
and report what you're seeing in the console's logging output for that request?
If not, then knowing what you did to get around the problem would be
appreciated.
-Scott
- Original
Ok - it's in. Thanks again! I will add a to-do item to put output-merge messages
into stdout so that they're more visible.
-Scott
- Original Message -
Thanks, Peter! I'll get to it this afternoon EDT.
-Scott
- Original Message -
On Thu, Oct 18, 2012 at 5:19 PM, Scott
This was fixed in galaxy-central last week and should be part of
galaxy-dist. What happened is that controllers were migrated and a
couple of modules were left out. See changeset 210c39f4bf7f or later,
which I believe was fixed on October 4th.
In particular, the problem you're seeing had to do
Excuse me - I meant to say that this will be part of the upcoming
galaxy-dist.
-Scott
- Original Message -
This was fixed in galaxy-central last week and should be part of
galaxy-dist. What happened is that controllers were migrated and a
couple of modules were left out. See
Hey Peter-
Thanks - I'll look into it. If you're able to reproduce the problem easily
and wouldn't mind crafting a pull request, then it would be much
appreciated. Otherwise I'll put this on my to-do list to be done soon.
I or someone else may want to revisit the exception handling to prevent
That's been fixed in a galaxy-central commit last night; the webapps were moved
around,
and this didn't point to the new webapps directory. See changeset 210c39f4bf7f
or later.
In particular, the problem you're seeing had to do with the
templates/library/common/library_common.mako
pointing
Thanks, Peter! Those are good suggestions. I'll look into it soon.
-Scott
- Original Message -
Hi all,
I've been running into some sporadic errors on our Cluster while
using the latest development Galaxy, and the error handling has
made this quite difficult to diagnose.
For a
Just to be clear, whenever a tool's stdio rules are triggered
there should be messages prepended to stdout and/or stderr.
If a tool defines a stdio regular expression and the regular
expression matches on stdout, a message should be prepended to
stdout. However, these will be empty if tools
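For reference, a stdio regex rule of the kind described here looks roughly like the following in a tool's XML config (the match pattern and description below are made-up placeholders, not from any real tool):

```xml
<stdio>
    <!-- If "Error:" appears on stdout, fail the job and prepend
         the description as a message on the tool's stdout. -->
    <regex match="Error:" source="stdout" level="fatal"
           description="Tool reported an error" />
</stdio>
```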
Maybe a better question is why you would want two separate versions
of galaxy running to begin with. There could be another way to solve
your problem.
-Scott
- Original Message -
The main problem with different versions of galaxy is that the
database schema that they expect (as
Good - after taking a step back I thought it seemed better to leave everything
null
rather than guess at an exit code. Does anyone have any objections to adding an
exit code column with null values for previously-executed jobs?
-Scott
- Original Message -
Or allow the error code
You should be able to use the following:
<stdio>
    <exit_code range="1:" />
    <exit_code range=":0" />
</stdio>
Square brackets are not supported in the range.
However, note that many shells will not return an
exit code that is less than 0. Instead, the lowest
8 bits (i.e., value mod 256) will be
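The modulo-256 behavior described above can be demonstrated directly (a minimal sketch in Python, not Galaxy code):

```python
import subprocess
import sys

# A child process that calls exit(-1); the OS reports the
# wait status modulo 256, so the parent sees 255, not -1.
proc = subprocess.run([sys.executable, "-c", "import sys; sys.exit(-1)"])
print(proc.returncode)  # 255 on Linux and OS X
```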
The wiki markup was also fixed. The // is not supported.
-Scott
- Original Message -
My apologies:
<stdio>
    <exit_code range="1:" />
    <exit_code range=":-1" />
</stdio>
Again, note that the second range will not be useful as OS X and
Linux (or at least the distros I've used) will never
I'll check it out. Thanks.
- Original Message -
Hi all (and in particular, Scott),
I've just updated my development server and found the following
error when running jobs on our SGE cluster via DRMAA:
galaxy.jobs.runners.drmaa ERROR 2012-09-18 09:43:20,698 Job wrapper
finish
I have to admit that I'm a little confused as to why you would
be getting this error at all - the job variable is introduced
at line 298 in the same file, and it's used as the last variable
to check_tool_output in the changeset you pointed to.
(Also, thanks for pointing to it - that made
code
either (line ~1045), while JobWrapper's does around line 315.
cheers,
jorrit
On 09/18/2012 03:55 PM, Scott McManus wrote:
I have to admit that I'm a little confused as to why you would
be getting this error at all - the job variable is introduced
at line 298 in the same file
Ok - that change was made. The difference is that the change
is applied to the task instead of the job. It's in changeset
7713:bfd10aa67c78, and it ran successfully in my environments
on local, pbs, and drmaa runners. Let me know if there are
any problems.
Thanks again for your patience.
Sorry - that's changeset 7714:3f12146d6d81
-Scott
- Original Message -
Ok - that change was made. The difference is that the change
is applied to the task instead of the job. It's in changeset
7713:bfd10aa67c78, and it ran successfully in my environments
on local, pbs, and drmaa
I found that the problem happened when I used the UI for editing text. What I did
instead was to edit the wiki in text-based mode. If you have set your
preferences to use the UI editor by default, then you will need to switch
back to text-based editing; I
I haven't worked with an LSF cluster before, but it looks like it has DRMAA
bindings. That means that you could try setting up a DRMAA Galaxy runner:
http://wiki.g2.bx.psu.edu/Admin/Config/Performance/Cluster
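As a rough sketch, enabling the DRMAA runner in universe_wsgi.ini looked something like the following at the time (exact option names depended on the Galaxy version, so treat these as assumptions and check the wiki page above):

```ini
; universe_wsgi.ini -- job runner settings (names from memory)
start_job_runners = drmaa
default_cluster_job_runner = drmaa:///
; Point $DRMAA_LIBRARY_PATH at the cluster's libdrmaa before starting Galaxy.
```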
-Scott
- Original Message -
Hello,
Could you please tell me if it's
It's the last rewrite rule that's causing the infinite recursion:
RewriteRule ^/galaxy(.*) http://localhost:8080$1 [P]
Check out this link:
http://httpd.apache.org/docs/current/mod/mod_rewrite.html#rewriterule
Look for the table of given rules and resulting substitutions - the proxying
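One common way to break such a loop (a sketch, assuming a standard Apache mod_rewrite setup; verify against the docs linked above) is to guard the rule so it does not fire again on an already-rewritten request:

```apache
# Proxy /galaxy requests to the Galaxy server, but skip requests
# that mod_rewrite has already internally rewritten (REDIRECT_STATUS
# is set on the re-entrant pass).
RewriteCond %{ENV:REDIRECT_STATUS} ^$
RewriteRule ^/galaxy(.*) http://localhost:8080$1 [P,L]
```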
I'm writing documentation now - I'll have something this afternoon.
Sorry for the delay.
-Scott
- Original Message -
No, I call the executable directly from the xml. It kept failing
although it seemed to finish the job and I realized that on success
the executable prints a summary
Please see http://wiki.g2.bx.psu.edu/Admin/Tools/Tool%20Config%20Syntax .
There is a section for stdio, regex, and exit_code tag sets. The
documentation
applies to the latest galaxy-dist, though most of what's mentioned (aside from
updating
stdout and stderr with warning messages) is
Hey Sebastian-
It may help to consider other pieces aside from compute nodes
that you will need, such as nodes for proxies and databases,
networking gear (such as switches and cables), and so on.
http://usegalaxy.org/production has some details, and there are
high-level pieces explained at
Ok - that makes sense. So the from_work_dir attribute is part of the tool's
data element under outputs, and Jeremy fixed a timing issue.
Thanks, Jeremy!
-Scott
- Original Message -
Iry,
This is probably an issue with using the from_work_dir attribute.
Prior to changeset
Clare-
Have you resolved this yet? I had seen similar issues with respect
to pycrypto before I used virtualenv, but I don't believe that I saw
it afterwards. I'll take a look and see if I can reproduce it.
I only have obvious things to recommend right now. For example,
make sure that the
Hi Sarah-
I'm sure others have experiences that could help, so I hope they will
chime in. Nate put together some slides two years ago on production setups,
including some architecture for the server main.g2.bx.psu.edu:
Hi Dan-
Thanks again for your great hospitality at UCI!
It looks like you've done most of the sane things, and a quick
check suggests that the PBS egg is requesting protocol version 1
while the server supports protocol version 2. One possibility
is that there is a protocol mismatch, which could
I've been told by Galaxy folks to keep this on galaxy-dev.
My apologies for any confusion.
-Scott
- Original Message -
Hi Dan-
Thanks again for your great hospitality at UCI!
It looks like you've done most of the sane things, and a quick
check looks like the PBS egg is
I haven't been able to reproduce this yet with the instructions you
gave, but I'm not using the same environment. Can you give me an idea
of what tools you're using outside of SciPy/NumPy/Enthought stuff?
There is the possibility that the virtualenv.py script isn't being
sourced correctly. We
Sorry - I fouled up and should have kept this on galaxy-dev.
-Scott
- Original Message -
I'm forwarding this to galaxy-user instead of galaxy-dev.
-Scott
- Forwarded Message -
From: Scott McManus scottmcma...@gatech.edu
To: Sarah Maman sarah.ma...@toulouse.inra.fr
Cc
A suggested change will be coming down the pipe shortly, but it's good to
hear that it will be useful!
-Scott
- Original Message -
On Tue, May 1, 2012 at 3:46 PM, Dannon Baker dannonba...@me.com
wrote:
I'll take care of it. Thanks for reminding me about the TODO!
This seems to