Hi,
Thanks a lot, it actually helped. It's not exactly as straightforward in
drmaa.py, but somehow I managed.
However, that was not the problem. For some reason, the user needs to
write files from the node to job_working_directory/00X// and the
latter is not world-writable. I had to
Hi everyone,
I'm (still) having issues with running jobs as the real user on our PBS
Pro cluster. When I try running a job, it ends up in error state and
displays the following error message:
touch: cannot touch `/home/galaxy/.drmaa/9167860.pbs-master2.embl.de.started':
Permission denied
Hi everyone,
I just wanted to ask how the extra_file_path is handled in the case of a
job running as the real user, since the file_path is only writable by the
galaxy user. Any clue?
Thanks,
L-A
___
Please keep all replies on the list by using
At first we thought it could be an ssh issue but submitting jobs and
getting the output back isn't a problem when I do it from my personal
user manually, so it's really related to Galaxy. We're using PBS Pro btw.
And I'm still at a loss... :(
L-A
On 23/04/2012 15:42, zhengqiu cai wrote:
I
Hello everyone,
I'm still trying to set up the job submission as the real user, and I
get a mysterious error. The job obviously runs somewhere and when it
ends it is in error state and displays the following message: Job
output not returned from cluster
In the Galaxy log I have the
Hmm... for some reason LD_LIBRARY_PATH was ignored, but it looks like
it's working fine when I set the lib path in a .conf file in
/etc/ld.so.conf.d/
Hopefully it won't break again :)
Best,
L-A
On 19/04/2012 17:19, Louise-Amélie Schmitt wrote:
Hi everyone,
I'm currently trying to set up
Hi everyone,
I'm currently trying to set up our local Galaxy so it can run jobs as
the real user. I followed the documentation and set the galaxy user as a
sudoer. However, I get an error message whenever I'm trying to run a job:
galaxy.jobs.runners.drmaa ERROR 2012-04-19 14:57:48,376
Hi everyone,
I was wondering if there was a way to get the history name of an input
dataset within a tool (i.e. in the cheetah between the command tags),
is there?
Thanks,
L-A
Ok, after a lot of testing and searching in the libs, I found the solution:
$inputdataset.dataset.history.name
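For context, here is a minimal sketch of how that expression might sit in a tool's command block. The tool id and parameter names below are made up for illustration; only the `$inputdataset.dataset.history.name` expression comes from the solution above:

```xml
<tool id="history_name_demo" name="History name demo">
  <!-- 'inputdataset' is simply the name given to the data param below -->
  <command>
    echo "Input came from history: ${inputdataset.dataset.history.name}" &gt; $output
  </command>
  <inputs>
    <param name="inputdataset" type="data" format="tabular" label="Input dataset"/>
  </inputs>
  <outputs>
    <data name="output" format="txt"/>
  </outputs>
</tool>
```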
On 18/04/2012 10:22, Louise-Amélie Schmitt wrote:
Hi everyone,
I was wondering if there was a way to get the history name of an input
dataset within a tool (i.e. in the cheetah
I love the improvements you recently made to the web interface layout in
galaxy-central, it's really neat! Collapsing the margins and making
everything smoother makes the interface much easier to read.
Thanks for the hard work, as always,
L-A
Hi,
I'm having the same issue, has it been fixed since then?
Thanks,
L-A
On 07/11/2011 21:43, Nate Coraor wrote:
On Nov 4, 2011, at 1:11 PM, Carlos Borroto wrote:
Hi,
Reading a little more about this problem, I see Galaxy uses python
tempfile library
Hi,
Try:
--no_overlap $singleOrPair.no_overlap
Best,
L-A
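A minimal sketch of the conditional block that would make `$singleOrPair.no_overlap` resolve — the names here are guesses based on the snippet quoted below, not the poster's actual tool file:

```xml
<conditional name="singleOrPair">
  <param name="readType" type="select" label="Read type">
    <option value="single">Single-end</option>
    <option value="paired">Paired-end</option>
  </param>
  <!-- Only the paired-end branch exposes the overlap option -->
  <when value="paired">
    <param name="no_overlap" type="boolean" label="Ignore overlapping pairs"/>
  </when>
  <when value="single"/>
</conditional>
```

The command line would then reference the value as `--no_overlap $singleOrPair.no_overlap` inside the `#if $singleOrPair.readType == "paired"` branch.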
On 28/03/2012 18:59, cai cai wrote:
Hi All,
I am trying to add a tool to Galaxy, here is my xml configuration:
## Set params based on whether reads are single-end or paired.
#if $singleOrPair.readType ==
On 27/03/2012 11:03, Louise-Amélie Schmitt wrote:
On 26/03/2012 16:13, Nate Coraor wrote:
On Mar 26, 2012, at 5:11 AM, Louise-Amélie Schmitt wrote:
Hello everyone,
I wanted to start the drmaa job runner and followed the instructions
in the wiki, but I have this error message when I
Hi everyone,
The drmaa runner literally floods the Galaxy output with its own output
(dozens of lines every second) which makes the Galaxy log impossible to
read. Would there be a way to separate the two of them? I tried to look
into the code but I'm not fully sure about what exactly produces
Hello everyone,
I wanted to start the drmaa job runner and followed the instructions in
the wiki, but I have this error message when I start Galaxy:
galaxy.jobs ERROR 2012-03-23 15:28:49,845 Job runner is not loadable:
galaxy.jobs.runners.drmaa
Traceback (most recent call last):
File
Hi Nate, and thanks for the reply
- We saw the outputs_to_working_directory option in the .ini file, but it only
concerns output files, is there a way to make a local copy of all the input
files in the job working directory? (including indices)
Hi L-A,
Unfortunately no. This option was only
Hello,
We're currently trying to switch to a big cluster but we have a lot of
doubts and questions, especially since the I/O is a serious issue for
our NFS.
- We saw the outputs_to_working_directory option in the .ini file, but
it only concerns output files, is there a way to make a local
Hello everyone!
I just wanted to know if the user impersonation will be available
through the API someday :) It could be veeery useful for triggering
automatic QA on the data without having to share the resulting histories
afterwards.
Best regards,
L-A
to save space and time?
I really do appreciate the help -- thanks!
Daniel
From: galaxy-dev-boun...@lists.bx.psu.edu
[mailto:galaxy-dev-boun...@lists.bx.psu.edu] On Behalf Of
Louise-Amélie Schmitt
Sent: Friday, February 24, 2012 12:23 AM
To: galaxy-dev@lists.bx.psu.edu
Subject: Re: [galaxy
Hello Daniel,
I had similar issues when setting up our own. The POST size limit is not
set by Galaxy. If I remember correctly it's in your Apache config,
though I don't remember exactly where it was.
For the .loc files, here is an example:
dm3-btdm3D.melanogaster 3
,
but pbs_python links against libtorque, which is part of the torque client,
which must be installed somewhere on the local system.
Ok, thanks a lot!
Best,
L-A
Best,
L-A
On 30/01/2012 18:20, Nate Coraor wrote:
On Jan 30, 2012, at 12:07 PM, Louise-Amélie Schmitt wrote:
Hi Nate,
Thanks
Hi Tanguy,
You can set that at the very bottom of your universe_wsgi.ini file. I
did it myself with Torque to set a different behavior for a couple of
tools, it works fine. The related Wiki page is here:
http://wiki.g2.bx.psu.edu/Admin/Config/Performance/Cluster :)
Best,
L-A
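For reference, the per-tool override section at the bottom of universe_wsgi.ini looked roughly like this in that era of Galaxy — the tool ids and queue names below are examples, not a prescription:

```ini
[galaxy:tool_runners]
# Overrides for specific tools; everything else uses default_cluster_job_runner.
bowtie_wrapper = pbs:///batch_multicore/
bwa_wrapper = pbs:///batch_multicore/
upload1 = local:///
```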
On 17/02/2012
are normally
separate. For instance:
http://pkgs.org/download/torque-drmaa
If going from source, you can enable DRMAA compilation with --enable-drmaa; I
don't recall if that is on by default (I don't think it is).
chris
On Feb 17, 2012, at 3:49 AM, Louise-Amélie Schmitt wrote:
You don't
Hello,
I'm not sure this will fix the issue but you might have to use
-p $param_file.params
instead of
-p $params
in your command line.
Best,
L-A
On 06/02/2012 22:15, Jeffrey Long wrote:
Hi all,
I am having trouble with what looked to me at first to be a simple
syntactical issue with
Hello Dannon
Could it be possible to have the input dataset's display name appended
to the new history's name instead of plain numbers when the 'Send
results in a new history' option is checked?
This new feature is indeed very useful (thanks a million for it) but
the numbered suffixes
:
On Jan 30, 2012, at 12:07 PM, Louise-Amélie Schmitt wrote:
Hi Nate,
Thanks for the leads!
But setting DRMAA_LIBRARY_PATH means I'm in trouble since the libraries are on
machine B which is maintained by our IT dept. I cannot access them from machine
A.
Is it a desperate situation? Will it work if I
:
drmaa://[native_options]/
I'm a bit confused, I would have expected something like:
drmaa://[machine]/[native_options]/
like for TORQUE. Did I miss something?
Best,
L-A
On 19/01/2012 19:43, Nate Coraor wrote:
On Jan 16, 2012, at 5:22 AM, Louise-Amélie Schmitt wrote:
Hello,
We want to move
Hi Ross
Thanks a million, I hg pulled and it solved the problem!! Please push it to
galaxy-dist? ;)
I ran into another little bug though, but nothing serious, you might even
have already spotted it by now:
In lib/galaxy/web/framework/__init__.py, line 873 made the whole
thing crash due to a
Hello!
I'm running into an error when trying to download a composite dataset
from the history with the floppy disk icon. Here is the error message in
the logs:
galaxy.web.controllers.dataset ERROR 2012-01-18 16:32:10,324 Unable to
remove temporary library download archive and directory
Hello,
We want to move Galaxy's jobs from our small TORQUE local install to a
big cluster running PBS Pro.
In the universe_wsgi.ini, I changed the cluster address as follows:
default_cluster_job_runner = pbs:///
to:
default_cluster_job_runner = pbs://sub-master/clng_new/
where sub-master is
Hello,
I'm running into a strange issue which I cannot seem to find a solution
for. In every tool I created from scratch, the input dataset selector
does not only include the datasets corresponding to the file types I set
in the format= attribute in the param tag in my XML tool file, but
:
The only thing that sticks out to me is the 'export' format listed in inputs.
What are the children datatypes of export, or how is that set up? The filter
logic automatically includes all children datatypes of the specified formats.
-Dannon
On Dec 7, 2011, at 5:10 AM, Louise-Amélie Schmitt
Hello,
Since our local Galaxy is a little sluggish, we were wondering if it
could be related to broken or missing database indexes (we're using
PostgreSQL), so we would like to know how they are managed in Galaxy,
and if there is a way to restore them automatically.
Thanks,
L-A
Hello,
We're experiencing a major issue in the web application and don't know
where to look for a potential solution: Whichever link we click on it
always brings us back to the welcome page (galaxy/root).
There are a few exceptions to that:
- saved histories, saved datasets and api key in the
Thanks a lot Ross, it works fine now.
Best,
L-A
On 30/11/2011 11:34, Ross wrote:
Louise, I had the same problem.
After
hg revert -r 3ee9430186fb
everything was back to normal for me.
I hope this helps..
2011/11/30 Louise-Amélie Schmitt louise-amelie.schm...@embl.de
?
require_login = True
On Nov 30, 2011, at 5:34 AM, Ross wrote:
Louise, I had the same problem.
After
hg revert -r 3ee9430186fb
everything was back to normal for me.
I hope this helps..
2011/11/30 Louise-Amélie Schmitt louise-amelie.schm...@embl.de
Hello
, 2011, at 10:34 AM, Louise-Amélie Schmitt wrote:
Hello Greg
We tried to run the toolshed like you explained (thanks a lot for the quick
answer btw), it starts fine, but when we try to access it on the web, we get
this error in the browser:
Server Error
An error occurred. See the error logs
,modencode_worm,modencode_fly,yeast_sgd
# GeneTrack servers: tool-data/shared/genetrack/genetrack_sites.txt
genetrack_display_sites = main,test
On Nov 25, 2011, at 3:01 AM, Louise-Amélie Schmitt wrote:
Hello Greg
Please find attached the file you asked for. Just in case, I also
sent you
Hello Greg
We tried to run the toolshed like you explained (thanks a lot for the
quick answer btw), it starts fine, but when we try to access it on the
web, we get this error in the browser:
Server Error
An error occurred. See the error logs for more information. (Turn debug
on to display
On 29/08/2011 18:54, Nate Coraor wrote:
Louise-Amélie Schmitt wrote:
On 29/08/2011 15:52, Nate Coraor wrote:
Louise-Amélie Schmitt wrote:
Hello everyone,
These questions are a bit silly but I'm really ignorant when it
comes to security. Sorry about that.
Why use API keys instead
I just changed it and ran into an error, so I modified the line and it
now works fine:
$__app__.security.encode_id( '%s' % $input1.id )
What was the 'file.' originally for?
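As background on what that template call produces: encode_id is a keyed, reversible mapping from integer database ids to opaque strings, so real row ids never appear in URLs. Here is a tiny self-contained sketch of the idea only — Galaxy's actual implementation uses its own cipher keyed by the id_secret config value, and SECRET below is a made-up stand-in:

```python
import hashlib

# Made-up stand-in for Galaxy's id_secret config value.
SECRET = b"example-id-secret"

def _keystream() -> bytes:
    # Fixed key stream derived from the secret (illustrative, not Galaxy's cipher).
    return hashlib.sha256(SECRET).digest()

def encode_id(obj_id: int) -> str:
    """Map an integer id to an opaque hex token, reversibly."""
    stream = _keystream()
    raw = str(obj_id).encode()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(raw)).hex()

def decode_id(token: str) -> int:
    """Invert encode_id."""
    stream = _keystream()
    raw = bytes.fromhex(token)
    return int(bytes(b ^ stream[i % len(stream)] for i, b in enumerate(raw)).decode())
```

Anyone holding the secret can decode the token back to the id; without it the token is just opaque hex.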
On 24/08/2011 15:00, Louise-Amélie Schmitt wrote:
Thanks a lot!
I found another way to do it but it is awfully more
On 30/08/2011 16:51, Nate Coraor wrote:
Louise-Amélie Schmitt wrote:
I just changed it and ran into an error, so I modified the line and
it now works fine:
$__app__.security.encode_id( '%s' % $input1.id )
What was the 'file.' originally for?
On the API side, a library or history's contents
On 30/08/2011 18:00, Nate Coraor wrote:
Louise-Amélie Schmitt wrote:
On 30/08/2011 16:51, Nate Coraor wrote:
Louise-Amélie Schmitt wrote:
I just changed it and ran into an error, so I modified the line and
it now works fine:
$__app__.security.encode_id( '%s' % $input1.id )
What
On 29/08/2011 15:52, Nate Coraor wrote:
Louise-Amélie Schmitt wrote:
Hello everyone,
These questions are a bit silly but I'm really ignorant when it
comes to security. Sorry about that.
Why use API keys instead of user names? Is it to prevent anyone
from figuring out who is behind
again (change true to false) since it didn't work
anymore.
Best,
L-A
On 15/06/2011 09:56, Louise-Amélie Schmitt wrote:
Hi, and sorry for the late reply
Here is the last pull: 5355:50e249442c5a
I'll try to be as concise as I can but I can send you
Thanks a lot!
I found another way to do it but it is awfully more complicated so I'll
change as soon as I have some time.
Best,
L-A
On 23/08/2011 19:49, Nate Coraor wrote:
Louise-Amélie Schmitt wrote:
Hi,
I would need to make a tool that can get the API dataset id out of
the input
# for your local install, that would also
be helpful for us to know.
Best,
Jen
Galaxy team
On 6/6/11 2:15 AM, Louise-Amélie Schmitt wrote:
Hi,
Since I haven't updated Galaxy for a while now I don't know if it was
actually fixed but I had issues with my default-selected checkboxes:
When I
since you could put
in all the details, or just write back and I can create a simple ticket.
Thanks!
Jen
Galaxy team
On 8/5/11 6:59 AM, Louise-Amélie Schmitt wrote:
Hello,
Just a quick message to ask if you had the time to work on passing
parameters to workflows through the API. Just to know
Hello,
Just a quick message to ask if you had the time to work on passing
parameters to workflows through the API. Just to know, since I'm using
workflows with the API.
Best,
L-A
, Louise-Amélie Schmitt wrote:
Hello everyone,
I'm working on a script that uploads files and launches workflows on
them, but I keep getting errors that appear more or less randomly when
the display() and submit() functions are called. In a nutshell, there
is a 1/3 chance the calls fail this way
Hello everyone,
I'm working on a script that uploads files and launches workflows on
them, but I keep getting errors that appear more or less randomly when
the display() and submit() functions are called. In a nutshell, there is
a 1/3 chance the calls fail this way.
Nevertheless, the actions
Hello everyone,
I'm currently trying to automate data loading and pre-processing through
Galaxy's API, and I will need to delete and share histories at some
point. I know the API is still very new but is there a way to do that by
any chance?
Thanks,
L-A
Hi,
Could you please show us the contents of your pg_hba.conf file? This is
where you configure which machines can access the database through the
network.
Best,
L-A
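For anyone hitting the same thing, an entry in pg_hba.conf allowing a Galaxy host to reach the database over the network typically looks like this — the database name, user, and address below are placeholders, not Gus's actual settings:

```conf
# TYPE  DATABASE  USER    ADDRESS          METHOD
host    galaxy    galaxy  192.168.1.10/32  md5
```

After editing, the server needs a reload (e.g. `pg_ctl reload`) for the change to take effect.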
On 27/06/2011 20:33, Marco Moretto wrote:
Hi Gus,
it seems that your postgres is not configured to accept connection
Hi,
I ran into a little something that is a bit annoying for debug when
trying to upload files through the API with
library_upload_from_import_dir.py. When the specified folder is wrong,
python tries to process the error tuple like a dict, so the original
error is hard to find.
I modified
)
AuthType Basic
AuthBasicProvider ldap
AuthLDAPURL ldap://ocs.embl.org/cn=Users,dc=embl,dc=org?uid;
AuthzLDAPAuthoritative off
Satisfy any
Allow from all
</Location>
L-A
On 17/06/2011 14:14, Louise-Amélie Schmitt wrote:
Hi everyone
I'm currently
Hi,
Since I haven't updated Galaxy for a while now I don't know if it was
actually fixed but I had issues with my default-selected checkboxes:
When I deselected them, the value sent in the query remained as if they
were still selected. Even when I re-ran the job, all the checkboxes were
On 02/06/2011 21:23, Nate Coraor wrote:
Louise-Amélie Schmitt wrote:
On Thu, 2 Jun 2011 11:48:55 -0400, Nate Coraor n...@bx.psu.edu wrote:
Greg Von Kuster wrote:
Hello Louise,
I've CC'd Nate on this as he may be able to help - although no
guarantees. I'm not expert enough in this area
can help as well.
It is likely that LDAP is playing a role in this behavior.
On Apr 14, 2011, at 1:00 PM, Louise-Amélie Schmitt wrote:
The thing is, we use LDAP login so we can't even access the website
without logging in.
Moreover, when I logged in, I arrived on the data analysis
Hi everyone
First let me thank all the Team Galaxy, the conference was really great.
Kanwei asked me to send the tool I told him about. It's one of these
tools for which you can't know the exact number of output datasets
before tool run. Here are the files. It's really simple actually, but it
Hi,
Have you tried grep'ing for the error message in the lib files to see where
Galaxy goes during the upload of these files?
like: grep "The uploaded binary file contains inappropriate content"
`find *`
I had a similar problem when I modified Galaxy to manage .gz files
without uncompressing
Well I'm still missing something...
Is there a way to use the conditional tag sets in the tool's XML with
it? To suppress the unnecessary options for a given file type? Or more
than one?
regards,
L-A
On 20/05/2011 13:15, Louise-Amélie Schmitt wrote:
It's exactly what I needed
the
same thing so jobs don't crash when resources requested are more than
available?
regards,
Leandro
2011/5/19 Louise-Amélie Schmitt louise-amelie.schm...@embl.de
Hi,
In a previous message, I explained how I did to multithreads
certain jobs
Louise-Amélie Schmitt louise-amelie.schm...@embl.de
Hi again Leandro
Well I might not have been really clear, perhaps I should have
re-read the mail before posting it :)
The thing is, it was not an issue of Torque starting jobs when
in advance for your help
Regards,
L-A
On Tuesday, 12 April 2011 at 16:41 +0200, Louise-Amélie Schmitt wrote:
Hello everyone
I have a couple of questions regarding user and dataset management.
1) We use LDAP for user registration and login; would it be possible
to retrieve automatically
, Louise-Amélie Schmitt wrote:
Ooops, this is very right, I totally forgot about that. It would have
become problematic at some point I guess. Thank you for pointing this
out!
I changed it so the new database is associated with brand new
appropriate directories. (and dropped and re-created
the same error message with supposedly nonexistent
histories.
Thanks again
L-A
On Friday, 15 April 2011 at 13:47 +0200, Hans-Rudolf Hotz wrote:
On 04/15/2011 10:56 AM, Louise-Amélie Schmitt wrote:
Hi Hans, thanks for your reply.
- is your PostgreSQL database in sync with the database
12:06:55 -0400, Greg Von Kuster g...@bx.psu.edu
wrote:
On Apr 14, 2011, at 11:49 AM, Louise-Amélie Schmitt wrote:
Here is the result I got from the debug statements:
galaxy.web.controllers.library_common DEBUG 2011-04-14 17:46:02,286 ###
history: None
This is the problem - when you
On Fri, 8 Apr 2011 16:33:58 -0400, Kanwei Li kan...@gmail.com wrote:
Hi Louise,
Have you considered doing a SQL dump and import? Sounds easier to me
than writing a perl script ;)
-K
Oh yes I did... Yeah, that solution sounded so sweet that it was the first
thing I tried. But don't be
Hello everyone
I just met a huge problem concerning the database. I'm currently trying
to transfer my data from MySQL to PostgreSQL by writing a Perl script
that would do the job.
Here is the issue: In the form_definition table, one of the field
identifiers is desc, which is a reserved SQL
On Friday, 08 April 2011 at 11:12 -0400, Sean Davis wrote:
2011/4/8 Louise-Amélie Schmitt louise-amelie.schm...@embl.de:
Hello everyone
I just met a huge problem concerning the database. I'm currently trying
to transfer my data from MySQL to PostgreSQL by writing a Perl script
Hello
We managed to install Galaxy according to the unified method, with the
runner and the web application running on separate machines sharing by
NFS the same storage space where the Galaxy files are.
The thing is, the data must be saved in another NFS storage space so we
had to define the